Sphinx-1.3.6/0000775000175000017500000000000012665046155014134 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/EXAMPLES0000664000175000017500000002476012665045070015301 0ustar vagrantvagrant00000000000000Projects using Sphinx ===================== This is an (incomplete) alphabetic list of projects that use Sphinx or are experimenting with using it for their documentation. If you like to be included, please mail to `the Google group `_. I've grouped the list into sections to make it easier to find interesting examples. Documentation using the default theme ------------------------------------- * APSW: http://apidoc.apsw.googlecode.com/hg/index.html * ASE: https://wiki.fysik.dtu.dk/ase/ * Calibre: http://manual.calibre-ebook.com/ * CodePy: http://documen.tician.de/codepy/ * Cython: http://docs.cython.org/ * Cormoran: http://cormoran.nhopkg.org/docs/ * Director: http://pythonhosted.org/director/ * Dirigible: http://www.projectdirigible.com/documentation/ * F2py: http://f2py.sourceforge.net/docs/ * GeoDjango: https://docs.djangoproject.com/en/dev/ref/contrib/gis/ * Genomedata: http://noble.gs.washington.edu/proj/genomedata/doc/1.2.2/genomedata.html * gevent: http://www.gevent.org/ * Google Wave API: http://wave-robot-python-client.googlecode.com/svn/trunk/pydocs/index.html * GSL Shell: http://www.nongnu.org/gsl-shell/ * Heapkeeper: http://heapkeeper.org/ * Hands-on Python Tutorial: http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/ * Hedge: http://documen.tician.de/hedge/ * Leo: http://leoeditor.com/ * Lino: http://lino.saffre-rumma.net/ * MeshPy: http://documen.tician.de/meshpy/ * mpmath: http://mpmath.googlecode.com/svn/trunk/doc/build/index.html * OpenEXR: http://excamera.com/articles/26/doc/index.html * OpenGDA: http://www.opengda.org/gdadoc/html/ * openWNS: http://docs.openwns.org/ * Paste: http://pythonpaste.org/script/ * Paver: http://paver.github.io/paver/ * Pioneers and Prominent Men of Utah: http://pioneers.rstebbing.com/ * PyCantonese: http://pycantonese.github.io/ * Pyccuracy: https://github.com/heynemann/pyccuracy/wiki/ * PyCuda: http://documen.tician.de/pycuda/ * Pyevolve: http://pyevolve.sourceforge.net/ * Pylo: http://documen.tician.de/pylo/ * PyMQI: http://pythonhosted.org/pymqi/ * PyPubSub: http://pubsub.sourceforge.net/ * pySPACE: http://pyspace.github.io/pyspace/ * Python: https://docs.python.org/ * python-apt: http://apt.alioth.debian.org/python-apt-doc/ * PyUblas: http://documen.tician.de/pyublas/ * Quex: http://quex.sourceforge.net/doc/html/main.html * Scapy: http://www.secdev.org/projects/scapy/doc/ * Segway: http://noble.gs.washington.edu/proj/segway/doc/1.1.0/segway.html * SimPy: http://simpy.readthedocs.org/en/latest/ * SymPy: http://docs.sympy.org/ * WTForms: http://wtforms.simplecodes.com/docs/ * z3c: http://www.ibiblio.org/paulcarduner/z3ctutorial/ Documentation using a customized version of the default theme ------------------------------------------------------------- * Advanced Generic Widgets: http://xoomer.virgilio.it/infinity77/AGW_Docs/index.html * Bazaar: http://doc.bazaar.canonical.com/en/ * CakePHP: http://book.cakephp.org/2.0/en/index.html * Chaco: http://docs.enthought.com/chaco/ * Chef: https://docs.chef.io/index.html * Djagios: http://djagios.org/ * EZ-Draw: http://pageperso.lif.univ-mrs.fr/~edouard.thiel/ez-draw/doc/en/html/ez-manual.html * GetFEM++: http://home.gna.org/getfem/ * Google or-tools: https://or-tools.googlecode.com/svn/trunk/documentation/user_manual/index.html * GPAW: https://wiki.fysik.dtu.dk/gpaw/ * Grok: 
http://grok.zope.org/doc/current/ * IFM: http://fluffybunny.memebot.com/ifm-docs/index.html * Kaa: http://api.freevo.org/kaa-base/ * LEPL: http://www.acooke.org/lepl/ * Mayavi: http://docs.enthought.com/mayavi/mayavi/ * NICOS: http://trac.frm2.tum.de/nicos/doc/nicos-master/index.html * NOC: http://redmine.nocproject.org/projects/noc * NumPy: http://docs.scipy.org/doc/numpy/reference/ * OpenCV: http://docs.opencv.org/ * Peach^3: http://peach3.nl/doc/latest/userdoc/ * Sage: http://sagemath.org/doc/ * SciPy: http://docs.scipy.org/doc/scipy/reference/ * simuPOP: http://simupop.sourceforge.net/manual_release/build/userGuide.html * Sprox: http://sprox.org/ * TurboGears: http://turbogears.org/2.0/docs/ * Varnish: https://www.varnish-cache.org/docs/ * Zentyal: http://doc.zentyal.org/ * Zope: http://docs.zope.org/zope2/index.html * zc.async: http://pythonhosted.org/zc.async/1.5.0/ Documentation using the sphinxdoc theme --------------------------------------- * Fityk: http://fityk.nieto.pl/ * MapServer: http://mapserver.org/ * Matplotlib: http://matplotlib.org/ * Music21: http://web.mit.edu/music21/doc/index.html * NetworkX: http://networkx.github.io/ * Pweave: http://mpastell.com/pweave/ * Pyre: http://docs.danse.us/pyre/sphinx/ * Pysparse: http://pysparse.sourceforge.net/ * PyTango: http://www.esrf.eu/computing/cs/tango/tango_doc/kernel_doc/pytango/latest/index.html * Python Wild Magic: http://vmlaker.github.io/pythonwildmagic/ * Reteisi: http://www.reteisi.org/contents.html * Sqlkit: http://sqlkit.argolinux.org/ * Tau: http://www.tango-controls.org/static/tau/latest/doc/html/index.html * Turbulenz: http://docs.turbulenz.com/ * WebFaction: http://docs.webfaction.com/ Documentation using another builtin theme ----------------------------------------- * C/C++ Development with Eclipse: http://eclipsebook.in/ (agogo) * Jinja: http://jinja.pocoo.org/ (scrolls) * jsFiddle: http://doc.jsfiddle.net/ (nature) * libLAS: http://www.liblas.org/ (nature) * MPipe: http://vmlaker.github.io/mpipe/ (sphinx13) * pip: https://pip.pypa.io/en/latest/ (sphinx_rtd_theme) * Pyramid web framework: http://docs.pylonsproject.org/projects/pyramid/en/latest/ (pyramid) * Programmieren mit PyGTK und Glade (German): http://www.florian-diesch.de/doc/python-und-glade/online/ (agogo) * Satchmo: http://docs.satchmoproject.com/en/latest/ (sphinx_rtd_theme) * Setuptools: http://pythonhosted.org/setuptools/ (nature) * Spring Python: http://docs.spring.io/spring-python/1.2.x/sphinx/html/ (nature) * sqlparse: http://python-sqlparse.googlecode.com/svn/docs/api/index.html (agogo) * Sylli: http://sylli.sourceforge.net/ (nature) * Tuleap Open ALM: https://tuleap.net/doc/en/ (nature) * Valence: http://docs.valence.desire2learn.com/ (haiku) Documentation using a custom theme/integrated in a site ------------------------------------------------------- * Blender: http://www.blender.org/documentation/250PythonDoc/ * Blinker: http://discorporate.us/projects/Blinker/docs/ * Ceph: http://ceph.com/docs/master/ * Classy: http://www.pocoo.org/projects/classy/ * DEAP: http://deap.gel.ulaval.ca/doc/0.8/index.html * Django: http://docs.djangoproject.com/ * Elemental: http://libelemental.org/documentation/dev/index.html * Enterprise Toolkit for Acrobat products: http://www.adobe.com/devnet-docs/acrobatetk/ * e-cidadania: http://e-cidadania.readthedocs.org/en/latest/ * Flask: http://flask.pocoo.org/docs/ * Flask-OpenID: http://pythonhosted.org/Flask-OpenID/ * Gameduino: http://excamera.com/sphinx/gameduino/ * GeoServer: http://docs.geoserver.org/ * 
Glashammer: http://glashammer.org/ * Istihza (Turkish Python documentation project): http://www.istihza.com/py2/icindekiler_python.html * Lasso: http://lassoguide.com/ * Manage documentation such as source code (fr): http://redaction-technique.org/ * MathJax: http://docs.mathjax.org/en/latest/ * MirrorBrain: http://mirrorbrain.org/docs/ * MyHDL: http://docs.myhdl.org/en/latest/ * nose: http://nose.readthedocs.org/en/latest/ * NoTex: https://notex.ch/overview/ * ObjectListView: http://objectlistview.sourceforge.net/python/ * Open ERP: http://doc.openerp.com/ * OpenCV: http://docs.opencv.org/ * Open Dylan: http://opendylan.org/documentation/ and also provides a `dylan domain `__ * OpenLayers: http://docs.openlayers.org/ * PyEphem: http://rhodesmill.org/pyephem/ * German Plone user manual: http://www.hasecke.com/plone-benutzerhandbuch/ * PSI4: http://sirius.chem.vt.edu/psi4manual/latest/index.html * Pylons: http://docs.pylonsproject.org/projects/pylons-webframework/en/latest/ * PyMOTW: http://pymotw.com/2/ * python-aspectlib: http://python-aspectlib.readthedocs.org/en/latest/ (`sphinx-py3doc-enhanced-theme`_) * QGIS: http://qgis.org/en/docs/index.html * qooxdoo: http://manual.qooxdoo.org/current/ * Roundup: http://www.roundup-tracker.org/ * Selenium: http://docs.seleniumhq.org/docs/ * Self: http://selflanguage.org/ * Substance D: http://docs.pylonsproject.org/projects/substanced/en/latest/ * Tablib: http://tablib.org/ * SQLAlchemy: http://www.sqlalchemy.org/docs/ * tinyTiM: http://tinytim.sourceforge.net/docs/2.0/ * Ubuntu packaging guide: http://packaging.ubuntu.com/html/ * Werkzeug: http://werkzeug.pocoo.org/docs/ * WFront: http://discorporate.us/projects/WFront/ .. _sphinx-py3doc-enhanced-theme: https://pypi.python.org/pypi/sphinx_py3doc_enhanced_theme Homepages and other non-documentation sites ------------------------------------------- * Applied Mathematics at the Stellenbosch University: http://dip.sun.ac.za/ * A personal page: http://www.dehlia.in/ * Benoit Boissinot: http://bboissin.appspot.com/ * lunarsite: http://lunaryorn.de/ * Red Hot Chili Python: http://redhotchilipython.com/ * The Wine Cellar Book: http://www.thewinecellarbook.com/doc/en/ * UC Berkeley Advanced Control Systems course: http://www.me.berkeley.edu/ME233/sp14/ * VOR: http://www.vor-cycling.be/ Books produced using Sphinx --------------------------- * "The ``repoze.bfg`` Web Application Framework": http://www.amazon.com/repoze-bfg-Web-Application-Framework-Version/dp/0615345379 * A Theoretical Physics Reference book: http://theoretical-physics.net/ * "Simple and Steady Way of Learning for Software Engineering" (in Japanese): http://www.amazon.co.jp/dp/477414259X/ * "Expert Python Programming": http://www.packtpub.com/expert-python-programming/book * "Expert Python Programming" (Japanese translation): http://www.amazon.co.jp/dp/4048686291/ * "Pomodoro Technique Illustrated" (Japanese translation): http://www.amazon.co.jp/dp/4048689525/ * "Python Professional Programming" (in Japanese): http://www.amazon.co.jp/dp/4798032948/ * "Die Wahrheit des Sehens. 
Der DEKALOG von Krzysztof Kieślowski": http://www.hasecke.eu/Dekalog/ * The "Varnish Book": https://www.varnish-software.com/static/book/ * "Learning Sphinx" (in Japanese): http://www.oreilly.co.jp/books/9784873116488/ * "LassoGuide": http://www.lassosoft.com/Lasso-Documentation * "Software-Dokumentation mit Sphinx": http://www.amazon.de/dp/1497448689/ Thesis using Sphinx ------------------- * "A Web-Based System for Comparative Analysis of OpenStreetMap Data by the Use of CouchDB": https://www.yumpu.com/et/document/view/11722645/masterthesis-markusmayr-0542042 Sphinx-1.3.6/LICENSE0000664000175000017500000003203212660074372015136 0ustar vagrantvagrant00000000000000License for Sphinx ================== Copyright (c) 2007-2016 by the Sphinx team (see AUTHORS file). All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Licenses for incorporated software ================================== The pgen2 package, included in this distribution under the name sphinx.pycode.pgen2, is available in the Python 2.6 distribution under the PSF license agreement for Python: ---------------------------------------------------------------------- Copyright © 2001-2008 Python Software Foundation; All Rights Reserved. PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2 -------------------------------------------- 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using Python 2.6 software in source or binary form and its associated documentation. 2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python 2.6 alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright © 2001-2008 Python Software Foundation; All Rights Reserved" are retained in Python 2.6 alone or in any derivative version prepared by Licensee. 3. 
In the event Licensee prepares a derivative work that is based on or incorporates Python 2.6 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python 2.6. 4. PSF is making Python 2.6 available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 2.6 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 2.6 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 2.6, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By copying, installing or otherwise using Python 2.6, Licensee agrees to be bound by the terms and conditions of this License Agreement. ---------------------------------------------------------------------- The included smartypants module, included as sphinx.util.smartypants, is available under the following license: ---------------------------------------------------------------------- SmartyPants_ license:: Copyright (c) 2003 John Gruber (http://daringfireball.net/) All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name "SmartyPants" nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. This software is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage. smartypants.py license:: smartypants.py is a derivative work of SmartyPants. 
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. This software is provided by the copyright holders and contributors "as is" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage. ---------------------------------------------------------------------- The ElementTree package, included in this distribution in test/etree13, is available under the following license: ---------------------------------------------------------------------- The ElementTree toolkit is Copyright (c) 1999-2007 by Fredrik Lundh By obtaining, using, and/or copying this software and/or its associated documentation, you agree that you have read, understood, and will comply with the following terms and conditions: Permission to use, copy, modify, and distribute this software and its associated documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appears in all copies, and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Secret Labs AB or the author not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. ---------------------------------------------------------------------- The included JQuery JavaScript library is available under the MIT license: ---------------------------------------------------------------------- Copyright (c) 2008 John Resig, http://jquery.com/ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ---------------------------------------------------------------------- The included Underscore JavaScript library is available under the MIT license: ---------------------------------------------------------------------- Copyright (c) 2009 Jeremy Ashkenas, DocumentCloud Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ------------------------------------------------------------------------------- The included implementation of NumpyDocstring._parse_numpydoc_see_also_section was derived from code under the following license: ------------------------------------------------------------------------------- Copyright (C) 2008 Stefan van der Walt , Pauli Virtanen Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
------------------------------------------------------------------------------- Sphinx-1.3.6/README.rst0000664000175000017500000000377312501733751015627 0ustar vagrantvagrant00000000000000================= README for Sphinx ================= This is the Sphinx documentation generator, see http://sphinx-doc.org/. Installing ========== Install from PyPI to use stable version:: pip install -U sphinx Install from PyPI to use beta version:: pip install -U --pre sphinx Install from newest dev version in stable branch:: pip install git+https://github.com/sphinx-doc/sphinx@stable Install from newest dev version in master branch:: pip install git+https://github.com/sphinx-doc/sphinx Install from cloned source:: pip install . Install from cloned source as editable:: pip install -e . Reading the docs ================ After installing:: cd doc make html Then, direct your browser to ``_build/html/index.html``. Or read them online at . Testing ======= To run the tests with the interpreter available as ``python``, use:: make test If you want to use a different interpreter, e.g. ``python3``, use:: PYTHON=python3 make test Continuous testing runs on travis: .. image:: https://travis-ci.org/sphinx-doc/sphinx.svg?branch=master :target: https://travis-ci.org/sphinx-doc/sphinx Contributing ============ #. Check for open issues or open a fresh issue to start a discussion around a feature idea or a bug. #. If you feel uncomfortable or uncertain about an issue or your changes, feel free to email sphinx-dev@googlegroups.com. #. Fork the repository on GitHub https://github.com/sphinx-doc/sphinx to start making your changes to the **master** branch for next major version, or **stable** branch for next minor version. #. Write a test which shows that the bug was fixed or that the feature works as expected. Use ``make test`` to run the test suite. #. Send a pull request and bug the maintainer until it gets merged and published. Make sure to add yourself to AUTHORS and the change to CHANGES . 
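
Building the documentation from Python
======================================

The ``make html`` step above can also be driven from Python directly.  The
snippet below is only a rough sketch: the paths are placeholders for a local
checkout, and ``sphinx.application.Sphinx`` is simply one convenient entry
point used here for illustration::

    # Hypothetical sketch -- adjust the placeholder paths to your checkout.
    from sphinx.application import Sphinx

    app = Sphinx(srcdir='doc',                      # reST sources
                 confdir='doc',                     # directory with conf.py
                 outdir='doc/_build/html',          # where the HTML ends up
                 doctreedir='doc/_build/doctrees',  # pickled doctree cache
                 buildername='html')
    app.build()

With these paths the output should land in the same ``_build/html`` tree
mentioned above.
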
Sphinx-1.3.6/babel.cfg0000664000175000017500000000107612477663665015702 0ustar vagrantvagrant00000000000000# How setup this file # http://babel.edgewall.org/wiki/Documentation/setup.html # this file description: # http://babel.edgewall.org/wiki/Documentation/messages.html#extraction-method-mapping-and-configuration # Extraction from Python source files [python: **.py] encoding = utf-8 # Extraction from Jinja2 HTML templates [jinja2: **/themes/**.html] encoding = utf-8 ignore_tags = script,style include_attrs = alt title summary # Extraction from Jinja2 XML templates [jinja2: **/themes/**.xml] # Extraction from JavaScript files [javascript: **.js] [javascript: **.js_t] Sphinx-1.3.6/setup.cfg0000664000175000017500000000112012665046155015747 0ustar vagrantvagrant00000000000000[egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 [aliases] release = egg_info -RDb '' upload = upload --sign --identity=36580288 [extract_messages] mapping_file = babel.cfg output_file = sphinx/locale/sphinx.pot keywords = _ l_ lazy_gettext [update_catalog] input_file = sphinx/locale/sphinx.pot domain = sphinx output_dir = sphinx/locale/ [compile_catalog] domain = sphinx directory = sphinx/locale/ [wheel] universal = 1 [flake8] max-line-length = 95 ignore = E113,E116,E221,E226,E241,E251 exclude = ez_setup.py,utils/*,tests/*,build/*,sphinx/search/*,sphinx/pycode/pgen2/* Sphinx-1.3.6/sphinx-autogen.py0000775000175000017500000000057212660074372017463 0ustar vagrantvagrant00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ Sphinx - Python documentation toolchain ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import sys if __name__ == '__main__': from sphinx.ext.autosummary.generate import main sys.exit(main(sys.argv)) Sphinx-1.3.6/sphinx/0000775000175000017500000000000012665046155015445 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/ext/0000775000175000017500000000000012665046155016245 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/ext/autosummary/0000775000175000017500000000000012665046155020633 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/ext/autosummary/__init__.py0000664000175000017500000004610512665045070022745 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.autosummary ~~~~~~~~~~~~~~~~~~~~~~ Sphinx extension that adds an autosummary:: directive, which can be used to generate function/method/attribute/etc. summary lists, similar to those output eg. by Epydoc and other API doc generation tools. An :autolink: role is also provided. autosummary directive --------------------- The autosummary directive has the form:: .. autosummary:: :nosignatures: :toctree: generated/ module.function_1 module.function_2 ... and it generates an output table (containing signatures, optionally) ======================== ============================================= module.function_1(args) Summary line from the docstring of function_1 module.function_2(args) Summary line from the docstring ... ======================== ============================================= If the :toctree: option is specified, files matching the function names are inserted to the toctree with the given prefix: generated/module.function_1 generated/module.function_2 ... Note: The file names contain the module:: or currentmodule:: prefixes. .. 
seealso:: autosummary_generate.py autolink role ------------- The autolink role functions as ``:obj:`` when the name referred can be resolved to a Python object, and otherwise it becomes simple emphasis. This can be used as the default role to make links 'smart'. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import os import re import sys import inspect import posixpath from types import ModuleType from six import text_type from docutils.parsers.rst import directives from docutils.statemachine import ViewList from docutils import nodes import sphinx from sphinx import addnodes from sphinx.util.compat import Directive from sphinx.pycode import ModuleAnalyzer, PycodeError from sphinx.ext.autodoc import Options # -- autosummary_toc node ------------------------------------------------------ class autosummary_toc(nodes.comment): pass def process_autosummary_toc(app, doctree): """Insert items described in autosummary:: to the TOC tree, but do not generate the toctree:: list. """ env = app.builder.env crawled = {} def crawl_toc(node, depth=1): crawled[node] = True for j, subnode in enumerate(node): try: if (isinstance(subnode, autosummary_toc) and isinstance(subnode[0], addnodes.toctree)): env.note_toctree(env.docname, subnode[0]) continue except IndexError: continue if not isinstance(subnode, nodes.section): continue if subnode not in crawled: crawl_toc(subnode, depth+1) crawl_toc(doctree) def autosummary_toc_visit_html(self, node): """Hide autosummary toctree list in HTML output.""" raise nodes.SkipNode def autosummary_noop(self, node): pass # -- autosummary_table node ---------------------------------------------------- class autosummary_table(nodes.comment): pass def autosummary_table_visit_html(self, node): """Make the first column of the table non-breaking.""" try: tbody = node[0][0][-1] for row in tbody: col1_entry = row[0] par = col1_entry[0] for j, subnode in enumerate(list(par)): if isinstance(subnode, nodes.Text): new_text = text_type(subnode.astext()) new_text = new_text.replace(u" ", u"\u00a0") par[j] = nodes.Text(new_text) except IndexError: pass # -- autodoc integration ------------------------------------------------------- class FakeDirective: env = {} genopt = Options() def get_documenter(obj, parent): """Get an autodoc.Documenter class suitable for documenting the given object. *obj* is the Python object to be documented, and *parent* is an another Python object (e.g. a module or a class) to which *obj* belongs to. """ from sphinx.ext.autodoc import AutoDirective, DataDocumenter, \ ModuleDocumenter if inspect.ismodule(obj): # ModuleDocumenter.can_document_member always returns False return ModuleDocumenter # Construct a fake documenter for *parent* if parent is not None: parent_doc_cls = get_documenter(parent, None) else: parent_doc_cls = ModuleDocumenter if hasattr(parent, '__name__'): parent_doc = parent_doc_cls(FakeDirective(), parent.__name__) else: parent_doc = parent_doc_cls(FakeDirective(), "") # Get the corrent documenter class for *obj* classes = [cls for cls in AutoDirective._registry.values() if cls.can_document_member(obj, '', False, parent_doc)] if classes: classes.sort(key=lambda cls: cls.priority) return classes[-1] else: return DataDocumenter # -- .. autosummary:: ---------------------------------------------------------- class Autosummary(Directive): """ Pretty table containing short signatures and summaries of functions etc. autosummary can also optionally generate a hidden toctree:: node. 
""" required_arguments = 0 optional_arguments = 0 final_argument_whitespace = False has_content = True option_spec = { 'toctree': directives.unchanged, 'nosignatures': directives.flag, 'template': directives.unchanged, } def warn(self, msg): self.warnings.append(self.state.document.reporter.warning( msg, line=self.lineno)) def run(self): self.env = env = self.state.document.settings.env self.genopt = Options() self.warnings = [] self.result = ViewList() names = [x.strip().split()[0] for x in self.content if x.strip() and re.search(r'^[~a-zA-Z_]', x.strip()[0])] items = self.get_items(names) nodes = self.get_table(items) if 'toctree' in self.options: dirname = posixpath.dirname(env.docname) tree_prefix = self.options['toctree'].strip() docnames = [] for name, sig, summary, real_name in items: docname = posixpath.join(tree_prefix, real_name) docname = posixpath.normpath(posixpath.join(dirname, docname)) if docname not in env.found_docs: self.warn('toctree references unknown document %r' % docname) docnames.append(docname) tocnode = addnodes.toctree() tocnode['includefiles'] = docnames tocnode['entries'] = [(None, docn) for docn in docnames] tocnode['maxdepth'] = -1 tocnode['glob'] = None tocnode = autosummary_toc('', '', tocnode) nodes.append(tocnode) return self.warnings + nodes def get_items(self, names): """Try to import the given names, and return a list of ``[(name, signature, summary_string, real_name), ...]``. """ env = self.state.document.settings.env prefixes = get_import_prefixes_from_env(env) items = [] max_item_chars = 50 for name in names: display_name = name if name.startswith('~'): name = name[1:] display_name = name.split('.')[-1] try: real_name, obj, parent, modname = import_by_name(name, prefixes=prefixes) except ImportError: self.warn('failed to import %s' % name) items.append((name, '', '', name)) continue self.result = ViewList() # initialize for each documenter full_name = real_name if not isinstance(obj, ModuleType): # give explicitly separated module name, so that members # of inner classes can be documented full_name = modname + '::' + full_name[len(modname)+1:] # NB. using full_name here is important, since Documenters # handle module prefixes slightly differently documenter = get_documenter(obj, parent)(self, full_name) if not documenter.parse_name(): self.warn('failed to parse name %s' % real_name) items.append((display_name, '', '', real_name)) continue if not documenter.import_object(): self.warn('failed to import object %s' % real_name) items.append((display_name, '', '', real_name)) continue if documenter.options.members and not documenter.check_module(): continue # try to also get a source code analyzer for attribute docs try: documenter.analyzer = ModuleAnalyzer.for_module( documenter.get_real_modname()) # parse right now, to get PycodeErrors on parsing (results will # be cached anyway) documenter.analyzer.find_attr_docs() except PycodeError as err: documenter.env.app.debug( '[autodoc] module analyzer failed: %s', err) # no source file -- e.g. 
for builtin and C modules documenter.analyzer = None # -- Grab the signature sig = documenter.format_signature() if not sig: sig = '' else: max_chars = max(10, max_item_chars - len(display_name)) sig = mangle_signature(sig, max_chars=max_chars) sig = sig.replace('*', r'\*') # -- Grab the summary documenter.add_content(None) doc = list(documenter.process_doc([self.result.data])) while doc and not doc[0].strip(): doc.pop(0) # If there's a blank line, then we can assume the first sentence / # paragraph has ended, so anything after shouldn't be part of the # summary for i, piece in enumerate(doc): if not piece.strip(): doc = doc[:i] break # Try to find the "first sentence", which may span multiple lines m = re.search(r"^([A-Z].*?\.)(?:\s|$)", " ".join(doc).strip()) if m: summary = m.group(1).strip() elif doc: summary = doc[0].strip() else: summary = '' items.append((display_name, sig, summary, real_name)) return items def get_table(self, items): """Generate a proper list of table nodes for autosummary:: directive. *items* is a list produced by :meth:`get_items`. """ table_spec = addnodes.tabular_col_spec() table_spec['spec'] = 'll' table = autosummary_table('') real_table = nodes.table('', classes=['longtable']) table.append(real_table) group = nodes.tgroup('', cols=2) real_table.append(group) group.append(nodes.colspec('', colwidth=10)) group.append(nodes.colspec('', colwidth=90)) body = nodes.tbody('') group.append(body) def append_row(*column_texts): row = nodes.row('') for text in column_texts: node = nodes.paragraph('') vl = ViewList() vl.append(text, '') self.state.nested_parse(vl, 0, node) try: if isinstance(node[0], nodes.paragraph): node = node[0] except IndexError: pass row.append(nodes.entry('', node)) body.append(row) for name, sig, summary, real_name in items: qualifier = 'obj' if 'nosignatures' not in self.options: col1 = ':%s:`%s <%s>`\ %s' % (qualifier, name, real_name, sig) else: col1 = ':%s:`%s <%s>`' % (qualifier, name, real_name) col2 = summary append_row(col1, col2) return [table_spec, table] def mangle_signature(sig, max_chars=30): """Reformat a function signature to a more compact form.""" s = re.sub(r"^\((.*)\)$", r"\1", sig).strip() # Strip strings (which can contain things that confuse the code below) s = re.sub(r"\\\\", "", s) s = re.sub(r"\\'", "", s) s = re.sub(r"'[^']*'", "", s) # Parse the signature to arguments + options args = [] opts = [] opt_re = re.compile(r"^(.*, |)([a-zA-Z0-9_*]+)=") while s: m = opt_re.search(s) if not m: # The rest are arguments args = s.split(', ') break opts.insert(0, m.group(2)) s = m.group(1)[:-2] # Produce a more compact signature sig = limited_join(", ", args, max_chars=max_chars-2) if opts: if not sig: sig = "[%s]" % limited_join(", ", opts, max_chars=max_chars-4) elif len(sig) < max_chars - 4 - 2 - 3: sig += "[, %s]" % limited_join(", ", opts, max_chars=max_chars-len(sig)-4-2) return u"(%s)" % sig def limited_join(sep, items, max_chars=30, overflow_marker="..."): """Join a number of strings to one, limiting the length to *max_chars*. If the string overflows this limit, replace the last fitting item by *overflow_marker*. 
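
    For illustration (the values below are chosen only as an example, using
    the default overflow marker)::

        >>> limited_join(", ", ["alpha", "beta", "gamma"], max_chars=16)
        'alpha, ...'
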
Returns: joined_string """ full_str = sep.join(items) if len(full_str) < max_chars: return full_str n_chars = 0 n_items = 0 for j, item in enumerate(items): n_chars += len(item) + len(sep) if n_chars < max_chars - len(overflow_marker): n_items += 1 else: break return sep.join(list(items[:n_items]) + [overflow_marker]) # -- Importing items ----------------------------------------------------------- def get_import_prefixes_from_env(env): """ Obtain current Python import prefixes (for `import_by_name`) from ``document.env`` """ prefixes = [None] currmodule = env.ref_context.get('py:module') if currmodule: prefixes.insert(0, currmodule) currclass = env.ref_context.get('py:class') if currclass: if currmodule: prefixes.insert(0, currmodule + "." + currclass) else: prefixes.insert(0, currclass) return prefixes def import_by_name(name, prefixes=[None]): """Import a Python object that has the given *name*, under one of the *prefixes*. The first name that succeeds is used. """ tried = [] for prefix in prefixes: try: if prefix: prefixed_name = '.'.join([prefix, name]) else: prefixed_name = name obj, parent, modname = _import_by_name(prefixed_name) return prefixed_name, obj, parent, modname except ImportError: tried.append(prefixed_name) raise ImportError('no module named %s' % ' or '.join(tried)) def _import_by_name(name): """Import a Python object given its full name.""" try: name_parts = name.split('.') # try first interpret `name` as MODNAME.OBJ modname = '.'.join(name_parts[:-1]) if modname: try: __import__(modname) mod = sys.modules[modname] return getattr(mod, name_parts[-1]), mod, modname except (ImportError, IndexError, AttributeError): pass # ... then as MODNAME, MODNAME.OBJ1, MODNAME.OBJ1.OBJ2, ... last_j = 0 modname = None for j in reversed(range(1, len(name_parts)+1)): last_j = j modname = '.'.join(name_parts[:j]) try: __import__(modname) except ImportError: continue if modname in sys.modules: break if last_j < len(name_parts): parent = None obj = sys.modules[modname] for obj_name in name_parts[last_j:]: parent = obj obj = getattr(obj, obj_name) return obj, parent, modname else: return sys.modules[modname], None, modname except (ValueError, ImportError, AttributeError, KeyError) as e: raise ImportError(*e.args) # -- :autolink: (smart default role) ------------------------------------------- def autolink_role(typ, rawtext, etext, lineno, inliner, options={}, content=[]): """Smart linking role. Expands to ':obj:`text`' if `text` is an object that can be imported; otherwise expands to '*text*'. 
""" env = inliner.document.settings.env r = env.get_domain('py').role('obj')( 'obj', rawtext, etext, lineno, inliner, options, content) pnode = r[0][0] prefixes = get_import_prefixes_from_env(env) try: name, obj, parent, modname = import_by_name(pnode['reftarget'], prefixes) except ImportError: content = pnode[0] r[0][0] = nodes.emphasis(rawtext, content[0].astext(), classes=content['classes']) return r def process_generate_options(app): genfiles = app.config.autosummary_generate if genfiles and not hasattr(genfiles, '__len__'): env = app.builder.env genfiles = [env.doc2path(x, base=None) for x in env.found_docs if os.path.isfile(env.doc2path(x))] if not genfiles: return from sphinx.ext.autosummary.generate import generate_autosummary_docs ext = app.config.source_suffix[0] genfiles = [genfile + (not genfile.endswith(ext) and ext or '') for genfile in genfiles] generate_autosummary_docs(genfiles, builder=app.builder, warn=app.warn, info=app.info, suffix=ext, base_path=app.srcdir) def setup(app): # I need autodoc app.setup_extension('sphinx.ext.autodoc') app.add_node(autosummary_toc, html=(autosummary_toc_visit_html, autosummary_noop), latex=(autosummary_noop, autosummary_noop), text=(autosummary_noop, autosummary_noop), man=(autosummary_noop, autosummary_noop), texinfo=(autosummary_noop, autosummary_noop)) app.add_node(autosummary_table, html=(autosummary_table_visit_html, autosummary_noop), latex=(autosummary_noop, autosummary_noop), text=(autosummary_noop, autosummary_noop), man=(autosummary_noop, autosummary_noop), texinfo=(autosummary_noop, autosummary_noop)) app.add_directive('autosummary', Autosummary) app.add_role('autolink', autolink_role) app.connect('doctree-read', process_autosummary_toc) app.connect('builder-inited', process_generate_options) app.add_config_value('autosummary_generate', [], True) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/autosummary/templates/0000775000175000017500000000000012665046155022631 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/ext/autosummary/templates/autosummary/0000775000175000017500000000000012665046155025217 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/ext/autosummary/templates/autosummary/module.rst0000664000175000017500000000117312477663665027254 0ustar vagrantvagrant00000000000000{{ fullname }} {{ underline }} .. automodule:: {{ fullname }} {% block functions %} {% if functions %} .. rubric:: Functions .. autosummary:: {% for item in functions %} {{ item }} {%- endfor %} {% endif %} {% endblock %} {% block classes %} {% if classes %} .. rubric:: Classes .. autosummary:: {% for item in classes %} {{ item }} {%- endfor %} {% endif %} {% endblock %} {% block exceptions %} {% if exceptions %} .. rubric:: Exceptions .. autosummary:: {% for item in exceptions %} {{ item }} {%- endfor %} {% endif %} {% endblock %} Sphinx-1.3.6/sphinx/ext/autosummary/templates/autosummary/class.rst0000664000175000017500000000101712477663665027071 0ustar vagrantvagrant00000000000000{{ fullname }} {{ underline }} .. currentmodule:: {{ module }} .. autoclass:: {{ objname }} {% block methods %} .. automethod:: __init__ {% if methods %} .. rubric:: Methods .. autosummary:: {% for item in methods %} ~{{ name }}.{{ item }} {%- endfor %} {% endif %} {% endblock %} {% block attributes %} {% if attributes %} .. rubric:: Attributes .. 
autosummary:: {% for item in attributes %} ~{{ name }}.{{ item }} {%- endfor %} {% endif %} {% endblock %} Sphinx-1.3.6/sphinx/ext/autosummary/templates/autosummary/base.rst0000664000175000017500000000014612477663665026700 0ustar vagrantvagrant00000000000000{{ fullname }} {{ underline }} .. currentmodule:: {{ module }} .. auto{{ objtype }}:: {{ objname }} Sphinx-1.3.6/sphinx/ext/autosummary/generate.py0000664000175000017500000002667312660074372023012 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.autosummary.generate ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Usable as a library or script to generate automatic RST source files for items referred to in autosummary:: directives. Each generated RST file contains a single auto*:: directive which extracts the docstring of the referred item. Example Makefile rule:: generate: sphinx-autogen -o source/generated source/*.rst :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from __future__ import print_function import os import re import sys import pydoc import optparse import codecs from jinja2 import FileSystemLoader, TemplateNotFound from jinja2.sandbox import SandboxedEnvironment from sphinx import package_dir from sphinx.ext.autosummary import import_by_name, get_documenter from sphinx.jinja2glue import BuiltinTemplateLoader from sphinx.util.osutil import ensuredir from sphinx.util.inspect import safe_getattr # Add documenters to AutoDirective registry from sphinx.ext.autodoc import add_documenter, \ ModuleDocumenter, ClassDocumenter, ExceptionDocumenter, DataDocumenter, \ FunctionDocumenter, MethodDocumenter, AttributeDocumenter, \ InstanceAttributeDocumenter add_documenter(ModuleDocumenter) add_documenter(ClassDocumenter) add_documenter(ExceptionDocumenter) add_documenter(DataDocumenter) add_documenter(FunctionDocumenter) add_documenter(MethodDocumenter) add_documenter(AttributeDocumenter) add_documenter(InstanceAttributeDocumenter) def main(argv=sys.argv): usage = """%prog [OPTIONS] SOURCEFILE ...""" p = optparse.OptionParser(usage.strip()) p.add_option("-o", "--output-dir", action="store", type="string", dest="output_dir", default=None, help="Directory to place all output in") p.add_option("-s", "--suffix", action="store", type="string", dest="suffix", default="rst", help="Default suffix for files (default: %default)") p.add_option("-t", "--templates", action="store", type="string", dest="templates", default=None, help="Custom template directory (default: %default)") options, args = p.parse_args(argv[1:]) if len(args) < 1: p.error('no input files given') generate_autosummary_docs(args, options.output_dir, "." 
+ options.suffix, template_dir=options.templates) def _simple_info(msg): print(msg) def _simple_warn(msg): print('WARNING: ' + msg, file=sys.stderr) # -- Generating output --------------------------------------------------------- def generate_autosummary_docs(sources, output_dir=None, suffix='.rst', warn=_simple_warn, info=_simple_info, base_path=None, builder=None, template_dir=None): showed_sources = list(sorted(sources)) if len(showed_sources) > 20: showed_sources = showed_sources[:10] + ['...'] + showed_sources[-10:] info('[autosummary] generating autosummary for: %s' % ', '.join(showed_sources)) if output_dir: info('[autosummary] writing to %s' % output_dir) if base_path is not None: sources = [os.path.join(base_path, filename) for filename in sources] # create our own templating environment template_dirs = [os.path.join(package_dir, 'ext', 'autosummary', 'templates')] if builder is not None: # allow the user to override the templates template_loader = BuiltinTemplateLoader() template_loader.init(builder, dirs=template_dirs) else: if template_dir: template_dirs.insert(0, template_dir) template_loader = FileSystemLoader(template_dirs) template_env = SandboxedEnvironment(loader=template_loader) # read items = find_autosummary_in_files(sources) # keep track of new files new_files = [] # write for name, path, template_name in sorted(set(items), key=str): if path is None: # The corresponding autosummary:: directive did not have # a :toctree: option continue path = output_dir or os.path.abspath(path) ensuredir(path) try: name, obj, parent, mod_name = import_by_name(name) except ImportError as e: warn('[autosummary] failed to import %r: %s' % (name, e)) continue fn = os.path.join(path, name + suffix) # skip it if it exists if os.path.isfile(fn): continue new_files.append(fn) with open(fn, 'w') as f: doc = get_documenter(obj, parent) if template_name is not None: template = template_env.get_template(template_name) else: try: template = template_env.get_template('autosummary/%s.rst' % doc.objtype) except TemplateNotFound: template = template_env.get_template('autosummary/base.rst') def get_members(obj, typ, include_public=[]): items = [] for name in dir(obj): try: documenter = get_documenter(safe_getattr(obj, name), obj) except AttributeError: continue if documenter.objtype == typ: items.append(name) public = [x for x in items if x in include_public or not x.startswith('_')] return public, items ns = {} if doc.objtype == 'module': ns['members'] = dir(obj) ns['functions'], ns['all_functions'] = \ get_members(obj, 'function') ns['classes'], ns['all_classes'] = \ get_members(obj, 'class') ns['exceptions'], ns['all_exceptions'] = \ get_members(obj, 'exception') elif doc.objtype == 'class': ns['members'] = dir(obj) ns['methods'], ns['all_methods'] = \ get_members(obj, 'method', ['__init__']) ns['attributes'], ns['all_attributes'] = \ get_members(obj, 'attribute') parts = name.split('.') if doc.objtype in ('method', 'attribute'): mod_name = '.'.join(parts[:-2]) cls_name = parts[-2] obj_name = '.'.join(parts[-2:]) ns['class'] = cls_name else: mod_name, obj_name = '.'.join(parts[:-1]), parts[-1] ns['fullname'] = name ns['module'] = mod_name ns['objname'] = obj_name ns['name'] = parts[-1] ns['objtype'] = doc.objtype ns['underline'] = len(name) * '=' rendered = template.render(**ns) f.write(rendered) # descend recursively to new files if new_files: generate_autosummary_docs(new_files, output_dir=output_dir, suffix=suffix, warn=warn, info=info, base_path=base_path, builder=builder, 
template_dir=template_dir) # -- Finding documented entries in files --------------------------------------- def find_autosummary_in_files(filenames): """Find out what items are documented in source/*.rst. See `find_autosummary_in_lines`. """ documented = [] for filename in filenames: with codecs.open(filename, 'r', encoding='utf-8', errors='ignore') as f: lines = f.read().splitlines() documented.extend(find_autosummary_in_lines(lines, filename=filename)) return documented def find_autosummary_in_docstring(name, module=None, filename=None): """Find out what items are documented in the given object's docstring. See `find_autosummary_in_lines`. """ try: real_name, obj, parent, modname = import_by_name(name) lines = pydoc.getdoc(obj).splitlines() return find_autosummary_in_lines(lines, module=name, filename=filename) except AttributeError: pass except ImportError as e: print("Failed to import '%s': %s" % (name, e)) except SystemExit as e: print("Failed to import '%s'; the module executes module level " "statement and it might call sys.exit()." % name) return [] def find_autosummary_in_lines(lines, module=None, filename=None): """Find out what items appear in autosummary:: directives in the given lines. Returns a list of (name, toctree, template) where *name* is a name of an object and *toctree* the :toctree: path of the corresponding autosummary directive (relative to the root of the file name), and *template* the value of the :template: option. *toctree* and *template* ``None`` if the directive does not have the corresponding options set. """ autosummary_re = re.compile(r'^(\s*)\.\.\s+autosummary::\s*') automodule_re = re.compile( r'^\s*\.\.\s+automodule::\s*([A-Za-z0-9_.]+)\s*$') module_re = re.compile( r'^\s*\.\.\s+(current)?module::\s*([a-zA-Z0-9_.]+)\s*$') autosummary_item_re = re.compile(r'^\s+(~?[_a-zA-Z][a-zA-Z0-9_.]*)\s*.*?') toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$') template_arg_re = re.compile(r'^\s+:template:\s*(.*?)\s*$') documented = [] toctree = None template = None current_module = module in_autosummary = False base_indent = "" for line in lines: if in_autosummary: m = toctree_arg_re.match(line) if m: toctree = m.group(1) if filename: toctree = os.path.join(os.path.dirname(filename), toctree) continue m = template_arg_re.match(line) if m: template = m.group(1).strip() continue if line.strip().startswith(':'): continue # skip options m = autosummary_item_re.match(line) if m: name = m.group(1).strip() if name.startswith('~'): name = name[1:] if current_module and \ not name.startswith(current_module + '.'): name = "%s.%s" % (current_module, name) documented.append((name, toctree, template)) continue if not line.strip() or line.startswith(base_indent + " "): continue in_autosummary = False m = autosummary_re.match(line) if m: in_autosummary = True base_indent = m.group(1) toctree = None template = None continue m = automodule_re.search(line) if m: current_module = m.group(1).strip() # recurse into the automodule docstring documented.extend(find_autosummary_in_docstring( current_module, filename=filename)) continue m = module_re.match(line) if m: current_module = m.group(2) continue return documented if __name__ == '__main__': main() Sphinx-1.3.6/sphinx/ext/__init__.py0000664000175000017500000000035012660074372020351 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext ~~~~~~~~~~ Contains Sphinx features not activated by default. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" Sphinx-1.3.6/sphinx/ext/doctest.py0000664000175000017500000004156512660074372020274 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.doctest ~~~~~~~~~~~~~~~~~~ Mimic doctest by automatically executing code snippets and checking their results. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from __future__ import absolute_import import re import sys import time import codecs from os import path import doctest from six import itervalues, StringIO, binary_type, text_type, PY2 from docutils import nodes from docutils.parsers.rst import directives import sphinx from sphinx.builders import Builder from sphinx.util import force_decode from sphinx.util.nodes import set_source_info from sphinx.util.compat import Directive from sphinx.util.console import bold from sphinx.util.osutil import fs_encoding blankline_re = re.compile(r'^\s*', re.MULTILINE) doctestopt_re = re.compile(r'#\s*doctest:.+$', re.MULTILINE) if PY2: def doctest_encode(text, encoding): if isinstance(text, text_type): text = text.encode(encoding) if text.startswith(codecs.BOM_UTF8): text = text[len(codecs.BOM_UTF8):] return text else: def doctest_encode(text, encoding): return text # set up the necessary directives class TestDirective(Directive): """ Base class for doctest-related directives. """ has_content = True required_arguments = 0 optional_arguments = 1 final_argument_whitespace = True def run(self): # use ordinary docutils nodes for test code: they get special attributes # so that our builder recognizes them, and the other builders are happy. code = '\n'.join(self.content) test = None if self.name == 'doctest': if '' in code: # convert s to ordinary blank lines for presentation test = code code = blankline_re.sub('', code) if doctestopt_re.search(code): if not test: test = code code = doctestopt_re.sub('', code) nodetype = nodes.literal_block if self.name in ('testsetup', 'testcleanup') or 'hide' in self.options: nodetype = nodes.comment if self.arguments: groups = [x.strip() for x in self.arguments[0].split(',')] else: groups = ['default'] node = nodetype(code, code, testnodetype=self.name, groups=groups) set_source_info(self, node) if test is not None: # only save if it differs from code node['test'] = test if self.name == 'testoutput': # don't try to highlight output node['language'] = 'none' node['options'] = {} if self.name in ('doctest', 'testoutput') and 'options' in self.options: # parse doctest-like output comparison flags option_strings = self.options['options'].replace(',', ' ').split() for option in option_strings: if (option[0] not in '+-' or option[1:] not in doctest.OPTIONFLAGS_BY_NAME): # XXX warn? 
continue flag = doctest.OPTIONFLAGS_BY_NAME[option[1:]] node['options'][flag] = (option[0] == '+') return [node] class TestsetupDirective(TestDirective): option_spec = {} class TestcleanupDirective(TestDirective): option_spec = {} class DoctestDirective(TestDirective): option_spec = { 'hide': directives.flag, 'options': directives.unchanged, } class TestcodeDirective(TestDirective): option_spec = { 'hide': directives.flag, } class TestoutputDirective(TestDirective): option_spec = { 'hide': directives.flag, 'options': directives.unchanged, } parser = doctest.DocTestParser() # helper classes class TestGroup(object): def __init__(self, name): self.name = name self.setup = [] self.tests = [] self.cleanup = [] def add_code(self, code, prepend=False): if code.type == 'testsetup': if prepend: self.setup.insert(0, code) else: self.setup.append(code) elif code.type == 'testcleanup': self.cleanup.append(code) elif code.type == 'doctest': self.tests.append([code]) elif code.type == 'testcode': self.tests.append([code, None]) elif code.type == 'testoutput': if self.tests and len(self.tests[-1]) == 2: self.tests[-1][1] = code else: raise RuntimeError('invalid TestCode type') def __repr__(self): return 'TestGroup(name=%r, setup=%r, cleanup=%r, tests=%r)' % ( self.name, self.setup, self.cleanup, self.tests) class TestCode(object): def __init__(self, code, type, lineno, options=None): self.code = code self.type = type self.lineno = lineno self.options = options or {} def __repr__(self): return 'TestCode(%r, %r, %r, options=%r)' % ( self.code, self.type, self.lineno, self.options) class SphinxDocTestRunner(doctest.DocTestRunner): def summarize(self, out, verbose=None): string_io = StringIO() old_stdout = sys.stdout sys.stdout = string_io try: res = doctest.DocTestRunner.summarize(self, verbose) finally: sys.stdout = old_stdout out(string_io.getvalue()) return res def _DocTestRunner__patched_linecache_getlines(self, filename, module_globals=None): # this is overridden from DocTestRunner adding the try-except below m = self._DocTestRunner__LINECACHE_FILENAME_RE.match(filename) if m and m.group('name') == self.test.name: try: example = self.test.examples[int(m.group('examplenum'))] # because we compile multiple doctest blocks with the same name # (viz. the group name) this might, for outer stack frames in a # traceback, get the wrong test which might not have enough examples except IndexError: pass else: return example.source.splitlines(True) return self.save_linecache_getlines(filename, module_globals) # the new builder -- use sphinx-build.py -b doctest to run class DocTestBuilder(Builder): """ Runs test snippets in the documentation. """ name = 'doctest' def init(self): # default options self.opt = doctest.DONT_ACCEPT_TRUE_FOR_1 | doctest.ELLIPSIS | \ doctest.IGNORE_EXCEPTION_DETAIL # HACK HACK HACK # doctest compiles its snippets with type 'single'. That is nice # for doctest examples but unusable for multi-statement code such # as setup code -- to be able to use doctest error reporting with # that code nevertheless, we monkey-patch the "compile" it uses. 
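# A hedged illustration (added comment, not part of the original module) of
# why the compile mode matters here: 'single' accepts only one statement,
# while 'exec' accepts a whole block.
#
#   compile("x = 1\ny = 2", "<doctest>", "single")   # raises SyntaxError
#   compile("x = 1\ny = 2", "<doctest>", "exec")     # compiles fine
#
# self.type is therefore switched to 'exec' for testsetup/testcode snippets
# and back to 'single' for ordinary doctests (see test_group below).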
doctest.compile = self.compile sys.path[0:0] = self.config.doctest_path self.type = 'single' self.total_failures = 0 self.total_tries = 0 self.setup_failures = 0 self.setup_tries = 0 self.cleanup_failures = 0 self.cleanup_tries = 0 date = time.strftime('%Y-%m-%d %H:%M:%S') self.outfile = codecs.open(path.join(self.outdir, 'output.txt'), 'w', encoding='utf-8') self.outfile.write('''\ Results of doctest builder run on %s ==================================%s ''' % (date, '='*len(date))) def _out(self, text): self.info(text, nonl=True) self.outfile.write(text) def _warn_out(self, text): if self.app.quiet or self.app.warningiserror: self.warn(text) else: self.info(text, nonl=True) if isinstance(text, binary_type): text = force_decode(text, None) self.outfile.write(text) def get_target_uri(self, docname, typ=None): return '' def get_outdated_docs(self): return self.env.found_docs def finish(self): # write executive summary def s(v): return v != 1 and 's' or '' repl = (self.total_tries, s(self.total_tries), self.total_failures, s(self.total_failures), self.setup_failures, s(self.setup_failures), self.cleanup_failures, s(self.cleanup_failures)) self._out(''' Doctest summary =============== %5d test%s %5d failure%s in tests %5d failure%s in setup code %5d failure%s in cleanup code ''' % repl) self.outfile.close() if self.total_failures or self.setup_failures or self.cleanup_failures: self.app.statuscode = 1 def write(self, build_docnames, updated_docnames, method='update'): if build_docnames is None: build_docnames = sorted(self.env.all_docs) self.info(bold('running tests...')) for docname in build_docnames: # no need to resolve the doctree doctree = self.env.get_doctree(docname) self.test_doc(docname, doctree) def test_doc(self, docname, doctree): groups = {} add_to_all_groups = [] self.setup_runner = SphinxDocTestRunner(verbose=False, optionflags=self.opt) self.test_runner = SphinxDocTestRunner(verbose=False, optionflags=self.opt) self.cleanup_runner = SphinxDocTestRunner(verbose=False, optionflags=self.opt) self.test_runner._fakeout = self.setup_runner._fakeout self.cleanup_runner._fakeout = self.setup_runner._fakeout if self.config.doctest_test_doctest_blocks: def condition(node): return (isinstance(node, (nodes.literal_block, nodes.comment)) and 'testnodetype' in node) or \ isinstance(node, nodes.doctest_block) else: def condition(node): return isinstance(node, (nodes.literal_block, nodes.comment)) \ and 'testnodetype' in node for node in doctree.traverse(condition): source = 'test' in node and node['test'] or node.astext() if not source: self.warn('no code/output in %s block at %s:%s' % (node.get('testnodetype', 'doctest'), self.env.doc2path(docname), node.line)) code = TestCode(source, type=node.get('testnodetype', 'doctest'), lineno=node.line, options=node.get('options')) node_groups = node.get('groups', ['default']) if '*' in node_groups: add_to_all_groups.append(code) continue for groupname in node_groups: if groupname not in groups: groups[groupname] = TestGroup(groupname) groups[groupname].add_code(code) for code in add_to_all_groups: for group in itervalues(groups): group.add_code(code) if self.config.doctest_global_setup: code = TestCode(self.config.doctest_global_setup, 'testsetup', lineno=0) for group in itervalues(groups): group.add_code(code, prepend=True) if self.config.doctest_global_cleanup: code = TestCode(self.config.doctest_global_cleanup, 'testcleanup', lineno=0) for group in itervalues(groups): group.add_code(code) if not groups: return self._out('\nDocument: 
%s\n----------%s\n' % (docname, '-'*len(docname))) for group in itervalues(groups): self.test_group(group, self.env.doc2path(docname, base=None)) # Separately count results from setup code res_f, res_t = self.setup_runner.summarize(self._out, verbose=False) self.setup_failures += res_f self.setup_tries += res_t if self.test_runner.tries: res_f, res_t = self.test_runner.summarize(self._out, verbose=True) self.total_failures += res_f self.total_tries += res_t if self.cleanup_runner.tries: res_f, res_t = self.cleanup_runner.summarize(self._out, verbose=True) self.cleanup_failures += res_f self.cleanup_tries += res_t def compile(self, code, name, type, flags, dont_inherit): return compile(code, name, self.type, flags, dont_inherit) def test_group(self, group, filename): if PY2: filename_str = filename.encode(fs_encoding) else: filename_str = filename ns = {} def run_setup_cleanup(runner, testcodes, what): examples = [] for testcode in testcodes: examples.append(doctest.Example( doctest_encode(testcode.code, self.env.config.source_encoding), '', lineno=testcode.lineno)) if not examples: return True # simulate a doctest with the code sim_doctest = doctest.DocTest(examples, {}, '%s (%s code)' % (group.name, what), filename_str, 0, None) sim_doctest.globs = ns old_f = runner.failures self.type = 'exec' # the snippet may contain multiple statements runner.run(sim_doctest, out=self._warn_out, clear_globs=False) if runner.failures > old_f: return False return True # run the setup code if not run_setup_cleanup(self.setup_runner, group.setup, 'setup'): # if setup failed, don't run the group return # run the tests for code in group.tests: if len(code) == 1: # ordinary doctests (code/output interleaved) try: test = parser.get_doctest( doctest_encode(code[0].code, self.env.config.source_encoding), {}, group.name, filename_str, code[0].lineno) except Exception: self.warn('ignoring invalid doctest code: %r' % code[0].code, '%s:%s' % (filename, code[0].lineno)) continue if not test.examples: continue for example in test.examples: # apply directive's comparison options new_opt = code[0].options.copy() new_opt.update(example.options) example.options = new_opt self.type = 'single' # as for ordinary doctests else: # testcode and output separate output = code[1] and code[1].code or '' options = code[1] and code[1].options or {} # disable processing as it is not needed options[doctest.DONT_ACCEPT_BLANKLINE] = True # find out if we're testing an exception m = parser._EXCEPTION_RE.match(output) if m: exc_msg = m.group('msg') else: exc_msg = None example = doctest.Example( doctest_encode(code[0].code, self.env.config.source_encoding), output, exc_msg=exc_msg, lineno=code[0].lineno, options=options) test = doctest.DocTest([example], {}, group.name, filename_str, code[0].lineno, None) self.type = 'exec' # multiple statements again # DocTest.__init__ copies the globs namespace, which we don't want test.globs = ns # also don't clear the globs namespace after running the doctest self.test_runner.run(test, out=self._warn_out, clear_globs=False) # run the cleanup run_setup_cleanup(self.cleanup_runner, group.cleanup, 'cleanup') def setup(app): app.add_directive('testsetup', TestsetupDirective) app.add_directive('testcleanup', TestcleanupDirective) app.add_directive('doctest', DoctestDirective) app.add_directive('testcode', TestcodeDirective) app.add_directive('testoutput', TestoutputDirective) app.add_builder(DocTestBuilder) # this config value adds to sys.path app.add_config_value('doctest_path', [], False) 
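# Hedged conf.py sketch (added comment, illustration only; the doctest_path
# entry and the imported package name are placeholders):
#
#   extensions = ['sphinx.ext.doctest']
#   doctest_path = ['../src']                    # prepended to sys.path
#   doctest_global_setup = 'import example_pkg'  # runs before every group
#   doctest_global_cleanup = ''                  # runs after every group
#   doctest_test_doctest_blocks = 'default'      # also test bare >>> blocks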
app.add_config_value('doctest_test_doctest_blocks', 'default', False) app.add_config_value('doctest_global_setup', '', False) app.add_config_value('doctest_global_cleanup', '', False) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/linkcode.py0000664000175000017500000000427112660074372020410 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.linkcode ~~~~~~~~~~~~~~~~~~~ Add external links to module code in Python object descriptions. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from docutils import nodes import sphinx from sphinx import addnodes from sphinx.locale import _ from sphinx.errors import SphinxError class LinkcodeError(SphinxError): category = "linkcode error" def doctree_read(app, doctree): env = app.builder.env resolve_target = getattr(env.config, 'linkcode_resolve', None) if not callable(env.config.linkcode_resolve): raise LinkcodeError( "Function `linkcode_resolve` is not given in conf.py") domain_keys = dict( py=['module', 'fullname'], c=['names'], cpp=['names'], js=['object', 'fullname'], ) for objnode in doctree.traverse(addnodes.desc): domain = objnode.get('domain') uris = set() for signode in objnode: if not isinstance(signode, addnodes.desc_signature): continue # Convert signode to a specified format info = {} for key in domain_keys.get(domain, []): value = signode.get(key) if not value: value = '' info[key] = value if not info: continue # Call user code to resolve the link uri = resolve_target(domain, info) if not uri: # no source continue if uri in uris or not uri: # only one link per name, please continue uris.add(uri) onlynode = addnodes.only(expr='html') onlynode += nodes.reference('', '', internal=False, refuri=uri) onlynode[0] += nodes.inline('', _('[source]'), classes=['viewcode-link']) signode += onlynode def setup(app): app.connect('doctree-read', doctree_read) app.add_config_value('linkcode_resolve', None, '') return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/jsmath.py0000664000175000017500000000405712660074372020110 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.jsmath ~~~~~~~~~~~~~~~~~ Set up everything for use of JSMath to display math in HTML via JavaScript. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from docutils import nodes import sphinx from sphinx.application import ExtensionError from sphinx.ext.mathbase import setup_math as mathbase_setup def html_visit_math(self, node): self.body.append(self.starttag(node, 'span', '', CLASS='math')) self.body.append(self.encode(node['latex']) + '') raise nodes.SkipNode def html_visit_displaymath(self, node): if node['nowrap']: self.body.append(self.starttag(node, 'div', CLASS='math')) self.body.append(self.encode(node['latex'])) self.body.append('') raise nodes.SkipNode for i, part in enumerate(node['latex'].split('\n\n')): part = self.encode(part) if i == 0: # necessary to e.g. set the id property correctly if node['number']: self.body.append('(%s)' % node['number']) self.body.append(self.starttag(node, 'div', CLASS='math')) else: # but only once! self.body.append('
<div class="math">') if '&' in part or '\\\\' in part: self.body.append('\\begin{split}' + part + '\\end{split}') else: self.body.append(part) self.body.append('</div>
\n') raise nodes.SkipNode def builder_inited(app): if not app.config.jsmath_path: raise ExtensionError('jsmath_path config value must be set for the ' 'jsmath extension to work') app.add_javascript(app.config.jsmath_path) def setup(app): mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None)) app.add_config_value('jsmath_path', '', False) app.connect('builder-inited', builder_inited) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/pngmath.py0000664000175000017500000002113512665045070020252 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.pngmath ~~~~~~~~~~~~~~~~~~ Render math in HTML via dvipng. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re import codecs import shutil import tempfile import posixpath from os import path from subprocess import Popen, PIPE from hashlib import sha1 from six import text_type from docutils import nodes import sphinx from sphinx.errors import SphinxError from sphinx.util.png import read_png_depth, write_png_depth from sphinx.util.osutil import ensuredir, ENOENT, cd from sphinx.util.pycompat import sys_encoding from sphinx.ext.mathbase import setup_math as mathbase_setup, wrap_displaymath class MathExtError(SphinxError): category = 'Math extension error' def __init__(self, msg, stderr=None, stdout=None): if stderr: msg += '\n[stderr]\n' + stderr.decode(sys_encoding, 'replace') if stdout: msg += '\n[stdout]\n' + stdout.decode(sys_encoding, 'replace') SphinxError.__init__(self, msg) DOC_HEAD = r''' \documentclass[12pt]{article} \usepackage[utf8x]{inputenc} \usepackage{amsmath} \usepackage{amsthm} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{bm} \pagestyle{empty} ''' DOC_BODY = r''' \begin{document} %s \end{document} ''' DOC_BODY_PREVIEW = r''' \usepackage[active]{preview} \begin{document} \begin{preview} %s \end{preview} \end{document} ''' depth_re = re.compile(br'\[\d+ depth=(-?\d+)\]') def render_math(self, math): """Render the LaTeX math expression *math* using latex and dvipng. Return the filename relative to the built document and the "depth", that is, the distance of image bottom and baseline in pixels, if the option to use preview_latex is switched on. Error handling may seem strange, but follows a pattern: if LaTeX or dvipng aren't available, only a warning is generated (since that enables people on machines without these programs to at least build the rest of the docs successfully). If the programs are there, however, they may not fail since that indicates a problem in the math source. 
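    A hedged sketch of the intended call pattern, mirroring the HTML visitors
    defined later in this module::

        fname, depth = render_math(self, '$' + node['latex'] + '$')
        if fname is None:
            # latex or dvipng unavailable -- fall back to text-only output
            ...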
""" use_preview = self.builder.config.pngmath_use_preview latex = DOC_HEAD + self.builder.config.pngmath_latex_preamble latex += (use_preview and DOC_BODY_PREVIEW or DOC_BODY) % math shasum = "%s.png" % sha1(latex.encode('utf-8')).hexdigest() relfn = posixpath.join(self.builder.imgpath, 'math', shasum) outfn = path.join(self.builder.outdir, self.builder.imagedir, 'math', shasum) if path.isfile(outfn): depth = read_png_depth(outfn) return relfn, depth # if latex or dvipng has failed once, don't bother to try again if hasattr(self.builder, '_mathpng_warned_latex') or \ hasattr(self.builder, '_mathpng_warned_dvipng'): return None, None # use only one tempdir per build -- the use of a directory is cleaner # than using temporary files, since we can clean up everything at once # just removing the whole directory (see cleanup_tempdir) if not hasattr(self.builder, '_mathpng_tempdir'): tempdir = self.builder._mathpng_tempdir = tempfile.mkdtemp() else: tempdir = self.builder._mathpng_tempdir tf = codecs.open(path.join(tempdir, 'math.tex'), 'w', 'utf-8') tf.write(latex) tf.close() # build latex command; old versions of latex don't have the # --output-directory option, so we have to manually chdir to the # temp dir to run it. ltx_args = [self.builder.config.pngmath_latex, '--interaction=nonstopmode'] # add custom args from the config file ltx_args.extend(self.builder.config.pngmath_latex_args) ltx_args.append('math.tex') with cd(tempdir): try: p = Popen(ltx_args, stdout=PIPE, stderr=PIPE) except OSError as err: if err.errno != ENOENT: # No such file or directory raise self.builder.warn('LaTeX command %r cannot be run (needed for math ' 'display), check the pngmath_latex setting' % self.builder.config.pngmath_latex) self.builder._mathpng_warned_latex = True return None, None stdout, stderr = p.communicate() if p.returncode != 0: raise MathExtError('latex exited with error', stderr, stdout) ensuredir(path.dirname(outfn)) # use some standard dvipng arguments dvipng_args = [self.builder.config.pngmath_dvipng] dvipng_args += ['-o', outfn, '-T', 'tight', '-z9'] # add custom ones from config value dvipng_args.extend(self.builder.config.pngmath_dvipng_args) if use_preview: dvipng_args.append('--depth') # last, the input file name dvipng_args.append(path.join(tempdir, 'math.dvi')) try: p = Popen(dvipng_args, stdout=PIPE, stderr=PIPE) except OSError as err: if err.errno != ENOENT: # No such file or directory raise self.builder.warn('dvipng command %r cannot be run (needed for math ' 'display), check the pngmath_dvipng setting' % self.builder.config.pngmath_dvipng) self.builder._mathpng_warned_dvipng = True return None, None stdout, stderr = p.communicate() if p.returncode != 0: raise MathExtError('dvipng exited with error', stderr, stdout) depth = None if use_preview: for line in stdout.splitlines(): m = depth_re.match(line) if m: depth = int(m.group(1)) write_png_depth(outfn, depth) break return relfn, depth def cleanup_tempdir(app, exc): if exc: return if not hasattr(app.builder, '_mathpng_tempdir'): return try: shutil.rmtree(app.builder._mathpng_tempdir) except Exception: pass def get_tooltip(self, node): if self.builder.config.pngmath_add_tooltips: return ' alt="%s"' % self.encode(node['latex']).strip() return '' def html_visit_math(self, node): try: fname, depth = render_math(self, '$'+node['latex']+'$') except MathExtError as exc: msg = text_type(exc) sm = nodes.system_message(msg, type='WARNING', level=2, backrefs=[], source=node['latex']) sm.walkabout(self) self.builder.warn('display latex %r: ' % 
node['latex'] + msg) raise nodes.SkipNode if fname is None: # something failed -- use text-only as a bad substitute self.body.append('%s' % self.encode(node['latex']).strip()) else: c = ('') raise nodes.SkipNode def html_visit_displaymath(self, node): if node['nowrap']: latex = node['latex'] else: latex = wrap_displaymath(node['latex'], None) try: fname, depth = render_math(self, latex) except MathExtError as exc: sm = nodes.system_message(str(exc), type='WARNING', level=2, backrefs=[], source=node['latex']) sm.walkabout(self) self.builder.warn('inline latex %r: ' % node['latex'] + str(exc)) raise nodes.SkipNode self.body.append(self.starttag(node, 'div', CLASS='math')) self.body.append('

<p>') if node['number']: self.body.append('<span class="eqno">(%s)</span>' % node['number']) if fname is None: # something failed -- use text-only as a bad substitute self.body.append('<span class="math">%s</span></p>\n' % self.encode(node['latex']).strip()) else: self.body.append(('<img src="%s"' % fname) + get_tooltip(self, node) + '/></p>
\n') raise nodes.SkipNode def setup(app): mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None)) app.add_config_value('pngmath_dvipng', 'dvipng', 'html') app.add_config_value('pngmath_latex', 'latex', 'html') app.add_config_value('pngmath_use_preview', False, 'html') app.add_config_value('pngmath_dvipng_args', ['-gamma', '1.5', '-D', '110', '-bg', 'Transparent'], 'html') app.add_config_value('pngmath_latex_args', [], 'html') app.add_config_value('pngmath_latex_preamble', '', 'html') app.add_config_value('pngmath_add_tooltips', True, 'html') app.connect('build-finished', cleanup_tempdir) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/mathjax.py0000664000175000017500000000563312660074372020257 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.mathjax ~~~~~~~~~~~~~~~~~~ Allow `MathJax `_ to be used to display math in Sphinx's HTML writer -- requires the MathJax JavaScript library on your webserver/computer. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from docutils import nodes import sphinx from sphinx.application import ExtensionError from sphinx.ext.mathbase import setup_math as mathbase_setup def html_visit_math(self, node): self.body.append(self.starttag(node, 'span', '', CLASS='math')) self.body.append(self.builder.config.mathjax_inline[0] + self.encode(node['latex']) + self.builder.config.mathjax_inline[1] + '') raise nodes.SkipNode def html_visit_displaymath(self, node): self.body.append(self.starttag(node, 'div', CLASS='math')) if node['nowrap']: self.body.append(self.builder.config.mathjax_display[0] + self.encode(node['latex']) + self.builder.config.mathjax_display[1]) self.body.append('') raise nodes.SkipNode parts = [prt for prt in node['latex'].split('\n\n') if prt.strip()] for i, part in enumerate(parts): part = self.encode(part) if i == 0: # necessary to e.g. set the id property correctly if node['number']: self.body.append('(%s)' % node['number']) if '&' in part or '\\\\' in part: self.body.append(self.builder.config.mathjax_display[0] + '\\begin{split}' + part + '\\end{split}' + self.builder.config.mathjax_display[1]) else: self.body.append(self.builder.config.mathjax_display[0] + part + self.builder.config.mathjax_display[1]) self.body.append('\n') raise nodes.SkipNode def builder_inited(app): if not app.config.mathjax_path: raise ExtensionError('mathjax_path config value must be set for the ' 'mathjax extension to work') app.add_javascript(app.config.mathjax_path) def setup(app): mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None)) # more information for mathjax secure url is here: # http://docs.mathjax.org/en/latest/start.html#secure-access-to-the-cdn app.add_config_value('mathjax_path', 'https://cdn.mathjax.org/mathjax/latest/MathJax.js?' 'config=TeX-AMS-MML_HTMLorMML', False) app.add_config_value('mathjax_inline', [r'\(', r'\)'], 'html') app.add_config_value('mathjax_display', [r'\[', r'\]'], 'html') app.connect('builder-inited', builder_inited) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/napoleon/0000775000175000017500000000000012665046155020060 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/ext/napoleon/__init__.py0000664000175000017500000003311412665045070022166 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.napoleon ~~~~~~~~~~~~~~~~~~~ Support for NumPy and Google style docstrings. 
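    A rough sketch (illustrative only; ``timeout`` is a placeholder name) of
    the rewrite performed while autodoc runs -- a Google style section such
    as::

        Args:
            timeout (int): Seconds to wait.

    is emitted as an ordinary reST field list along the lines of::

        :param timeout: Seconds to wait.
        :type timeout: int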
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import sys from six import PY2, iteritems import sphinx from sphinx.ext.napoleon.docstring import GoogleDocstring, NumpyDocstring class Config(object): """Sphinx napoleon extension settings in `conf.py`. Listed below are all the settings used by napoleon and their default values. These settings can be changed in the Sphinx `conf.py` file. Make sure that both "sphinx.ext.autodoc" and "sphinx.ext.napoleon" are enabled in `conf.py`:: # conf.py # Add any Sphinx extension module names here, as strings extensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon'] # Napoleon settings napoleon_google_docstring = True napoleon_numpy_docstring = True napoleon_include_private_with_doc = False napoleon_include_special_with_doc = True napoleon_use_admonition_for_examples = False napoleon_use_admonition_for_notes = False napoleon_use_admonition_for_references = False napoleon_use_ivar = False napoleon_use_param = True napoleon_use_rtype = True .. _Google style: http://google-styleguide.googlecode.com/svn/trunk/pyguide.html .. _NumPy style: https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt Attributes ---------- napoleon_google_docstring : bool, defaults to True True to parse `Google style`_ docstrings. False to disable support for Google style docstrings. napoleon_numpy_docstring : bool, defaults to True True to parse `NumPy style`_ docstrings. False to disable support for NumPy style docstrings. napoleon_include_private_with_doc : bool, defaults to False True to include private members (like ``_membername``) with docstrings in the documentation. False to fall back to Sphinx's default behavior. **If True**:: def _included(self): \"\"\" This will be included in the docs because it has a docstring \"\"\" pass def _skipped(self): # This will NOT be included in the docs pass napoleon_include_special_with_doc : bool, defaults to True True to include special members (like ``__membername__``) with docstrings in the documentation. False to fall back to Sphinx's default behavior. **If True**:: def __str__(self): \"\"\" This will be included in the docs because it has a docstring \"\"\" return unicode(self).encode('utf-8') def __unicode__(self): # This will NOT be included in the docs return unicode(self.__class__.__name__) napoleon_use_admonition_for_examples : bool, defaults to False True to use the ``.. admonition::`` directive for the **Example** and **Examples** sections. False to use the ``.. rubric::`` directive instead. One may look better than the other depending on what HTML theme is used. This `NumPy style`_ snippet will be converted as follows:: Example ------- This is just a quick example **If True**:: .. admonition:: Example This is just a quick example **If False**:: .. rubric:: Example This is just a quick example napoleon_use_admonition_for_notes : bool, defaults to False True to use the ``.. admonition::`` directive for **Notes** sections. False to use the ``.. rubric::`` directive instead. Note ---- The singular **Note** section will always be converted to a ``.. note::`` directive. See Also -------- :attr:`napoleon_use_admonition_for_examples` napoleon_use_admonition_for_references : bool, defaults to False True to use the ``.. admonition::`` directive for **References** sections. False to use the ``.. rubric::`` directive instead. 
See Also -------- :attr:`napoleon_use_admonition_for_examples` napoleon_use_ivar : bool, defaults to False True to use the ``:ivar:`` role for instance variables. False to use the ``.. attribute::`` directive instead. This `NumPy style`_ snippet will be converted as follows:: Attributes ---------- attr1 : int Description of `attr1` **If True**:: :ivar attr1: Description of `attr1` :vartype attr1: int **If False**:: .. attribute:: attr1 *int* Description of `attr1` napoleon_use_param : bool, defaults to True True to use a ``:param:`` role for each function parameter. False to use a single ``:parameters:`` role for all the parameters. This `NumPy style`_ snippet will be converted as follows:: Parameters ---------- arg1 : str Description of `arg1` arg2 : int, optional Description of `arg2`, defaults to 0 **If True**:: :param arg1: Description of `arg1` :type arg1: str :param arg2: Description of `arg2`, defaults to 0 :type arg2: int, optional **If False**:: :parameters: * **arg1** (*str*) -- Description of `arg1` * **arg2** (*int, optional*) -- Description of `arg2`, defaults to 0 napoleon_use_rtype : bool, defaults to True True to use the ``:rtype:`` role for the return type. False to output the return type inline with the description. This `NumPy style`_ snippet will be converted as follows:: Returns ------- bool True if successful, False otherwise **If True**:: :returns: True if successful, False otherwise :rtype: bool **If False**:: :returns: *bool* -- True if successful, False otherwise """ _config_values = { 'napoleon_google_docstring': (True, 'env'), 'napoleon_numpy_docstring': (True, 'env'), 'napoleon_include_private_with_doc': (False, 'env'), 'napoleon_include_special_with_doc': (True, 'env'), 'napoleon_use_admonition_for_examples': (False, 'env'), 'napoleon_use_admonition_for_notes': (False, 'env'), 'napoleon_use_admonition_for_references': (False, 'env'), 'napoleon_use_ivar': (False, 'env'), 'napoleon_use_param': (True, 'env'), 'napoleon_use_rtype': (True, 'env'), } def __init__(self, **settings): for name, (default, rebuild) in iteritems(self._config_values): setattr(self, name, default) for name, value in iteritems(settings): setattr(self, name, value) def setup(app): """Sphinx extension setup function. When the extension is loaded, Sphinx imports this module and executes the ``setup()`` function, which in turn notifies Sphinx of everything the extension offers. Parameters ---------- app : sphinx.application.Sphinx Application object representing the Sphinx process See Also -------- The Sphinx documentation on `Extensions`_, the `Extension Tutorial`_, and the `Extension API`_. .. _Extensions: http://sphinx-doc.org/extensions.html .. _Extension Tutorial: http://sphinx-doc.org/ext/tutorial.html .. _Extension API: http://sphinx-doc.org/ext/appapi.html """ from sphinx.application import Sphinx if not isinstance(app, Sphinx): return # probably called by tests app.connect('autodoc-process-docstring', _process_docstring) app.connect('autodoc-skip-member', _skip_member) for name, (default, rebuild) in iteritems(Config._config_values): app.add_config_value(name, default, rebuild) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} def _process_docstring(app, what, name, obj, options, lines): """Process the docstring for a given python object. Called when autodoc has read and processed a docstring. `lines` is a list of docstring lines that `_process_docstring` modifies in place to change what Sphinx outputs. 
The following settings in conf.py control what styles of docstrings will be parsed: * ``napoleon_google_docstring`` -- parse Google style docstrings * ``napoleon_numpy_docstring`` -- parse NumPy style docstrings Parameters ---------- app : sphinx.application.Sphinx Application object representing the Sphinx process. what : str A string specifying the type of the object to which the docstring belongs. Valid values: "module", "class", "exception", "function", "method", "attribute". name : str The fully qualified name of the object. obj : module, class, exception, function, method, or attribute The object to which the docstring belongs. options : sphinx.ext.autodoc.Options The options given to the directive: an object with attributes inherited_members, undoc_members, show_inheritance and noindex that are True if the flag option of same name was given to the auto directive. lines : list of str The lines of the docstring, see above. .. note:: `lines` is modified *in place* """ result_lines = lines if app.config.napoleon_numpy_docstring: docstring = NumpyDocstring(result_lines, app.config, app, what, name, obj, options) result_lines = docstring.lines() if app.config.napoleon_google_docstring: docstring = GoogleDocstring(result_lines, app.config, app, what, name, obj, options) result_lines = docstring.lines() lines[:] = result_lines[:] def _skip_member(app, what, name, obj, skip, options): """Determine if private and special class members are included in docs. The following settings in conf.py determine if private and special class members are included in the generated documentation: * ``napoleon_include_private_with_doc`` -- include private members if they have docstrings * ``napoleon_include_special_with_doc`` -- include special members if they have docstrings Parameters ---------- app : sphinx.application.Sphinx Application object representing the Sphinx process what : str A string specifying the type of the object to which the member belongs. Valid values: "module", "class", "exception", "function", "method", "attribute". name : str The name of the member. obj : module, class, exception, function, method, or attribute. For example, if the member is the __init__ method of class A, then `obj` will be `A.__init__`. skip : bool A boolean indicating if autodoc will skip this member if `_skip_member` does not override the decision options : sphinx.ext.autodoc.Options The options given to the directive: an object with attributes inherited_members, undoc_members, show_inheritance and noindex that are True if the flag option of same name was given to the auto directive. Returns ------- bool True if the member should be skipped during creation of the docs, False if it should be included in the docs. """ has_doc = getattr(obj, '__doc__', False) is_member = (what == 'class' or what == 'exception' or what == 'module') if name != '__weakref__' and name != '__init__' and has_doc and is_member: cls_is_owner = False if what == 'class' or what == 'exception': if PY2: cls = getattr(obj, 'im_class', getattr(obj, '__objclass__', None)) cls_is_owner = (cls and hasattr(cls, name) and name in cls.__dict__) elif sys.version_info >= (3, 3): qualname = getattr(obj, '__qualname__', '') cls_path, _, _ = qualname.rpartition('.') if cls_path: try: if '.' 
in cls_path: import importlib import functools mod = importlib.import_module(obj.__module__) mod_path = cls_path.split('.') cls = functools.reduce(getattr, mod_path, mod) else: cls = obj.__globals__[cls_path] except: cls_is_owner = False else: cls_is_owner = (cls and hasattr(cls, name) and name in cls.__dict__) else: cls_is_owner = False else: cls_is_owner = True if what == 'module' or cls_is_owner: is_special = name.startswith('__') and name.endswith('__') is_private = not is_special and name.startswith('_') inc_special = app.config.napoleon_include_special_with_doc inc_private = app.config.napoleon_include_private_with_doc if (is_special and inc_special) or (is_private and inc_private): return False return skip Sphinx-1.3.6/sphinx/ext/napoleon/iterators.py0000664000175000017500000001651712660074372022455 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.napoleon.iterators ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A collection of helpful iterators. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import collections class peek_iter(object): """An iterator object that supports peeking ahead. Parameters ---------- o : iterable or callable `o` is interpreted very differently depending on the presence of `sentinel`. If `sentinel` is not given, then `o` must be a collection object which supports either the iteration protocol or the sequence protocol. If `sentinel` is given, then `o` must be a callable object. sentinel : any value, optional If given, the iterator will call `o` with no arguments for each call to its `next` method; if the value returned is equal to `sentinel`, :exc:`StopIteration` will be raised, otherwise the value will be returned. See Also -------- `peek_iter` can operate as a drop in replacement for the built-in `iter `_ function. Attributes ---------- sentinel The value used to indicate the iterator is exhausted. If `sentinel` was not given when the `peek_iter` was instantiated, then it will be set to a new object instance: ``object()``. """ def __init__(self, *args): """__init__(o, sentinel=None)""" self._iterable = iter(*args) self._cache = collections.deque() if len(args) == 2: self.sentinel = args[1] else: self.sentinel = object() def __iter__(self): return self def __next__(self, n=None): # note: prevent 2to3 to transform self.next() in next(self) which # causes an infinite loop ! return getattr(self, 'next')(n) def _fillcache(self, n): """Cache `n` items. If `n` is 0 or None, then 1 item is cached.""" if not n: n = 1 try: while len(self._cache) < n: self._cache.append(next(self._iterable)) except StopIteration: while len(self._cache) < n: self._cache.append(self.sentinel) def has_next(self): """Determine if iterator is exhausted. Returns ------- bool True if iterator has more items, False otherwise. Note ---- Will never raise :exc:`StopIteration`. """ return self.peek() != self.sentinel def next(self, n=None): """Get the next item or `n` items of the iterator. Parameters ---------- n : int or None The number of items to retrieve. Defaults to None. Returns ------- item or list of items The next item or `n` items of the iterator. If `n` is None, the item itself is returned. If `n` is an int, the items will be returned in a list. If `n` is 0, an empty list is returned. Raises ------ StopIteration Raised if the iterator is exhausted, even if `n` is 0. 
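        Example
        -------
        A hedged sketch of the expected behaviour (assuming `peek_iter` has
        been imported from this module):

        >>> it = peek_iter(['a', 'b', 'c'])
        >>> it.next()
        'a'
        >>> it.next(2)
        ['b', 'c']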
""" self._fillcache(n) if not n: if self._cache[0] == self.sentinel: raise StopIteration if n is None: result = self._cache.popleft() else: result = [] else: if self._cache[n - 1] == self.sentinel: raise StopIteration result = [self._cache.popleft() for i in range(n)] return result def peek(self, n=None): """Preview the next item or `n` items of the iterator. The iterator is not advanced when peek is called. Returns ------- item or list of items The next item or `n` items of the iterator. If `n` is None, the item itself is returned. If `n` is an int, the items will be returned in a list. If `n` is 0, an empty list is returned. If the iterator is exhausted, `peek_iter.sentinel` is returned, or placed as the last item in the returned list. Note ---- Will never raise :exc:`StopIteration`. """ self._fillcache(n) if n is None: result = self._cache[0] else: result = [self._cache[i] for i in range(n)] return result class modify_iter(peek_iter): """An iterator object that supports modifying items as they are returned. Parameters ---------- o : iterable or callable `o` is interpreted very differently depending on the presence of `sentinel`. If `sentinel` is not given, then `o` must be a collection object which supports either the iteration protocol or the sequence protocol. If `sentinel` is given, then `o` must be a callable object. sentinel : any value, optional If given, the iterator will call `o` with no arguments for each call to its `next` method; if the value returned is equal to `sentinel`, :exc:`StopIteration` will be raised, otherwise the value will be returned. modifier : callable, optional The function that will be used to modify each item returned by the iterator. `modifier` should take a single argument and return a single value. Defaults to ``lambda x: x``. If `sentinel` is not given, `modifier` must be passed as a keyword argument. Attributes ---------- modifier : callable `modifier` is called with each item in `o` as it is iterated. The return value of `modifier` is returned in lieu of the item. Values returned by `peek` as well as `next` are affected by `modifier`. However, `modify_iter.sentinel` is never passed through `modifier`; it will always be returned from `peek` unmodified. Example ------- >>> a = [" A list ", ... " of strings ", ... " with ", ... " extra ", ... " whitespace. "] >>> modifier = lambda s: s.strip().replace('with', 'without') >>> for s in modify_iter(a, modifier=modifier): ... print('"%s"' % s) "A list" "of strings" "without" "extra" "whitespace." """ def __init__(self, *args, **kwargs): """__init__(o, sentinel=None, modifier=lambda x: x)""" if 'modifier' in kwargs: self.modifier = kwargs['modifier'] elif len(args) > 2: self.modifier = args[2] args = args[:2] else: self.modifier = lambda x: x if not callable(self.modifier): raise TypeError('modify_iter(o, modifier): ' 'modifier must be callable') super(modify_iter, self).__init__(*args) def _fillcache(self, n): """Cache `n` modified items. If `n` is 0 or None, 1 item is cached. Each item returned by the iterator is passed through the `modify_iter.modified` function before being cached. 
""" if not n: n = 1 try: while len(self._cache) < n: self._cache.append(self.modifier(next(self._iterable))) except StopIteration: while len(self._cache) < n: self._cache.append(self.sentinel) Sphinx-1.3.6/sphinx/ext/napoleon/docstring.py0000664000175000017500000007514112665045070022431 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.napoleon.docstring ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Classes for docstring parsing and formatting. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import collections import inspect import re from six import string_types from six.moves import range from sphinx.ext.napoleon.iterators import modify_iter from sphinx.util.pycompat import UnicodeMixin _directive_regex = re.compile(r'\.\. \S+::') _google_section_regex = re.compile(r'^(\s|\w)+:\s*$') _google_typed_arg_regex = re.compile(r'\s*(.+?)\s*\(\s*(.+?)\s*\)') _numpy_section_regex = re.compile(r'^[=\-`:\'"~^_*+#<>]{2,}\s*$') _xref_regex = re.compile(r'(:\w+:\S+:`.+?`|:\S+:`.+?`|`.+?`)') class GoogleDocstring(UnicodeMixin): """Parse Google style docstrings. Convert Google style docstrings to reStructuredText. Parameters ---------- docstring : str or list of str The docstring to parse, given either as a string or split into individual lines. config : sphinx.ext.napoleon.Config or sphinx.config.Config, optional The configuration settings to use. If not given, defaults to the config object on `app`; or if `app` is not given defaults to the a new `sphinx.ext.napoleon.Config` object. See Also -------- :class:`sphinx.ext.napoleon.Config` Other Parameters ---------------- app : sphinx.application.Sphinx, optional Application object representing the Sphinx process. what : str, optional A string specifying the type of the object to which the docstring belongs. Valid values: "module", "class", "exception", "function", "method", "attribute". name : str, optional The fully qualified name of the object. obj : module, class, exception, function, method, or attribute The object to which the docstring belongs. options : sphinx.ext.autodoc.Options, optional The options given to the directive: an object with attributes inherited_members, undoc_members, show_inheritance and noindex that are True if the flag option of same name was given to the auto directive. Example ------- >>> from sphinx.ext.napoleon import Config >>> config = Config(napoleon_use_param=True, napoleon_use_rtype=True) >>> docstring = '''One line summary. ... ... Extended description. ... ... Args: ... arg1(int): Description of `arg1` ... arg2(str): Description of `arg2` ... Returns: ... str: Description of return value. ... ''' >>> print(GoogleDocstring(docstring, config)) One line summary. Extended description. :param arg1: Description of `arg1` :type arg1: int :param arg2: Description of `arg2` :type arg2: str :returns: Description of return value. 
:rtype: str """ def __init__(self, docstring, config=None, app=None, what='', name='', obj=None, options=None): self._config = config self._app = app if not self._config: from sphinx.ext.napoleon import Config self._config = self._app and self._app.config or Config() if not what: if inspect.isclass(obj): what = 'class' elif inspect.ismodule(obj): what = 'module' elif isinstance(obj, collections.Callable): what = 'function' else: what = 'object' self._what = what self._name = name self._obj = obj self._opt = options if isinstance(docstring, string_types): docstring = docstring.splitlines() self._lines = docstring self._line_iter = modify_iter(docstring, modifier=lambda s: s.rstrip()) self._parsed_lines = [] self._is_in_section = False self._section_indent = 0 if not hasattr(self, '_directive_sections'): self._directive_sections = [] if not hasattr(self, '_sections'): self._sections = { 'args': self._parse_parameters_section, 'arguments': self._parse_parameters_section, 'attributes': self._parse_attributes_section, 'example': self._parse_examples_section, 'examples': self._parse_examples_section, 'keyword args': self._parse_keyword_arguments_section, 'keyword arguments': self._parse_keyword_arguments_section, 'methods': self._parse_methods_section, 'note': self._parse_note_section, 'notes': self._parse_notes_section, 'other parameters': self._parse_other_parameters_section, 'parameters': self._parse_parameters_section, 'return': self._parse_returns_section, 'returns': self._parse_returns_section, 'raises': self._parse_raises_section, 'references': self._parse_references_section, 'see also': self._parse_see_also_section, 'warning': self._parse_warning_section, 'warnings': self._parse_warning_section, 'warns': self._parse_warns_section, 'yield': self._parse_yields_section, 'yields': self._parse_yields_section, } self._parse() def __unicode__(self): """Return the parsed docstring in reStructuredText format. Returns ------- unicode Unicode version of the docstring. """ return u'\n'.join(self.lines()) def lines(self): """Return the parsed lines of the docstring in reStructuredText format. Returns ------- list of str The lines of the docstring in a list. 
""" return self._parsed_lines def _consume_indented_block(self, indent=1): lines = [] line = self._line_iter.peek() while(not self._is_section_break() and (not line or self._is_indented(line, indent))): lines.append(next(self._line_iter)) line = self._line_iter.peek() return lines def _consume_contiguous(self): lines = [] while (self._line_iter.has_next() and self._line_iter.peek() and not self._is_section_header()): lines.append(next(self._line_iter)) return lines def _consume_empty(self): lines = [] line = self._line_iter.peek() while self._line_iter.has_next() and not line: lines.append(next(self._line_iter)) line = self._line_iter.peek() return lines def _consume_field(self, parse_type=True, prefer_type=False): line = next(self._line_iter) before, colon, after = self._partition_field_on_colon(line) _name, _type, _desc = before, '', after if parse_type: match = _google_typed_arg_regex.match(before) if match: _name = match.group(1) _type = match.group(2) if _name[:2] == '**': _name = r'\*\*'+_name[2:] elif _name[:1] == '*': _name = r'\*'+_name[1:] if prefer_type and not _type: _type, _name = _name, _type indent = self._get_indent(line) + 1 _desc = [_desc] + self._dedent(self._consume_indented_block(indent)) _desc = self.__class__(_desc, self._config).lines() return _name, _type, _desc def _consume_fields(self, parse_type=True, prefer_type=False): self._consume_empty() fields = [] while not self._is_section_break(): _name, _type, _desc = self._consume_field(parse_type, prefer_type) if _name or _type or _desc: fields.append((_name, _type, _desc,)) return fields def _consume_returns_section(self): lines = self._dedent(self._consume_to_next_section()) if lines: before, colon, after = self._partition_field_on_colon(lines[0]) _name, _type, _desc = '', '', lines if colon: if after: _desc = [after] + lines[1:] else: _desc = lines[1:] match = _google_typed_arg_regex.match(before) if match: _name = match.group(1) _type = match.group(2) else: _type = before _desc = self.__class__(_desc, self._config).lines() return [(_name, _type, _desc,)] else: return [] def _consume_usage_section(self): lines = self._dedent(self._consume_to_next_section()) return lines def _consume_section_header(self): section = next(self._line_iter) stripped_section = section.strip(':') if stripped_section.lower() in self._sections: section = stripped_section return section def _consume_to_next_section(self): self._consume_empty() lines = [] while not self._is_section_break(): lines.append(next(self._line_iter)) return lines + self._consume_empty() def _dedent(self, lines, full=False): if full: return [line.lstrip() for line in lines] else: min_indent = self._get_min_indent(lines) return [line[min_indent:] for line in lines] def _format_admonition(self, admonition, lines): lines = self._strip_empty(lines) if len(lines) == 1: return ['.. %s:: %s' % (admonition, lines[0].strip()), ''] elif lines: lines = self._indent(self._dedent(lines), 3) return ['.. %s::' % admonition, ''] + lines + [''] else: return ['.. 
%s::' % admonition, ''] def _format_block(self, prefix, lines, padding=None): if lines: if padding is None: padding = ' ' * len(prefix) result_lines = [] for i, line in enumerate(lines): if i == 0: result_lines.append((prefix + line).rstrip()) elif line: result_lines.append(padding + line) else: result_lines.append('') return result_lines else: return [prefix] def _format_field(self, _name, _type, _desc): separator = any([s for s in _desc]) and ' --' or '' if _name: if _type: if '`' in _type: field = ['**%s** (%s)%s' % (_name, _type, separator)] else: field = ['**%s** (*%s*)%s' % (_name, _type, separator)] else: field = ['**%s**%s' % (_name, separator)] elif _type: if '`' in _type: field = ['%s%s' % (_type, separator)] else: field = ['*%s*%s' % (_type, separator)] else: field = [] return field + _desc def _format_fields(self, field_type, fields): field_type = ':%s:' % field_type.strip() padding = ' ' * len(field_type) multi = len(fields) > 1 lines = [] for _name, _type, _desc in fields: field = self._format_field(_name, _type, _desc) if multi: if lines: lines.extend(self._format_block(padding + ' * ', field)) else: lines.extend(self._format_block(field_type + ' * ', field)) else: lines.extend(self._format_block(field_type + ' ', field)) return lines def _get_current_indent(self, peek_ahead=0): line = self._line_iter.peek(peek_ahead + 1)[peek_ahead] while line != self._line_iter.sentinel: if line: return self._get_indent(line) peek_ahead += 1 line = self._line_iter.peek(peek_ahead + 1)[peek_ahead] return 0 def _get_indent(self, line): for i, s in enumerate(line): if not s.isspace(): return i return len(line) def _get_min_indent(self, lines): min_indent = None for line in lines: if line: indent = self._get_indent(line) if min_indent is None: min_indent = indent elif indent < min_indent: min_indent = indent return min_indent or 0 def _indent(self, lines, n=4): return [(' ' * n) + line for line in lines] def _is_indented(self, line, indent=1): for i, s in enumerate(line): if i >= indent: return True elif not s.isspace(): return False return False def _is_section_header(self): section = self._line_iter.peek().lower() match = _google_section_regex.match(section) if match and section.strip(':') in self._sections: header_indent = self._get_indent(section) section_indent = self._get_current_indent(peek_ahead=1) return section_indent > header_indent elif self._directive_sections: if _directive_regex.match(section): for directive_section in self._directive_sections: if section.startswith(directive_section): return True return False def _is_section_break(self): line = self._line_iter.peek() return (not self._line_iter.has_next() or self._is_section_header() or (self._is_in_section and line and not self._is_indented(line, self._section_indent))) def _parse(self): self._parsed_lines = self._consume_empty() while self._line_iter.has_next(): if self._is_section_header(): try: section = self._consume_section_header() self._is_in_section = True self._section_indent = self._get_current_indent() if _directive_regex.match(section): lines = [section] + self._consume_to_next_section() else: lines = self._sections[section.lower()](section) finally: self._is_in_section = False self._section_indent = 0 else: if not self._parsed_lines: lines = self._consume_contiguous() + self._consume_empty() else: lines = self._consume_to_next_section() self._parsed_lines.extend(lines) def _parse_attributes_section(self, section): lines = [] for _name, _type, _desc in self._consume_fields(): if self._config.napoleon_use_ivar: 
field = ':ivar %s: ' % _name lines.extend(self._format_block(field, _desc)) if _type: lines.append(':vartype %s: %s' % (_name, _type)) else: lines.append('.. attribute:: ' + _name) if _type: lines.append('') if '`' in _type: lines.append(' %s' % _type) else: lines.append(' *%s*' % _type) if _desc: lines.extend([''] + self._indent(_desc, 3)) lines.append('') if self._config.napoleon_use_ivar: lines.append('') return lines def _parse_examples_section(self, section): use_admonition = self._config.napoleon_use_admonition_for_examples return self._parse_generic_section(section, use_admonition) def _parse_usage_section(self, section): header = ['.. rubric:: Usage:', ''] block = ['.. code-block:: python', ''] lines = self._consume_usage_section() lines = self._indent(lines, 3) return header + block + lines + [''] def _parse_generic_section(self, section, use_admonition): lines = self._strip_empty(self._consume_to_next_section()) lines = self._dedent(lines) if use_admonition: header = '.. admonition:: %s' % section lines = self._indent(lines, 3) else: header = '.. rubric:: %s' % section if lines: return [header, ''] + lines + [''] else: return [header, ''] def _parse_keyword_arguments_section(self, section): return self._format_fields('Keyword Arguments', self._consume_fields()) def _parse_methods_section(self, section): lines = [] for _name, _, _desc in self._consume_fields(parse_type=False): lines.append('.. method:: %s' % _name) if _desc: lines.extend([''] + self._indent(_desc, 3)) lines.append('') return lines def _parse_note_section(self, section): lines = self._consume_to_next_section() return self._format_admonition('note', lines) def _parse_notes_section(self, section): use_admonition = self._config.napoleon_use_admonition_for_notes return self._parse_generic_section('Notes', use_admonition) def _parse_other_parameters_section(self, section): return self._format_fields('Other Parameters', self._consume_fields()) def _parse_parameters_section(self, section): fields = self._consume_fields() if self._config.napoleon_use_param: lines = [] for _name, _type, _desc in fields: field = ':param %s: ' % _name lines.extend(self._format_block(field, _desc)) if _type: lines.append(':type %s: %s' % (_name, _type)) return lines + [''] else: return self._format_fields('Parameters', fields) def _parse_raises_section(self, section): fields = self._consume_fields(parse_type=False, prefer_type=True) field_type = ':raises:' padding = ' ' * len(field_type) multi = len(fields) > 1 lines = [] for _, _type, _desc in fields: has_desc = any(_desc) sep = (_desc and _desc[0]) and ' -- ' or '' if _type: has_refs = '`' in _type or ':' in _type has_space = any(c in ' \t\n\v\f ' for c in _type) if not has_refs and not has_space: _type = ':exc:`%s`%s' % (_type, sep) elif has_desc and has_space: _type = '*%s*%s' % (_type, sep) else: _type = '%s%s' % (_type, sep) if has_desc: field = [_type] + _desc else: field = [_type] else: field = _desc if multi: if lines: lines.extend(self._format_block(padding + ' * ', field)) else: lines.extend(self._format_block(field_type + ' * ', field)) else: lines.extend(self._format_block(field_type + ' ', field)) if lines and lines[-1]: lines.append('') return lines def _parse_references_section(self, section): use_admonition = self._config.napoleon_use_admonition_for_references return self._parse_generic_section('References', use_admonition) def _parse_returns_section(self, section): fields = self._consume_returns_section() multi = len(fields) > 1 if multi: use_rtype = False else: use_rtype = 
self._config.napoleon_use_rtype lines = [] for _name, _type, _desc in fields: if use_rtype: field = self._format_field(_name, '', _desc) else: field = self._format_field(_name, _type, _desc) if multi: if lines: lines.extend(self._format_block(' * ', field)) else: lines.extend(self._format_block(':returns: * ', field)) else: lines.extend(self._format_block(':returns: ', field)) if _type and use_rtype: lines.append(':rtype: %s' % _type) lines.append('') return lines def _parse_see_also_section(self, section): lines = self._consume_to_next_section() return self._format_admonition('seealso', lines) def _parse_warning_section(self, section): lines = self._consume_to_next_section() return self._format_admonition('warning', lines) def _parse_warns_section(self, section): return self._format_fields('Warns', self._consume_fields()) def _parse_yields_section(self, section): fields = self._consume_returns_section() return self._format_fields('Yields', fields) def _partition_field_on_colon(self, line): before_colon = [] after_colon = [] colon = '' found_colon = False for i, source in enumerate(_xref_regex.split(line)): if found_colon: after_colon.append(source) else: if (i % 2) == 0 and ":" in source: found_colon = True before, colon, after = source.partition(":") before_colon.append(before) after_colon.append(after) else: before_colon.append(source) return ("".join(before_colon).strip(), colon, "".join(after_colon).strip()) def _strip_empty(self, lines): if lines: start = -1 for i, line in enumerate(lines): if line: start = i break if start == -1: lines = [] end = -1 for i in reversed(range(len(lines))): line = lines[i] if line: end = i break if start > 0 or end + 1 < len(lines): lines = lines[start:end + 1] return lines class NumpyDocstring(GoogleDocstring): """Parse NumPy style docstrings. Convert NumPy style docstrings to reStructuredText. Parameters ---------- docstring : str or list of str The docstring to parse, given either as a string or split into individual lines. config : sphinx.ext.napoleon.Config or sphinx.config.Config, optional The configuration settings to use. If not given, defaults to the config object on `app`; or if `app` is not given defaults to the a new `sphinx.ext.napoleon.Config` object. See Also -------- :class:`sphinx.ext.napoleon.Config` Other Parameters ---------------- app : sphinx.application.Sphinx, optional Application object representing the Sphinx process. what : str, optional A string specifying the type of the object to which the docstring belongs. Valid values: "module", "class", "exception", "function", "method", "attribute". name : str, optional The fully qualified name of the object. obj : module, class, exception, function, method, or attribute The object to which the docstring belongs. options : sphinx.ext.autodoc.Options, optional The options given to the directive: an object with attributes inherited_members, undoc_members, show_inheritance and noindex that are True if the flag option of same name was given to the auto directive. Example ------- >>> from sphinx.ext.napoleon import Config >>> config = Config(napoleon_use_param=True, napoleon_use_rtype=True) >>> docstring = '''One line summary. ... ... Extended description. ... ... Parameters ... ---------- ... arg1 : int ... Description of `arg1` ... arg2 : str ... Description of `arg2` ... Returns ... ------- ... str ... Description of return value. ... ''' >>> print(NumpyDocstring(docstring, config)) One line summary. Extended description. 
:param arg1: Description of `arg1` :type arg1: int :param arg2: Description of `arg2` :type arg2: str :returns: Description of return value. :rtype: str Methods ------- __str__() Return the parsed docstring in reStructuredText format. Returns ------- str UTF-8 encoded version of the docstring. __unicode__() Return the parsed docstring in reStructuredText format. Returns ------- unicode Unicode version of the docstring. lines() Return the parsed lines of the docstring in reStructuredText format. Returns ------- list of str The lines of the docstring in a list. """ def __init__(self, docstring, config=None, app=None, what='', name='', obj=None, options=None): self._directive_sections = ['.. index::'] super(NumpyDocstring, self).__init__(docstring, config, app, what, name, obj, options) def _consume_field(self, parse_type=True, prefer_type=False): line = next(self._line_iter) if parse_type: _name, _, _type = self._partition_field_on_colon(line) else: _name, _type = line, '' _name, _type = _name.strip(), _type.strip() if prefer_type and not _type: _type, _name = _name, _type indent = self._get_indent(line) _desc = self._dedent(self._consume_indented_block(indent + 1)) _desc = self.__class__(_desc, self._config).lines() return _name, _type, _desc def _consume_returns_section(self): return self._consume_fields(prefer_type=True) def _consume_section_header(self): section = next(self._line_iter) if not _directive_regex.match(section): # Consume the header underline next(self._line_iter) return section def _is_section_break(self): line1, line2 = self._line_iter.peek(2) return (not self._line_iter.has_next() or self._is_section_header() or ['', ''] == [line1, line2] or (self._is_in_section and line1 and not self._is_indented(line1, self._section_indent))) def _is_section_header(self): section, underline = self._line_iter.peek(2) section = section.lower() if section in self._sections and isinstance(underline, string_types): return bool(_numpy_section_regex.match(underline)) elif self._directive_sections: if _directive_regex.match(section): for directive_section in self._directive_sections: if section.startswith(directive_section): return True return False _name_rgx = re.compile(r"^\s*(:(?P\w+):`(?P[a-zA-Z0-9_.-]+)`|" r" (?P[a-zA-Z0-9_.-]+))\s*", re.X) def _parse_see_also_section(self, section): lines = self._consume_to_next_section() try: return self._parse_numpydoc_see_also_section(lines) except ValueError: return self._format_admonition('seealso', lines) def _parse_numpydoc_see_also_section(self, content): """ Derived from the NumpyDoc implementation of _parse_see_also. 
See Also -------- func_name : Descriptive text continued text another_func_name : Descriptive text func_name1, func_name2, :meth:`func_name`, func_name3 """ items = [] def parse_item_name(text): """Match ':role:`name`' or 'name'""" m = self._name_rgx.match(text) if m: g = m.groups() if g[1] is None: return g[3], None else: return g[2], g[1] raise ValueError("%s is not a item name" % text) def push_item(name, rest): if not name: return name, role = parse_item_name(name) items.append((name, list(rest), role)) del rest[:] current_func = None rest = [] for line in content: if not line.strip(): continue m = self._name_rgx.match(line) if m and line[m.end():].strip().startswith(':'): push_item(current_func, rest) current_func, line = line[:m.end()], line[m.end():] rest = [line.split(':', 1)[1].strip()] if not rest[0]: rest = [] elif not line.startswith(' '): push_item(current_func, rest) current_func = None if ',' in line: for func in line.split(','): if func.strip(): push_item(func, []) elif line.strip(): current_func = line elif current_func is not None: rest.append(line.strip()) push_item(current_func, rest) if not items: return [] roles = { 'method': 'meth', 'meth': 'meth', 'function': 'func', 'func': 'func', 'class': 'class', 'exception': 'exc', 'exc': 'exc', 'object': 'obj', 'obj': 'obj', 'module': 'mod', 'mod': 'mod', 'data': 'data', 'constant': 'const', 'const': 'const', 'attribute': 'attr', 'attr': 'attr' } if self._what is None: func_role = 'obj' else: func_role = roles.get(self._what, '') lines = [] last_had_desc = True for func, desc, role in items: if role: link = ':%s:`%s`' % (role, func) elif func_role: link = ':%s:`%s`' % (func_role, func) else: link = "`%s`_" % func if desc or last_had_desc: lines += [''] lines += [link] else: lines[-1] += ", %s" % link if desc: lines += self._indent([' '.join(desc)]) last_had_desc = True else: last_had_desc = False lines += [''] return self._format_admonition('seealso', lines) Sphinx-1.3.6/sphinx/ext/intersphinx.py0000664000175000017500000002575412665045070021202 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.intersphinx ~~~~~~~~~~~~~~~~~~~~~~ Insert links to objects documented in remote Sphinx documentation. This works as follows: * Each Sphinx HTML build creates a file named "objects.inv" that contains a mapping from object names to URIs relative to the HTML set's root. * Projects using the Intersphinx extension can specify links to such mapping files in the `intersphinx_mapping` config value. The mapping will then be used to resolve otherwise missing references to objects into links to the other documentation. * By default, the mapping file is assumed to be at the same location as the rest of the documentation; however, the location of the mapping file can also be specified individually, e.g. if the docs should be buildable without Internet access. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. 
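    A minimal, illustrative ``conf.py`` setup (the project names, URLs and the
    alternate inventory filename below are placeholders, not defaults)::

        extensions = ['sphinx.ext.intersphinx']

        intersphinx_mapping = {
            # name: (base URI of the other docs, inventory location or None)
            'python': ('https://docs.python.org/3', None),
            # point at a locally downloaded inventory so the docs build offline
            'other': ('https://example.org/otherdocs/', 'objects-other.inv'),
        }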
""" import time import zlib import codecs import posixpath from os import path import re from six import iteritems from six.moves.urllib import request from docutils import nodes from docutils.utils import relative_path import sphinx from sphinx.locale import _ from sphinx.builders.html import INVENTORY_FILENAME handlers = [request.ProxyHandler(), request.HTTPRedirectHandler(), request.HTTPHandler()] try: handlers.append(request.HTTPSHandler) except AttributeError: pass request.install_opener(request.build_opener(*handlers)) UTF8StreamReader = codecs.lookup('utf-8')[2] def read_inventory_v1(f, uri, join): f = UTF8StreamReader(f) invdata = {} line = next(f) projname = line.rstrip()[11:] line = next(f) version = line.rstrip()[11:] for line in f: name, type, location = line.rstrip().split(None, 2) location = join(uri, location) # version 1 did not add anchors to the location if type == 'mod': type = 'py:module' location += '#module-' + name else: type = 'py:' + type location += '#' + name invdata.setdefault(type, {})[name] = (projname, version, location, '-') return invdata def read_inventory_v2(f, uri, join, bufsize=16*1024): invdata = {} line = f.readline() projname = line.rstrip()[11:].decode('utf-8') line = f.readline() version = line.rstrip()[11:].decode('utf-8') line = f.readline().decode('utf-8') if 'zlib' not in line: raise ValueError def read_chunks(): decompressor = zlib.decompressobj() for chunk in iter(lambda: f.read(bufsize), b''): yield decompressor.decompress(chunk) yield decompressor.flush() def split_lines(iter): buf = b'' for chunk in iter: buf += chunk lineend = buf.find(b'\n') while lineend != -1: yield buf[:lineend].decode('utf-8') buf = buf[lineend+1:] lineend = buf.find(b'\n') assert not buf for line in split_lines(read_chunks()): # be careful to handle names with embedded spaces correctly m = re.match(r'(?x)(.+?)\s+(\S*:\S*)\s+(\S+)\s+(\S+)\s+(.*)', line.rstrip()) if not m: continue name, type, prio, location, dispname = m.groups() if type == 'py:module' and type in invdata and \ name in invdata[type]: # due to a bug in 1.1 and below, # two inventory entries are created # for Python modules, and the first # one is correct continue if location.endswith(u'$'): location = location[:-1] + name location = join(uri, location) invdata.setdefault(type, {})[name] = (projname, version, location, dispname) return invdata def fetch_inventory(app, uri, inv): """Fetch, parse and return an intersphinx inventory file.""" # both *uri* (base URI of the links to generate) and *inv* (actual # location of the inventory file) can be local or remote URIs localuri = uri.find('://') == -1 join = localuri and path.join or posixpath.join try: if inv.find('://') != -1: f = request.urlopen(inv) else: f = open(path.join(app.srcdir, inv), 'rb') except Exception as err: app.warn('intersphinx inventory %r not fetchable due to ' '%s: %s' % (inv, err.__class__, err)) return try: line = f.readline().rstrip().decode('utf-8') try: if line == '# Sphinx inventory version 1': invdata = read_inventory_v1(f, uri, join) elif line == '# Sphinx inventory version 2': invdata = read_inventory_v2(f, uri, join) else: raise ValueError f.close() except ValueError: f.close() raise ValueError('unknown or unsupported inventory version') except Exception as err: app.warn('intersphinx inventory %r not readable due to ' '%s: %s' % (inv, err.__class__.__name__, err)) else: return invdata def load_mappings(app): """Load all intersphinx mappings into the environment.""" now = int(time.time()) cache_time = now - 
app.config.intersphinx_cache_limit * 86400 env = app.builder.env if not hasattr(env, 'intersphinx_cache'): env.intersphinx_cache = {} env.intersphinx_inventory = {} env.intersphinx_named_inventory = {} cache = env.intersphinx_cache update = False for key, value in iteritems(app.config.intersphinx_mapping): if isinstance(value, tuple): # new format name, (uri, inv) = key, value if not name.isalnum(): app.warn('intersphinx identifier %r is not alphanumeric' % name) else: # old format, no name name, uri, inv = None, key, value # we can safely assume that the uri<->inv mapping is not changed # during partial rebuilds since a changed intersphinx_mapping # setting will cause a full environment reread if not isinstance(inv, tuple): invs = (inv, ) else: invs = inv for inv in invs: if not inv: inv = posixpath.join(uri, INVENTORY_FILENAME) # decide whether the inventory must be read: always read local # files; remote ones only if the cache time is expired if '://' not in inv or uri not in cache \ or cache[uri][1] < cache_time: app.info('loading intersphinx inventory from %s...' % inv) invdata = fetch_inventory(app, uri, inv) if invdata: cache[uri] = (name, now, invdata) update = True break if update: env.intersphinx_inventory = {} env.intersphinx_named_inventory = {} # Duplicate values in different inventories will shadow each # other; which one will override which can vary between builds # since they are specified using an unordered dict. To make # it more consistent, we sort the named inventories and then # add the unnamed inventories last. This means that the # unnamed inventories will shadow the named ones but the named # ones can still be accessed when the name is specified. cached_vals = list(cache.values()) named_vals = sorted(v for v in cached_vals if v[0]) unnamed_vals = [v for v in cached_vals if not v[0]] for name, _x, invdata in named_vals + unnamed_vals: if name: env.intersphinx_named_inventory[name] = invdata for type, objects in iteritems(invdata): env.intersphinx_inventory.setdefault( type, {}).update(objects) def missing_reference(app, env, node, contnode): """Attempt to resolve a missing reference via intersphinx references.""" target = node['reftarget'] if node['reftype'] == 'any': # we search anything! 
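        # an :any: xref carries no domain information, so the candidate keys
        # span every object type of every domain, using the same
        # '<domain>:<objtype>' form under which the inventories are stored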
objtypes = ['%s:%s' % (domain.name, objtype) for domain in env.domains.values() for objtype in domain.object_types] domain = None else: domain = node.get('refdomain') if not domain: # only objects in domains are in the inventory return objtypes = env.domains[domain].objtypes_for_role(node['reftype']) if not objtypes: return objtypes = ['%s:%s' % (domain, objtype) for objtype in objtypes] to_try = [(env.intersphinx_inventory, target)] in_set = None if ':' in target: # first part may be the foreign doc set name setname, newtarget = target.split(':', 1) if setname in env.intersphinx_named_inventory: in_set = setname to_try.append((env.intersphinx_named_inventory[setname], newtarget)) for inventory, target in to_try: for objtype in objtypes: if objtype not in inventory or target not in inventory[objtype]: continue proj, version, uri, dispname = inventory[objtype][target] if '://' not in uri and node.get('refdoc'): # get correct path in case of subdirectories uri = path.join(relative_path(node['refdoc'], env.srcdir), uri) newnode = nodes.reference('', '', internal=False, refuri=uri, reftitle=_('(in %s v%s)') % (proj, version)) if node.get('refexplicit'): # use whatever title was given newnode.append(contnode) elif dispname == '-' or \ (domain == 'std' and node['reftype'] == 'keyword'): # use whatever title was given, but strip prefix title = contnode.astext() if in_set and title.startswith(in_set+':'): newnode.append(contnode.__class__(title[len(in_set)+1:], title[len(in_set)+1:])) else: newnode.append(contnode) else: # else use the given display name (used for :ref:) newnode.append(contnode.__class__(dispname, dispname)) return newnode # at least get rid of the ':' in the target if no explicit title given if in_set is not None and not node.get('refexplicit', True): if len(contnode) and isinstance(contnode[0], nodes.Text): contnode[0] = nodes.Text(newtarget, contnode[0].rawsource) def setup(app): app.add_config_value('intersphinx_mapping', {}, True) app.add_config_value('intersphinx_cache_limit', 5, False) app.connect('missing-reference', missing_reference) app.connect('builder-inited', load_mappings) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/autodoc.py0000664000175000017500000016467112665045077020276 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.autodoc ~~~~~~~~~~~~~~~~~~ Automatically insert docstrings for functions, classes or whole modules into the doctree, thus avoiding duplication between docstrings and documentation for those who like elaborate docstrings. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. 
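    A typical use from a reST source file (``mypackage.mymodule`` is an
    illustrative name, not a real module)::

        .. automodule:: mypackage.mymodule
           :members:
           :undoc-members:
           :show-inheritance: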
""" import re import sys import inspect import traceback from types import FunctionType, BuiltinFunctionType, MethodType from six import PY2, iterkeys, iteritems, itervalues, text_type, class_types, \ string_types from docutils import nodes from docutils.utils import assemble_option_dict from docutils.statemachine import ViewList import sphinx from sphinx.util import rpartition, force_decode from sphinx.locale import _ from sphinx.pycode import ModuleAnalyzer, PycodeError from sphinx.application import ExtensionError from sphinx.util.nodes import nested_parse_with_titles from sphinx.util.compat import Directive from sphinx.util.inspect import getargspec, isdescriptor, safe_getmembers, \ safe_getattr, object_description, is_builtin_class_method from sphinx.util.docstrings import prepare_docstring #: extended signature RE: with explicit module name separated by :: py_ext_sig_re = re.compile( r'''^ ([\w.]+::)? # explicit module name ([\w.]+\.)? # module and/or class name(s) (\w+) \s* # thing name (?: \((.*)\) # optional: arguments (?:\s* -> \s* (.*))? # return annotation )? $ # and nothing more ''', re.VERBOSE) class DefDict(dict): """A dict that returns a default on nonexisting keys.""" def __init__(self, default): dict.__init__(self) self.default = default def __getitem__(self, key): try: return dict.__getitem__(self, key) except KeyError: return self.default def __bool__(self): # docutils check "if option_spec" return True __nonzero__ = __bool__ # for python2 compatibility def identity(x): return x class Options(dict): """A dict/attribute hybrid that returns None on nonexisting keys.""" def __getattr__(self, name): try: return self[name.replace('_', '-')] except KeyError: return None class _MockModule(object): """Used by autodoc_mock_imports.""" def __init__(self, *args, **kwargs): pass def __call__(self, *args, **kwargs): return _MockModule() @classmethod def __getattr__(cls, name): if name in ('__file__', '__path__'): return '/dev/null' elif name[0] == name[0].upper(): # Not very good, we assume Uppercase names are classes... mocktype = type(name, (), {}) mocktype.__module__ = __name__ return mocktype else: return _MockModule() def mock_import(modname): if '.' in modname: pkg, _n, mods = modname.rpartition('.') mock_import(pkg) mod = _MockModule() sys.modules[modname] = mod return mod ALL = object() INSTANCEATTR = object() def members_option(arg): """Used to convert the :members: option to auto directives.""" if arg is None: return ALL return [x.strip() for x in arg.split(',')] def members_set_option(arg): """Used to convert the :members: option to auto directives.""" if arg is None: return ALL return set(x.strip() for x in arg.split(',')) SUPPRESS = object() def annotation_option(arg): if arg is None: # suppress showing the representation of the object return SUPPRESS else: return arg def bool_option(arg): """Used to convert flag options to auto directives. (Instead of directives.flag(), which returns None). """ return True class AutodocReporter(object): """ A reporter replacement that assigns the correct source name and line number to a system message, as recorded in a ViewList. 
""" def __init__(self, viewlist, reporter): self.viewlist = viewlist self.reporter = reporter def __getattr__(self, name): return getattr(self.reporter, name) def system_message(self, level, message, *children, **kwargs): if 'line' in kwargs and 'source' not in kwargs: try: source, line = self.viewlist.items[kwargs['line']] except IndexError: pass else: kwargs['source'] = source kwargs['line'] = line return self.reporter.system_message(level, message, *children, **kwargs) def debug(self, *args, **kwargs): if self.reporter.debug_flag: return self.system_message(0, *args, **kwargs) def info(self, *args, **kwargs): return self.system_message(1, *args, **kwargs) def warning(self, *args, **kwargs): return self.system_message(2, *args, **kwargs) def error(self, *args, **kwargs): return self.system_message(3, *args, **kwargs) def severe(self, *args, **kwargs): return self.system_message(4, *args, **kwargs) # Some useful event listener factories for autodoc-process-docstring. def cut_lines(pre, post=0, what=None): """Return a listener that removes the first *pre* and last *post* lines of every docstring. If *what* is a sequence of strings, only docstrings of a type in *what* will be processed. Use like this (e.g. in the ``setup()`` function of :file:`conf.py`):: from sphinx.ext.autodoc import cut_lines app.connect('autodoc-process-docstring', cut_lines(4, what=['module'])) This can (and should) be used in place of :confval:`automodule_skip_lines`. """ def process(app, what_, name, obj, options, lines): if what and what_ not in what: return del lines[:pre] if post: # remove one trailing blank line. if lines and not lines[-1]: lines.pop(-1) del lines[-post:] # make sure there is a blank line at the end if lines and lines[-1]: lines.append('') return process def between(marker, what=None, keepempty=False, exclude=False): """Return a listener that either keeps, or if *exclude* is True excludes, lines between lines that match the *marker* regular expression. If no line matches, the resulting docstring would be empty, so no change will be made unless *keepempty* is true. If *what* is a sequence of strings, only docstrings of a type in *what* will be processed. """ marker_re = re.compile(marker) def process(app, what_, name, obj, options, lines): if what and what_ not in what: return deleted = 0 delete = not exclude orig_lines = lines[:] for i, line in enumerate(orig_lines): if delete: lines.pop(i - deleted) deleted += 1 if marker_re.match(line): delete = not delete if delete: lines.pop(i - deleted) deleted += 1 if not lines and not keepempty: lines[:] = orig_lines # make sure there is a blank line at the end if lines and lines[-1]: lines.append('') return process def formatargspec(*argspec): return inspect.formatargspec(*argspec, formatvalue=lambda x: '=' + object_description(x)) class Documenter(object): """ A Documenter knows how to autodocument a single object type. When registered with the AutoDirective, it will be used to document objects of that type when needed by autodoc. Its *objtype* attribute selects what auto directive it is assigned to (the directive name is 'auto' + objtype), and what directive it generates by default, though that can be overridden by an attribute called *directivetype*. A Documenter has an *option_spec* that works like a docutils directive's; in fact, it will be used to parse an auto directive's options that matches the documenter. """ #: name by which the directive is called (auto...) 
and the default #: generated directive name objtype = 'object' #: indentation by which to indent the directive content content_indent = u' ' #: priority if multiple documenters return True from can_document_member priority = 0 #: order if autodoc_member_order is set to 'groupwise' member_order = 0 #: true if the generated content may contain titles titles_allowed = False option_spec = {'noindex': bool_option} @staticmethod def get_attr(obj, name, *defargs): """getattr() override for types such as Zope interfaces.""" for typ, func in iteritems(AutoDirective._special_attrgetters): if isinstance(obj, typ): return func(obj, name, *defargs) return safe_getattr(obj, name, *defargs) @classmethod def can_document_member(cls, member, membername, isattr, parent): """Called to see if a member can be documented by this documenter.""" raise NotImplementedError('must be implemented in subclasses') def __init__(self, directive, name, indent=u''): self.directive = directive self.env = directive.env self.options = directive.genopt self.name = name self.indent = indent # the module and object path within the module, and the fully # qualified name (all set after resolve_name succeeds) self.modname = None self.module = None self.objpath = None self.fullname = None # extra signature items (arguments and return annotation, # also set after resolve_name succeeds) self.args = None self.retann = None # the object to document (set after import_object succeeds) self.object = None self.object_name = None # the parent/owner of the object to document self.parent = None # the module analyzer to get at attribute docs, or None self.analyzer = None def add_line(self, line, source, *lineno): """Append one line of generated reST to the output.""" self.directive.result.append(self.indent + line, source, *lineno) def resolve_name(self, modname, parents, path, base): """Resolve the module and name of the object to document given by the arguments and the current module/class. Must return a pair of the module name and a chain of attributes; for example, it would return ``('zipfile', ['ZipFile', 'open'])`` for the ``zipfile.ZipFile.open`` method. """ raise NotImplementedError('must be implemented in subclasses') def parse_name(self): """Determine what module to import and what attribute to document. Returns True and sets *self.modname*, *self.objpath*, *self.fullname*, *self.args* and *self.retann* if parsing and resolving was successful. """ # first, parse the definition -- auto directives for classes and # functions can contain a signature which is then used instead of # an autogenerated one try: explicit_modname, path, base, args, retann = \ py_ext_sig_re.match(self.name).groups() except AttributeError: self.directive.warn('invalid signature for auto%s (%r)' % (self.objtype, self.name)) return False # support explicit module and class name separation via :: if explicit_modname is not None: modname = explicit_modname[:-2] parents = path and path.rstrip('.').split('.') or [] else: modname = None parents = [] self.modname, self.objpath = \ self.resolve_name(modname, parents, path, base) if not self.modname: return False self.args = args self.retann = retann self.fullname = (self.modname or '') + \ (self.objpath and '.' + '.'.join(self.objpath) or '') return True def import_object(self): """Import the object given by *self.modname* and *self.objpath* and set it as *self.object*. Returns True if successful, False if an error occurred. 
""" dbg = self.env.app.debug if self.objpath: dbg('[autodoc] from %s import %s', self.modname, '.'.join(self.objpath)) try: dbg('[autodoc] import %s', self.modname) for modname in self.env.config.autodoc_mock_imports: dbg('[autodoc] adding a mock module %s!', self.modname) mock_import(modname) __import__(self.modname) parent = None obj = self.module = sys.modules[self.modname] dbg('[autodoc] => %r', obj) for part in self.objpath: parent = obj dbg('[autodoc] getattr(_, %r)', part) obj = self.get_attr(obj, part) dbg('[autodoc] => %r', obj) self.object_name = part self.parent = parent self.object = obj return True # this used to only catch SyntaxError, ImportError and AttributeError, # but importing modules with side effects can raise all kinds of errors except (Exception, SystemExit) as e: if self.objpath: errmsg = 'autodoc: failed to import %s %r from module %r' % \ (self.objtype, '.'.join(self.objpath), self.modname) else: errmsg = 'autodoc: failed to import %s %r' % \ (self.objtype, self.fullname) if isinstance(e, SystemExit): errmsg += ('; the module executes module level statement ' + 'and it might call sys.exit().') else: errmsg += '; the following exception was raised:\n%s' % \ traceback.format_exc() if PY2: errmsg = errmsg.decode('utf-8') dbg(errmsg) self.directive.warn(errmsg) self.env.note_reread() return False def get_real_modname(self): """Get the real module name of an object to document. It can differ from the name of the module through which the object was imported. """ return self.get_attr(self.object, '__module__', None) or self.modname def check_module(self): """Check if *self.object* is really defined in the module given by *self.modname*. """ if self.options.imported_members: return True modname = self.get_attr(self.object, '__module__', None) if modname and modname != self.modname: return False return True def format_args(self): """Format the argument signature of *self.object*. Should return None if the object does not have a signature. """ return None def format_name(self): """Format the name of *self.object*. This normally should be something that can be parsed by the generated directive, but doesn't need to be (Sphinx will display it unparsed then). """ # normally the name doesn't contain the module (except for module # directives of course) return '.'.join(self.objpath) or self.modname def format_signature(self): """Format the signature (arguments and return annotation) of the object. Let the user process it via the ``autodoc-process-signature`` event. """ if self.args is not None: # signature given explicitly args = "(%s)" % self.args else: # try to introspect the signature try: args = self.format_args() except Exception as err: self.directive.warn('error while formatting arguments for ' '%s: %s' % (self.fullname, err)) args = None retann = self.retann result = self.env.app.emit_firstresult( 'autodoc-process-signature', self.objtype, self.fullname, self.object, self.options, args, retann) if result: args, retann = result if args is not None: return args + (retann and (' -> %s' % retann) or '') else: return '' def add_directive_header(self, sig): """Add the directive header and options to the generated content.""" domain = getattr(self, 'domain', 'py') directive = getattr(self, 'directivetype', self.objtype) name = self.format_name() sourcename = self.get_sourcename() self.add_line(u'.. 
%s:%s:: %s%s' % (domain, directive, name, sig), sourcename) if self.options.noindex: self.add_line(u' :noindex:', sourcename) if self.objpath: # Be explicit about the module, this is necessary since .. class:: # etc. don't support a prepended module name self.add_line(u' :module: %s' % self.modname, sourcename) def get_doc(self, encoding=None, ignore=1): """Decode and return lines of the docstring(s) for the object.""" docstring = self.get_attr(self.object, '__doc__', None) # make sure we have Unicode docstrings, then sanitize and split # into lines if isinstance(docstring, text_type): return [prepare_docstring(docstring, ignore)] elif isinstance(docstring, str): # this will not trigger on Py3 return [prepare_docstring(force_decode(docstring, encoding), ignore)] # ... else it is something strange, let's ignore it return [] def process_doc(self, docstrings): """Let the user process the docstrings before adding them.""" for docstringlines in docstrings: if self.env.app: # let extensions preprocess docstrings self.env.app.emit('autodoc-process-docstring', self.objtype, self.fullname, self.object, self.options, docstringlines) for line in docstringlines: yield line def get_sourcename(self): if self.analyzer: # prevent encoding errors when the file name is non-ASCII if not isinstance(self.analyzer.srcname, text_type): filename = text_type(self.analyzer.srcname, sys.getfilesystemencoding(), 'replace') else: filename = self.analyzer.srcname return u'%s:docstring of %s' % (filename, self.fullname) return u'docstring of %s' % self.fullname def add_content(self, more_content, no_docstring=False): """Add content from docstrings, attribute documentation and user.""" # set sourcename and add content from attribute documentation sourcename = self.get_sourcename() if self.analyzer: attr_docs = self.analyzer.find_attr_docs() if self.objpath: key = ('.'.join(self.objpath[:-1]), self.objpath[-1]) if key in attr_docs: no_docstring = True docstrings = [attr_docs[key]] for i, line in enumerate(self.process_doc(docstrings)): self.add_line(line, sourcename, i) # add content from docstrings if not no_docstring: encoding = self.analyzer and self.analyzer.encoding docstrings = self.get_doc(encoding) if not docstrings: # append at least a dummy docstring, so that the event # autodoc-process-docstring is fired and can add some # content if desired docstrings.append([]) for i, line in enumerate(self.process_doc(docstrings)): self.add_line(line, sourcename, i) # add additional content (e.g. from document), if present if more_content: for line, src in zip(more_content.data, more_content.items): self.add_line(line, src[0], src[1]) def get_object_members(self, want_all): """Return `(members_check_module, members)` where `members` is a list of `(membername, member)` pairs of the members of *self.object*. If *want_all* is True, return all members. Else, only return those members given by *self.options.members* (which may also be none). 
""" analyzed_member_names = set() if self.analyzer: attr_docs = self.analyzer.find_attr_docs() namespace = '.'.join(self.objpath) for item in iteritems(attr_docs): if item[0][0] == namespace: analyzed_member_names.add(item[0][1]) if not want_all: if not self.options.members: return False, [] # specific members given members = [] for mname in self.options.members: try: members.append((mname, self.get_attr(self.object, mname))) except AttributeError: if mname not in analyzed_member_names: self.directive.warn('missing attribute %s in object %s' % (mname, self.fullname)) elif self.options.inherited_members: # safe_getmembers() uses dir() which pulls in members from all # base classes members = safe_getmembers(self.object, attr_getter=self.get_attr) else: # __dict__ contains only the members directly defined in # the class (but get them via getattr anyway, to e.g. get # unbound method objects instead of function objects); # using list(iterkeys()) because apparently there are objects for which # __dict__ changes while getting attributes try: obj_dict = self.get_attr(self.object, '__dict__') except AttributeError: members = [] else: members = [(mname, self.get_attr(self.object, mname, None)) for mname in list(iterkeys(obj_dict))] membernames = set(m[0] for m in members) # add instance attributes from the analyzer for aname in analyzed_member_names: if aname not in membernames and \ (want_all or aname in self.options.members): members.append((aname, INSTANCEATTR)) return False, sorted(members) def filter_members(self, members, want_all): """Filter the given member list. Members are skipped if - they are private (except if given explicitly or the private-members option is set) - they are special methods (except if given explicitly or the special-members option is set) - they are undocumented (except if the undoc-members option is set) The user can override the skipping decision by connecting to the ``autodoc-skip-member`` event. 
""" ret = [] # search for members in source code too namespace = '.'.join(self.objpath) # will be empty for modules if self.analyzer: attr_docs = self.analyzer.find_attr_docs() else: attr_docs = {} # process members and determine which to skip for (membername, member) in members: # if isattr is True, the member is documented as an attribute isattr = False doc = self.get_attr(member, '__doc__', None) # if the member __doc__ is the same as self's __doc__, it's just # inherited and therefore not the member's doc cls = self.get_attr(member, '__class__', None) if cls: cls_doc = self.get_attr(cls, '__doc__', None) if cls_doc == doc: doc = None has_doc = bool(doc) keep = False if want_all and membername.startswith('__') and \ membername.endswith('__') and len(membername) > 4: # special __methods__ if self.options.special_members is ALL and \ membername != '__doc__': keep = has_doc or self.options.undoc_members elif self.options.special_members and \ self.options.special_members is not ALL and \ membername in self.options.special_members: keep = has_doc or self.options.undoc_members elif want_all and membername.startswith('_'): # ignore members whose name starts with _ by default keep = self.options.private_members and \ (has_doc or self.options.undoc_members) elif (namespace, membername) in attr_docs: # keep documented attributes keep = True isattr = True else: # ignore undocumented members if :undoc-members: is not given keep = has_doc or self.options.undoc_members # give the user a chance to decide whether this member # should be skipped if self.env.app: # let extensions preprocess docstrings skip_user = self.env.app.emit_firstresult( 'autodoc-skip-member', self.objtype, membername, member, not keep, self.options) if skip_user is not None: keep = not skip_user if keep: ret.append((membername, member, isattr)) return ret def document_members(self, all_members=False): """Generate reST for member documentation. If *all_members* is True, do all members, else those given by *self.options.members*. 
""" # set current namespace for finding members self.env.temp_data['autodoc:module'] = self.modname if self.objpath: self.env.temp_data['autodoc:class'] = self.objpath[0] want_all = all_members or self.options.inherited_members or \ self.options.members is ALL # find out which members are documentable members_check_module, members = self.get_object_members(want_all) # remove members given by exclude-members if self.options.exclude_members: members = [(membername, member) for (membername, member) in members if membername not in self.options.exclude_members] # document non-skipped members memberdocumenters = [] for (mname, member, isattr) in self.filter_members(members, want_all): classes = [cls for cls in itervalues(AutoDirective._registry) if cls.can_document_member(member, mname, isattr, self)] if not classes: # don't know how to document this member continue # prefer the documenter with the highest priority classes.sort(key=lambda cls: cls.priority) # give explicitly separated module name, so that members # of inner classes can be documented full_mname = self.modname + '::' + \ '.'.join(self.objpath + [mname]) documenter = classes[-1](self.directive, full_mname, self.indent) memberdocumenters.append((documenter, isattr)) member_order = self.options.member_order or \ self.env.config.autodoc_member_order if member_order == 'groupwise': # sort by group; relies on stable sort to keep items in the # same group sorted alphabetically memberdocumenters.sort(key=lambda e: e[0].member_order) elif member_order == 'bysource' and self.analyzer: # sort by source order, by virtue of the module analyzer tagorder = self.analyzer.tagorder def keyfunc(entry): fullname = entry[0].name.split('::')[1] return tagorder.get(fullname, len(tagorder)) memberdocumenters.sort(key=keyfunc) for documenter, isattr in memberdocumenters: documenter.generate( all_members=True, real_modname=self.real_modname, check_module=members_check_module and not isattr) # reset current objects self.env.temp_data['autodoc:module'] = None self.env.temp_data['autodoc:class'] = None def generate(self, more_content=None, real_modname=None, check_module=False, all_members=False): """Generate reST for the object given by *self.name*, and possibly for its members. If *more_content* is given, include that content. If *real_modname* is given, use that module name to find attribute docs. If *check_module* is True, only generate if the object is defined in the module name it is imported from. If *all_members* is True, document all members. """ if not self.parse_name(): # need a module to import self.directive.warn( 'don\'t know which module to import for autodocumenting ' '%r (try placing a "module" or "currentmodule" directive ' 'in the document, or giving an explicit module name)' % self.name) return # now, import the module and get object to document if not self.import_object(): return # If there is no real module defined, figure out which to use. # The real module is used in the module analyzer to look up the module # where the attribute documentation would actually be found in. # This is used for situations where you have a module that collects the # functions and classes of internal submodules. 
self.real_modname = real_modname or self.get_real_modname() # try to also get a source code analyzer for attribute docs try: self.analyzer = ModuleAnalyzer.for_module(self.real_modname) # parse right now, to get PycodeErrors on parsing (results will # be cached anyway) self.analyzer.find_attr_docs() except PycodeError as err: self.env.app.debug('[autodoc] module analyzer failed: %s', err) # no source file -- e.g. for builtin and C modules self.analyzer = None # at least add the module.__file__ as a dependency if hasattr(self.module, '__file__') and self.module.__file__: self.directive.filename_set.add(self.module.__file__) else: self.directive.filename_set.add(self.analyzer.srcname) # check __module__ of object (for members not given explicitly) if check_module: if not self.check_module(): return sourcename = self.get_sourcename() # make sure that the result starts with an empty line. This is # necessary for some situations where another directive preprocesses # reST and no starting newline is present self.add_line(u'', sourcename) # format the object's signature, if any sig = self.format_signature() # generate the directive header and options, if applicable self.add_directive_header(sig) self.add_line(u'', sourcename) # e.g. the module directive doesn't have content self.indent += self.content_indent # add all content (from docstrings, attribute docs etc.) self.add_content(more_content) # document members, if possible self.document_members(all_members) class ModuleDocumenter(Documenter): """ Specialized Documenter subclass for modules. """ objtype = 'module' content_indent = u'' titles_allowed = True option_spec = { 'members': members_option, 'undoc-members': bool_option, 'noindex': bool_option, 'inherited-members': bool_option, 'show-inheritance': bool_option, 'synopsis': identity, 'platform': identity, 'deprecated': bool_option, 'member-order': identity, 'exclude-members': members_set_option, 'private-members': bool_option, 'special-members': members_option, 'imported-members': bool_option, } @classmethod def can_document_member(cls, member, membername, isattr, parent): # don't document submodules automatically return False def resolve_name(self, modname, parents, path, base): if modname is not None: self.directive.warn('"::" in automodule name doesn\'t make sense') return (path or '') + base, [] def parse_name(self): ret = Documenter.parse_name(self) if self.args or self.retann: self.directive.warn('signature arguments or return annotation ' 'given for automodule %s' % self.fullname) return ret def add_directive_header(self, sig): Documenter.add_directive_header(self, sig) sourcename = self.get_sourcename() # add some module-specific options if self.options.synopsis: self.add_line( u' :synopsis: ' + self.options.synopsis, sourcename) if self.options.platform: self.add_line( u' :platform: ' + self.options.platform, sourcename) if self.options.deprecated: self.add_line(u' :deprecated:', sourcename) def get_object_members(self, want_all): if want_all: if not hasattr(self.object, '__all__'): # for implicit module members, check __module__ to avoid # documenting imported objects return True, safe_getmembers(self.object) else: memberlist = self.object.__all__ # Sometimes __all__ is broken... 
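                # e.g. ``__all__ = ('spam', 42)`` or a generator expression:
                # anything that is not a list or tuple of strings triggers the
                # warning below and the module falls back to documenting
                # whatever safe_getmembers() finds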
if not isinstance(memberlist, (list, tuple)) or not \ all(isinstance(entry, string_types) for entry in memberlist): self.directive.warn( '__all__ should be a list of strings, not %r ' '(in module %s) -- ignoring __all__' % (memberlist, self.fullname)) # fall back to all members return True, safe_getmembers(self.object) else: memberlist = self.options.members or [] ret = [] for mname in memberlist: try: ret.append((mname, safe_getattr(self.object, mname))) except AttributeError: self.directive.warn( 'missing attribute mentioned in :members: or __all__: ' 'module %s, attribute %s' % ( safe_getattr(self.object, '__name__', '???'), mname)) return False, ret class ModuleLevelDocumenter(Documenter): """ Specialized Documenter subclass for objects on module level (functions, classes, data/constants). """ def resolve_name(self, modname, parents, path, base): if modname is None: if path: modname = path.rstrip('.') else: # if documenting a toplevel object without explicit module, # it can be contained in another auto directive ... modname = self.env.temp_data.get('autodoc:module') # ... or in the scope of a module directive if not modname: modname = self.env.ref_context.get('py:module') # ... else, it stays None, which means invalid return modname, parents + [base] class ClassLevelDocumenter(Documenter): """ Specialized Documenter subclass for objects on class level (methods, attributes). """ def resolve_name(self, modname, parents, path, base): if modname is None: if path: mod_cls = path.rstrip('.') else: mod_cls = None # if documenting a class-level object without path, # there must be a current class, either from a parent # auto directive ... mod_cls = self.env.temp_data.get('autodoc:class') # ... or from a class directive if mod_cls is None: mod_cls = self.env.ref_context.get('py:class') # ... if still None, there's no way to know if mod_cls is None: return None, [] modname, cls = rpartition(mod_cls, '.') parents = [cls] # if the module name is still missing, get it like above if not modname: modname = self.env.temp_data.get('autodoc:module') if not modname: modname = self.env.ref_context.get('py:module') # ... else, it stays None, which means invalid return modname, parents + [base] class DocstringSignatureMixin(object): """ Mixin for FunctionDocumenter and MethodDocumenter to provide the feature of reading the signature from the docstring. 
""" def _find_signature(self, encoding=None): docstrings = self.get_doc(encoding) self._new_docstrings = docstrings[:] result = None for i, doclines in enumerate(docstrings): # no lines in docstring, no match if not doclines: continue # match first line of docstring against signature RE match = py_ext_sig_re.match(doclines[0]) if not match: continue exmod, path, base, args, retann = match.groups() # the base name must match ours valid_names = [self.objpath[-1]] if isinstance(self, ClassDocumenter): valid_names.append('__init__') if hasattr(self.object, '__mro__'): valid_names.extend(cls.__name__ for cls in self.object.__mro__) if base not in valid_names: continue # re-prepare docstring to ignore more leading indentation self._new_docstrings[i] = prepare_docstring('\n'.join(doclines[1:])) result = args, retann # don't look any further break return result def get_doc(self, encoding=None, ignore=1): lines = getattr(self, '_new_docstrings', None) if lines is not None: return lines return Documenter.get_doc(self, encoding, ignore) def format_signature(self): if self.args is None and self.env.config.autodoc_docstring_signature: # only act if a signature is not explicitly given already, and if # the feature is enabled result = self._find_signature() if result is not None: self.args, self.retann = result return Documenter.format_signature(self) class DocstringStripSignatureMixin(DocstringSignatureMixin): """ Mixin for AttributeDocumenter to provide the feature of stripping any function signature from the docstring. """ def format_signature(self): if self.args is None and self.env.config.autodoc_docstring_signature: # only act if a signature is not explicitly given already, and if # the feature is enabled result = self._find_signature() if result is not None: # Discarding _args is a only difference with # DocstringSignatureMixin.format_signature. # Documenter.format_signature use self.args value to format. _args, self.retann = result return Documenter.format_signature(self) class FunctionDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter): """ Specialized Documenter subclass for functions. """ objtype = 'function' member_order = 30 @classmethod def can_document_member(cls, member, membername, isattr, parent): return isinstance(member, (FunctionType, BuiltinFunctionType)) def format_args(self): if inspect.isbuiltin(self.object) or \ inspect.ismethoddescriptor(self.object): # cannot introspect arguments of a C function or method return None try: argspec = getargspec(self.object) except TypeError: if (is_builtin_class_method(self.object, '__new__') and is_builtin_class_method(self.object, '__init__')): raise TypeError('%r is a builtin class' % self.object) # if a class should be documented as function (yay duck # typing) we try to use the constructor signature as function # signature without the first argument. try: argspec = getargspec(self.object.__new__) except TypeError: argspec = getargspec(self.object.__init__) if argspec[0]: del argspec[0][0] args = formatargspec(*argspec) # escape backslashes for reST args = args.replace('\\', '\\\\') return args def document_members(self, all_members=False): pass class ClassDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter): """ Specialized Documenter subclass for classes. 
""" objtype = 'class' member_order = 20 option_spec = { 'members': members_option, 'undoc-members': bool_option, 'noindex': bool_option, 'inherited-members': bool_option, 'show-inheritance': bool_option, 'member-order': identity, 'exclude-members': members_set_option, 'private-members': bool_option, 'special-members': members_option, } @classmethod def can_document_member(cls, member, membername, isattr, parent): return isinstance(member, class_types) def import_object(self): ret = ModuleLevelDocumenter.import_object(self) # if the class is documented under another name, document it # as data/attribute if ret: if hasattr(self.object, '__name__'): self.doc_as_attr = (self.objpath[-1] != self.object.__name__) else: self.doc_as_attr = True return ret def format_args(self): # for classes, the relevant signature is the __init__ method's initmeth = self.get_attr(self.object, '__init__', None) # classes without __init__ method, default __init__ or # __init__ written in C? if initmeth is None or \ is_builtin_class_method(self.object, '__init__') or \ not(inspect.ismethod(initmeth) or inspect.isfunction(initmeth)): return None try: argspec = getargspec(initmeth) except TypeError: # still not possible: happens e.g. for old-style classes # with __init__ in C return None if argspec[0] and argspec[0][0] in ('cls', 'self'): del argspec[0][0] return formatargspec(*argspec) def format_signature(self): if self.doc_as_attr: return '' return DocstringSignatureMixin.format_signature(self) def add_directive_header(self, sig): if self.doc_as_attr: self.directivetype = 'attribute' Documenter.add_directive_header(self, sig) # add inheritance info, if wanted if not self.doc_as_attr and self.options.show_inheritance: sourcename = self.get_sourcename() self.add_line(u'', sourcename) if hasattr(self.object, '__bases__') and len(self.object.__bases__): bases = [b.__module__ in ('__builtin__', 'builtins') and u':class:`%s`' % b.__name__ or u':class:`%s.%s`' % (b.__module__, b.__name__) for b in self.object.__bases__] self.add_line(_(u' Bases: %s') % ', '.join(bases), sourcename) def get_doc(self, encoding=None, ignore=1): lines = getattr(self, '_new_docstrings', None) if lines is not None: return lines content = self.env.config.autoclass_content docstrings = [] attrdocstring = self.get_attr(self.object, '__doc__', None) if attrdocstring: docstrings.append(attrdocstring) # for classes, what the "docstring" is can be controlled via a # config value; the default is only the class docstring if content in ('both', 'init'): initdocstring = self.get_attr( self.get_attr(self.object, '__init__', None), '__doc__') # for new-style classes, no __init__ means default __init__ if (initdocstring is not None and (initdocstring == object.__init__.__doc__ or # for pypy initdocstring.strip() == object.__init__.__doc__)): # for !pypy initdocstring = None if initdocstring: if content == 'init': docstrings = [initdocstring] else: docstrings.append(initdocstring) doc = [] for docstring in docstrings: if isinstance(docstring, text_type): doc.append(prepare_docstring(docstring, ignore)) elif isinstance(docstring, str): # this will not trigger on Py3 doc.append(prepare_docstring(force_decode(docstring, encoding), ignore)) return doc def add_content(self, more_content, no_docstring=False): if self.doc_as_attr: classname = safe_getattr(self.object, '__name__', None) if classname: content = ViewList( [_('alias of :class:`%s`') % classname], source='') ModuleLevelDocumenter.add_content(self, content, no_docstring=True) else: 
ModuleLevelDocumenter.add_content(self, more_content) def document_members(self, all_members=False): if self.doc_as_attr: return ModuleLevelDocumenter.document_members(self, all_members) class ExceptionDocumenter(ClassDocumenter): """ Specialized ClassDocumenter subclass for exceptions. """ objtype = 'exception' member_order = 10 # needs a higher priority than ClassDocumenter priority = 10 @classmethod def can_document_member(cls, member, membername, isattr, parent): return isinstance(member, class_types) and \ issubclass(member, BaseException) class DataDocumenter(ModuleLevelDocumenter): """ Specialized Documenter subclass for data items. """ objtype = 'data' member_order = 40 priority = -10 option_spec = dict(ModuleLevelDocumenter.option_spec) option_spec["annotation"] = annotation_option @classmethod def can_document_member(cls, member, membername, isattr, parent): return isinstance(parent, ModuleDocumenter) and isattr def add_directive_header(self, sig): ModuleLevelDocumenter.add_directive_header(self, sig) sourcename = self.get_sourcename() if not self.options.annotation: try: objrepr = object_description(self.object) except ValueError: pass else: self.add_line(u' :annotation: = ' + objrepr, sourcename) elif self.options.annotation is SUPPRESS: pass else: self.add_line(u' :annotation: %s' % self.options.annotation, sourcename) def document_members(self, all_members=False): pass class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter): """ Specialized Documenter subclass for methods (normal, static and class). """ objtype = 'method' member_order = 50 priority = 1 # must be more than FunctionDocumenter @classmethod def can_document_member(cls, member, membername, isattr, parent): return inspect.isroutine(member) and \ not isinstance(parent, ModuleDocumenter) def import_object(self): ret = ClassLevelDocumenter.import_object(self) if not ret: return ret # to distinguish classmethod/staticmethod obj = self.parent.__dict__.get(self.object_name) if isinstance(obj, classmethod): self.directivetype = 'classmethod' # document class and static members before ordinary ones self.member_order = self.member_order - 1 elif isinstance(obj, staticmethod): self.directivetype = 'staticmethod' # document class and static members before ordinary ones self.member_order = self.member_order - 1 else: self.directivetype = 'method' return ret def format_args(self): if inspect.isbuiltin(self.object) or \ inspect.ismethoddescriptor(self.object): # can never get arguments of a C function or method return None argspec = getargspec(self.object) if argspec[0] and argspec[0][0] in ('cls', 'self'): del argspec[0][0] args = formatargspec(*argspec) # escape backslashes for reST args = args.replace('\\', '\\\\') return args def document_members(self, all_members=False): pass class AttributeDocumenter(DocstringStripSignatureMixin, ClassLevelDocumenter): """ Specialized Documenter subclass for attributes. 
""" objtype = 'attribute' member_order = 60 option_spec = dict(ModuleLevelDocumenter.option_spec) option_spec["annotation"] = annotation_option # must be higher than the MethodDocumenter, else it will recognize # some non-data descriptors as methods priority = 10 method_types = (FunctionType, BuiltinFunctionType, MethodType) @classmethod def can_document_member(cls, member, membername, isattr, parent): isdatadesc = isdescriptor(member) and not \ isinstance(member, cls.method_types) and not \ type(member).__name__ in ("type", "method_descriptor", "instancemethod") return isdatadesc or (not isinstance(parent, ModuleDocumenter) and not inspect.isroutine(member) and not isinstance(member, class_types)) def document_members(self, all_members=False): pass def import_object(self): ret = ClassLevelDocumenter.import_object(self) if isdescriptor(self.object) and \ not isinstance(self.object, self.method_types): self._datadescriptor = True else: # if it's not a data descriptor self._datadescriptor = False return ret def get_real_modname(self): return self.get_attr(self.parent or self.object, '__module__', None) \ or self.modname def add_directive_header(self, sig): ClassLevelDocumenter.add_directive_header(self, sig) sourcename = self.get_sourcename() if not self.options.annotation: if not self._datadescriptor: try: objrepr = object_description(self.object) except ValueError: pass else: self.add_line(u' :annotation: = ' + objrepr, sourcename) elif self.options.annotation is SUPPRESS: pass else: self.add_line(u' :annotation: %s' % self.options.annotation, sourcename) def add_content(self, more_content, no_docstring=False): if not self._datadescriptor: # if it's not a data descriptor, its docstring is very probably the # wrong thing to display no_docstring = True ClassLevelDocumenter.add_content(self, more_content, no_docstring) class InstanceAttributeDocumenter(AttributeDocumenter): """ Specialized Documenter subclass for attributes that cannot be imported because they are instance attributes (e.g. assigned in __init__). """ objtype = 'instanceattribute' directivetype = 'attribute' member_order = 60 # must be higher than AttributeDocumenter priority = 11 @classmethod def can_document_member(cls, member, membername, isattr, parent): """This documents only INSTANCEATTR members.""" return isattr and (member is INSTANCEATTR) def import_object(self): """Never import anything.""" # disguise as an attribute self.objtype = 'attribute' self._datadescriptor = False return True def add_content(self, more_content, no_docstring=False): """Never try to get a docstring from the object.""" AttributeDocumenter.add_content(self, more_content, no_docstring=True) class AutoDirective(Directive): """ The AutoDirective class is used for all autodoc directives. It dispatches most of the work to one of the Documenters, which it selects through its *_registry* dictionary. The *_special_attrgetters* attribute is used to customize ``getattr()`` calls that the Documenters make; its entries are of the form ``type: getattr_function``. Note: When importing an object, all items along the import chain are accessed using the descendant's *_special_attrgetters*, thus this dictionary should include all necessary functions for accessing attributes of the parents. 
""" # a registry of objtype -> documenter class _registry = {} # a registry of type -> getattr function _special_attrgetters = {} # flags that can be given in autodoc_default_flags _default_flags = set([ 'members', 'undoc-members', 'inherited-members', 'show-inheritance', 'private-members', 'special-members', ]) # standard docutils directive settings has_content = True required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True # allow any options to be passed; the options are parsed further # by the selected Documenter option_spec = DefDict(identity) def warn(self, msg): self.warnings.append(self.reporter.warning(msg, line=self.lineno)) def run(self): self.filename_set = set() # a set of dependent filenames self.reporter = self.state.document.reporter self.env = self.state.document.settings.env self.warnings = [] self.result = ViewList() try: source, lineno = self.reporter.get_source_and_line(self.lineno) except AttributeError: source = lineno = None self.env.app.debug('[autodoc] %s:%s: input:\n%s', source, lineno, self.block_text) # find out what documenter to call objtype = self.name[4:] doc_class = self._registry[objtype] # add default flags for flag in self._default_flags: if flag not in doc_class.option_spec: continue negated = self.options.pop('no-' + flag, 'not given') is None if flag in self.env.config.autodoc_default_flags and \ not negated: self.options[flag] = None # process the options with the selected documenter's option_spec try: self.genopt = Options(assemble_option_dict( self.options.items(), doc_class.option_spec)) except (KeyError, ValueError, TypeError) as err: # an option is either unknown or has a wrong type msg = self.reporter.error('An option to %s is either unknown or ' 'has an invalid value: %s' % (self.name, err), line=self.lineno) return [msg] # generate the output documenter = doc_class(self, self.arguments[0]) documenter.generate(more_content=self.content) if not self.result: return self.warnings self.env.app.debug2('[autodoc] output:\n%s', '\n'.join(self.result)) # record all filenames as dependencies -- this will at least # partially make automatic invalidation possible for fn in self.filename_set: self.state.document.settings.record_dependencies.add(fn) # use a custom reporter that correctly assigns lines to source # filename/description and lineno old_reporter = self.state.memo.reporter self.state.memo.reporter = AutodocReporter(self.result, self.state.memo.reporter) if documenter.titles_allowed: node = nodes.section() # necessary so that the child nodes get the right source/line set node.document = self.state.document nested_parse_with_titles(self.state, self.result, node) else: node = nodes.paragraph() node.document = self.state.document self.state.nested_parse(self.result, 0, node) self.state.memo.reporter = old_reporter return self.warnings + node.children def add_documenter(cls): """Register a new Documenter.""" if not issubclass(cls, Documenter): raise ExtensionError('autodoc documenter %r must be a subclass ' 'of Documenter' % cls) # actually, it should be possible to override Documenters # if cls.objtype in AutoDirective._registry: # raise ExtensionError('autodoc documenter for %r is already ' # 'registered' % cls.objtype) AutoDirective._registry[cls.objtype] = cls def setup(app): app.add_autodocumenter(ModuleDocumenter) app.add_autodocumenter(ClassDocumenter) app.add_autodocumenter(ExceptionDocumenter) app.add_autodocumenter(DataDocumenter) app.add_autodocumenter(FunctionDocumenter) app.add_autodocumenter(MethodDocumenter) 
app.add_autodocumenter(AttributeDocumenter) app.add_autodocumenter(InstanceAttributeDocumenter) app.add_config_value('autoclass_content', 'class', True) app.add_config_value('autodoc_member_order', 'alphabetic', True) app.add_config_value('autodoc_default_flags', [], True) app.add_config_value('autodoc_docstring_signature', True, True) app.add_config_value('autodoc_mock_imports', [], True) app.add_event('autodoc-process-docstring') app.add_event('autodoc-process-signature') app.add_event('autodoc-skip-member') return {'version': sphinx.__display_version__, 'parallel_read_safe': True} class testcls: """test doc string""" def __getattr__(self, x): return x def __setattr__(self, x, y): """Attr setter.""" Sphinx-1.3.6/sphinx/ext/graphviz.py0000664000175000017500000002527612665045070020460 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.graphviz ~~~~~~~~~~~~~~~~~~~ Allow graphviz-formatted graphs to be included in Sphinx-generated documents inline. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re import codecs import posixpath from os import path from subprocess import Popen, PIPE from hashlib import sha1 from six import text_type from docutils import nodes from docutils.parsers.rst import directives from docutils.statemachine import ViewList import sphinx from sphinx.errors import SphinxError from sphinx.locale import _ from sphinx.util.osutil import ensuredir, ENOENT, EPIPE, EINVAL from sphinx.util.compat import Directive mapname_re = re.compile(r'

%s

\n''' % (fname, alt) self.body.append(svgtag) else: mapfile = open(outfn + '.map', 'rb') try: imgmap = mapfile.readlines() finally: mapfile.close() if len(imgmap) == 2: # nothing in image map (the lines are and ) self.body.append('%s\n' % (fname, alt, imgcss)) else: # has a map: get the name of the map and connect the parts mapname = mapname_re.match(imgmap[0].decode('utf-8')).group(1) self.body.append('%s\n' % (fname, alt, mapname, imgcss)) self.body.extend([item.decode('utf-8') for item in imgmap]) self.body.append('\n' % wrapper) raise nodes.SkipNode def html_visit_graphviz(self, node): render_dot_html(self, node, node['code'], node['options']) def render_dot_latex(self, node, code, options, prefix='graphviz'): try: fname, outfn = render_dot(self, code, options, 'pdf', prefix) except GraphvizError as exc: self.builder.warn('dot code %r: ' % code + str(exc)) raise nodes.SkipNode inline = node.get('inline', False) if inline: para_separator = '' else: para_separator = '\n' if fname is not None: self.body.append('%s\\includegraphics{%s}%s' % (para_separator, fname, para_separator)) raise nodes.SkipNode def latex_visit_graphviz(self, node): render_dot_latex(self, node, node['code'], node['options']) def render_dot_texinfo(self, node, code, options, prefix='graphviz'): try: fname, outfn = render_dot(self, code, options, 'png', prefix) except GraphvizError as exc: self.builder.warn('dot code %r: ' % code + str(exc)) raise nodes.SkipNode if fname is not None: self.body.append('@image{%s,,,[graphviz],png}\n' % fname[:-4]) raise nodes.SkipNode def texinfo_visit_graphviz(self, node): render_dot_texinfo(self, node, node['code'], node['options']) def text_visit_graphviz(self, node): if 'alt' in node.attributes: self.add_text(_('[graph: %s]') % node['alt']) else: self.add_text(_('[graph]')) raise nodes.SkipNode def man_visit_graphviz(self, node): if 'alt' in node.attributes: self.body.append(_('[graph: %s]') % node['alt']) else: self.body.append(_('[graph]')) raise nodes.SkipNode def setup(app): app.add_node(graphviz, html=(html_visit_graphviz, None), latex=(latex_visit_graphviz, None), texinfo=(texinfo_visit_graphviz, None), text=(text_visit_graphviz, None), man=(man_visit_graphviz, None)) app.add_directive('graphviz', Graphviz) app.add_directive('graph', GraphvizSimple) app.add_directive('digraph', GraphvizSimple) app.add_config_value('graphviz_dot', 'dot', 'html') app.add_config_value('graphviz_dot_args', [], 'html') app.add_config_value('graphviz_output_format', 'png', 'html') return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/coverage.py0000664000175000017500000002353512660074372020417 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.coverage ~~~~~~~~~~~~~~~~~~~ Check Python modules and C API for coverage. Mostly written by Josip Dzolonga for the Google Highly Open Participation contest. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. 
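    A minimal ``conf.py`` setup (a sketch -- the ignore patterns are
    illustrative, the config names are the ones registered in :func:`setup`
    below)::

        extensions = ['sphinx.ext.coverage']
        coverage_ignore_modules = [r'mypackage\.tests']
        coverage_ignore_functions = ['main']
        coverage_write_headline = True

    Building with ``sphinx-build -b coverage . _build/coverage`` then writes
    the reports ``python.txt`` and ``c.txt`` into the output directory.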
""" import re import glob import inspect from os import path from six import iteritems from six.moves import cPickle as pickle import sphinx from sphinx.builders import Builder from sphinx.util.inspect import safe_getattr # utility def write_header(f, text, char='-'): f.write(text + '\n') f.write(char * len(text) + '\n') def compile_regex_list(name, exps, warnfunc): lst = [] for exp in exps: try: lst.append(re.compile(exp)) except Exception: warnfunc('invalid regex %r in %s' % (exp, name)) return lst class CoverageBuilder(Builder): name = 'coverage' def init(self): self.c_sourcefiles = [] for pattern in self.config.coverage_c_path: pattern = path.join(self.srcdir, pattern) self.c_sourcefiles.extend(glob.glob(pattern)) self.c_regexes = [] for (name, exp) in self.config.coverage_c_regexes.items(): try: self.c_regexes.append((name, re.compile(exp))) except Exception: self.warn('invalid regex %r in coverage_c_regexes' % exp) self.c_ignorexps = {} for (name, exps) in iteritems(self.config.coverage_ignore_c_items): self.c_ignorexps[name] = compile_regex_list( 'coverage_ignore_c_items', exps, self.warn) self.mod_ignorexps = compile_regex_list( 'coverage_ignore_modules', self.config.coverage_ignore_modules, self.warn) self.cls_ignorexps = compile_regex_list( 'coverage_ignore_classes', self.config.coverage_ignore_classes, self.warn) self.fun_ignorexps = compile_regex_list( 'coverage_ignore_functions', self.config.coverage_ignore_functions, self.warn) def get_outdated_docs(self): return 'coverage overview' def write(self, *ignored): self.py_undoc = {} self.build_py_coverage() self.write_py_coverage() self.c_undoc = {} self.build_c_coverage() self.write_c_coverage() def build_c_coverage(self): # Fetch all the info from the header files c_objects = self.env.domaindata['c']['objects'] for filename in self.c_sourcefiles: undoc = set() f = open(filename, 'r') try: for line in f: for key, regex in self.c_regexes: match = regex.match(line) if match: name = match.groups()[0] if name not in c_objects: for exp in self.c_ignorexps.get(key, ()): if exp.match(name): break else: undoc.add((key, name)) continue finally: f.close() if undoc: self.c_undoc[filename] = undoc def write_c_coverage(self): output_file = path.join(self.outdir, 'c.txt') op = open(output_file, 'w') try: if self.config.coverage_write_headline: write_header(op, 'Undocumented C API elements', '=') op.write('\n') for filename, undoc in iteritems(self.c_undoc): write_header(op, filename) for typ, name in sorted(undoc): op.write(' * %-50s [%9s]\n' % (name, typ)) op.write('\n') finally: op.close() def build_py_coverage(self): objects = self.env.domaindata['py']['objects'] modules = self.env.domaindata['py']['modules'] skip_undoc = self.config.coverage_skip_undoc_in_source for mod_name in modules: ignore = False for exp in self.mod_ignorexps: if exp.match(mod_name): ignore = True break if ignore: continue try: mod = __import__(mod_name, fromlist=['foo']) except ImportError as err: self.warn('module %s could not be imported: %s' % (mod_name, err)) self.py_undoc[mod_name] = {'error': err} continue funcs = [] classes = {} for name, obj in inspect.getmembers(mod): # diverse module attributes are ignored: if name[0] == '_': # begins in an underscore continue if not hasattr(obj, '__module__'): # cannot be attributed to a module continue if obj.__module__ != mod_name: # is not defined in this module continue full_name = '%s.%s' % (mod_name, name) if inspect.isfunction(obj): if full_name not in objects: for exp in self.fun_ignorexps: if exp.match(name): 
break else: if skip_undoc and not obj.__doc__: continue funcs.append(name) elif inspect.isclass(obj): for exp in self.cls_ignorexps: if exp.match(name): break else: if full_name not in objects: if skip_undoc and not obj.__doc__: continue # not documented at all classes[name] = [] continue attrs = [] for attr_name in dir(obj): if attr_name not in obj.__dict__: continue try: attr = safe_getattr(obj, attr_name) except AttributeError: continue if not (inspect.ismethod(attr) or inspect.isfunction(attr)): continue if attr_name[0] == '_': # starts with an underscore, ignore it continue if skip_undoc and not attr.__doc__: # skip methods without docstring if wished continue full_attr_name = '%s.%s' % (full_name, attr_name) if full_attr_name not in objects: attrs.append(attr_name) if attrs: # some attributes are undocumented classes[name] = attrs self.py_undoc[mod_name] = {'funcs': funcs, 'classes': classes} def write_py_coverage(self): output_file = path.join(self.outdir, 'python.txt') op = open(output_file, 'w') failed = [] try: if self.config.coverage_write_headline: write_header(op, 'Undocumented Python objects', '=') keys = sorted(self.py_undoc.keys()) for name in keys: undoc = self.py_undoc[name] if 'error' in undoc: failed.append((name, undoc['error'])) else: if not undoc['classes'] and not undoc['funcs']: continue write_header(op, name) if undoc['funcs']: op.write('Functions:\n') op.writelines(' * %s\n' % x for x in undoc['funcs']) op.write('\n') if undoc['classes']: op.write('Classes:\n') for name, methods in sorted( iteritems(undoc['classes'])): if not methods: op.write(' * %s\n' % name) else: op.write(' * %s -- missing methods:\n\n' % name) op.writelines(' - %s\n' % x for x in methods) op.write('\n') if failed: write_header(op, 'Modules that failed to import') op.writelines(' * %s -- %s\n' % x for x in failed) finally: op.close() def finish(self): # dump the coverage data to a pickle file too picklepath = path.join(self.outdir, 'undoc.pickle') dumpfile = open(picklepath, 'wb') try: pickle.dump((self.py_undoc, self.c_undoc), dumpfile) finally: dumpfile.close() def setup(app): app.add_builder(CoverageBuilder) app.add_config_value('coverage_ignore_modules', [], False) app.add_config_value('coverage_ignore_functions', [], False) app.add_config_value('coverage_ignore_classes', [], False) app.add_config_value('coverage_c_path', [], False) app.add_config_value('coverage_c_regexes', {}, False) app.add_config_value('coverage_ignore_c_items', {}, False) app.add_config_value('coverage_write_headline', True, False) app.add_config_value('coverage_skip_undoc_in_source', False, False) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/viewcode.py0000664000175000017500000002030612665045070020420 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.viewcode ~~~~~~~~~~~~~~~~~~~ Add links to module code in Python object descriptions. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. 
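    Enabling the extension is a one-line change in ``conf.py`` (sketch)::

        extensions = ['sphinx.ext.viewcode']

    The ``viewcode_import`` value registered in :func:`setup` below controls
    whether objects documented via the Python domain are actually imported in
    order to resolve the module that defines them.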
""" import traceback from six import iteritems, text_type from docutils import nodes import sphinx from sphinx import addnodes from sphinx.locale import _ from sphinx.pycode import ModuleAnalyzer from sphinx.util import get_full_modname from sphinx.util.nodes import make_refnode from sphinx.util.console import blue def _get_full_modname(app, modname, attribute): try: return get_full_modname(modname, attribute) except AttributeError: # sphinx.ext.viewcode can't follow class instance attribute # then AttributeError logging output only verbose mode. app.verbose('Didn\'t find %s in %s' % (attribute, modname)) return None except Exception as e: # sphinx.ext.viewcode follow python domain directives. # because of that, if there are no real modules exists that specified # by py:function or other directives, viewcode emits a lot of warnings. # It should be displayed only verbose mode. app.verbose(traceback.format_exc().rstrip()) app.verbose('viewcode can\'t import %s, failed with error "%s"' % (modname, e)) return None def doctree_read(app, doctree): env = app.builder.env if not hasattr(env, '_viewcode_modules'): env._viewcode_modules = {} def has_tag(modname, fullname, docname, refname): entry = env._viewcode_modules.get(modname, None) try: analyzer = ModuleAnalyzer.for_module(modname) except Exception: env._viewcode_modules[modname] = False return if not isinstance(analyzer.code, text_type): code = analyzer.code.decode(analyzer.encoding) else: code = analyzer.code if entry is None or entry[0] != code: analyzer.find_tags() entry = code, analyzer.tags, {}, refname env._viewcode_modules[modname] = entry elif entry is False: return _, tags, used, _ = entry if fullname in tags: used[fullname] = docname return True for objnode in doctree.traverse(addnodes.desc): if objnode.get('domain') != 'py': continue names = set() for signode in objnode: if not isinstance(signode, addnodes.desc_signature): continue modname = signode.get('module') fullname = signode.get('fullname') refname = modname if env.config.viewcode_import: modname = _get_full_modname(app, modname, fullname) if not modname: continue fullname = signode.get('fullname') if not has_tag(modname, fullname, env.docname, refname): continue if fullname in names: # only one link per name, please continue names.add(fullname) pagename = '_modules/' + modname.replace('.', '/') onlynode = addnodes.only(expr='html') onlynode += addnodes.pending_xref( '', reftype='viewcode', refdomain='std', refexplicit=False, reftarget=pagename, refid=fullname, refdoc=env.docname) onlynode[0] += nodes.inline('', _('[source]'), classes=['viewcode-link']) signode += onlynode def env_merge_info(app, env, docnames, other): if not hasattr(other, '_viewcode_modules'): return # create a _viewcode_modules dict on the main environment if not hasattr(env, '_viewcode_modules'): env._viewcode_modules = {} # now merge in the information from the subprocess env._viewcode_modules.update(other._viewcode_modules) def missing_reference(app, env, node, contnode): # resolve our "viewcode" reference nodes -- they need special treatment if node['reftype'] == 'viewcode': return make_refnode(app.builder, node['refdoc'], node['reftarget'], node['refid'], contnode) def collect_pages(app): env = app.builder.env if not hasattr(env, '_viewcode_modules'): return highlighter = app.builder.highlighter urito = app.builder.get_relative_uri modnames = set(env._viewcode_modules) # app.builder.info(' (%d module code pages)' % # len(env._viewcode_modules), nonl=1) for modname, entry in app.status_iterator( 
iteritems(env._viewcode_modules), 'highlighting module code... ', blue, len(env._viewcode_modules), lambda x: x[0]): if not entry: continue code, tags, used, refname = entry # construct a page name for the highlighted source pagename = '_modules/' + modname.replace('.', '/') # highlight the source using the builder's highlighter highlighted = highlighter.highlight_block(code, 'python', linenos=False) # split the code into lines lines = highlighted.splitlines() # split off wrap markup from the first line of the actual code before, after = lines[0].split('
<pre>')
        lines[0:1] = [before + '<pre>', after]
        # nothing to do for the last line; it always starts with </pre>
anyway # now that we have code lines (starting at index 1), insert anchors for # the collected tags (HACK: this only works if the tag boundaries are # properly nested!) maxindex = len(lines) - 1 for name, docname in iteritems(used): type, start, end = tags[name] backlink = urito(pagename, docname) + '#' + refname + '.' + name lines[start] = ( '
<div class="viewcode-block" id="%s"><a class="viewcode-back" href="%s">%s</a>' % (name, backlink, _('[docs]')) + lines[start]) lines[min(end - 1, maxindex)] += '</div>
' # try to find parents (for submodules) parents = [] parent = modname while '.' in parent: parent = parent.rsplit('.', 1)[0] if parent in modnames: parents.append({ 'link': urito(pagename, '_modules/' + parent.replace('.', '/')), 'title': parent}) parents.append({'link': urito(pagename, '_modules/index'), 'title': _('Module code')}) parents.reverse() # putting it all together context = { 'parents': parents, 'title': modname, 'body': (_('

<h1>Source code for %s</h1>

') % modname + '\n'.join(lines)), } yield (pagename, context, 'page.html') if not modnames: return html = ['\n'] # the stack logic is needed for using nested lists for submodules stack = [''] for modname in sorted(modnames): if modname.startswith(stack[-1]): stack.append(modname + '.') html.append('
<ul>') else: stack.pop() while not modname.startswith(stack[-1]): stack.pop() html.append('</ul>') stack.append(modname + '.') html.append('<li><a href="%s">%s</a></li>\n' % ( urito('_modules/index', '_modules/' + modname.replace('.', '/')), modname)) html.append('</ul>' * (len(stack) - 1)) context = { 'title': _('Overview: module code'), 'body': (_('

<h1>All modules for which code is available</h1>

    ') + ''.join(html)), } yield ('_modules/index', context, 'page.html') def setup(app): app.add_config_value('viewcode_import', True, False) app.connect('doctree-read', doctree_read) app.connect('env-merge-info', env_merge_info) app.connect('html-collect-pages', collect_pages) app.connect('missing-reference', missing_reference) # app.add_config_value('viewcode_include_modules', [], 'env') # app.add_config_value('viewcode_exclude_modules', [], 'env') return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/inheritance_diagram.py0000664000175000017500000003457612665045070022606 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- r""" sphinx.ext.inheritance_diagram ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Defines a docutils directive for inserting inheritance diagrams. Provide the directive with one or more classes or modules (separated by whitespace). For modules, all of the classes in that module will be used. Example:: Given the following classes: class A: pass class B(A): pass class C(A): pass class D(B, C): pass class E(B): pass .. inheritance-diagram: D E Produces a graph like the following: A / \ B C / \ / E D The graph is inserted as a PNG+image map into HTML and a PDF in LaTeX. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re import sys import inspect try: from hashlib import md5 except ImportError: from md5 import md5 from six import text_type from six.moves import builtins from docutils import nodes from docutils.parsers.rst import directives import sphinx from sphinx.ext.graphviz import render_dot_html, render_dot_latex, \ render_dot_texinfo from sphinx.pycode import ModuleAnalyzer from sphinx.util import force_decode from sphinx.util.compat import Directive class_sig_re = re.compile(r'''^([\w.]*\.)? # module names (\w+) \s* $ # class/final module name ''', re.VERBOSE) class InheritanceException(Exception): pass class InheritanceGraph(object): """ Given a list of classes, determines the set of classes that they inherit from all the way to the root "object", and then is able to generate a graphviz dot graph from them. """ def __init__(self, class_names, currmodule, show_builtins=False, private_bases=False, parts=0): """*class_names* is a list of child classes to show bases from. If *show_builtins* is True, then Python builtins will be shown in the graph. 
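        A purely illustrative programmatic use (the module and class names are
        made up)::

            graph = InheritanceGraph(['mymod.D', 'mymod.E'], 'mymod', parts=1)
            dot = graph.generate_dot('diagram')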
""" self.class_names = class_names classes = self._import_classes(class_names, currmodule) self.class_info = self._class_info(classes, show_builtins, private_bases, parts) if not self.class_info: raise InheritanceException('No classes found for ' 'inheritance diagram') def _import_class_or_module(self, name, currmodule): """Import a class using its fully-qualified *name*.""" try: path, base = class_sig_re.match(name).groups() except (AttributeError, ValueError): raise InheritanceException('Invalid class or module %r specified ' 'for inheritance diagram' % name) fullname = (path or '') + base path = (path and path.rstrip('.') or '') # two possibilities: either it is a module, then import it try: __import__(fullname) todoc = sys.modules[fullname] except ImportError: # else it is a class, then import the module if not path: if currmodule: # try the current module path = currmodule else: raise InheritanceException( 'Could not import class %r specified for ' 'inheritance diagram' % base) try: __import__(path) todoc = getattr(sys.modules[path], base) except (ImportError, AttributeError): raise InheritanceException( 'Could not import class or module %r specified for ' 'inheritance diagram' % (path + '.' + base)) # If a class, just return it if inspect.isclass(todoc): return [todoc] elif inspect.ismodule(todoc): classes = [] for cls in todoc.__dict__.values(): if inspect.isclass(cls) and cls.__module__ == todoc.__name__: classes.append(cls) return classes raise InheritanceException('%r specified for inheritance diagram is ' 'not a class or module' % name) def _import_classes(self, class_names, currmodule): """Import a list of classes.""" classes = [] for name in class_names: classes.extend(self._import_class_or_module(name, currmodule)) return classes def _class_info(self, classes, show_builtins, private_bases, parts): """Return name and bases for all classes that are ancestors of *classes*. *parts* gives the number of dotted name parts that is removed from the displayed node names. """ all_classes = {} py_builtins = vars(builtins).values() def recurse(cls): if not show_builtins and cls in py_builtins: return if not private_bases and cls.__name__.startswith('_'): return nodename = self.class_name(cls, parts) fullname = self.class_name(cls, 0) # Use first line of docstring as tooltip, if available tooltip = None try: if cls.__doc__: enc = ModuleAnalyzer.for_module(cls.__module__).encoding doc = cls.__doc__.strip().split("\n")[0] if not isinstance(doc, text_type): doc = force_decode(doc, enc) if doc: tooltip = '"%s"' % doc.replace('"', '\\"') except Exception: # might raise AttributeError for strange classes pass baselist = [] all_classes[cls] = (nodename, fullname, baselist, tooltip) for base in cls.__bases__: if not show_builtins and base in py_builtins: continue if not private_bases and base.__name__.startswith('_'): continue baselist.append(self.class_name(base, parts)) if base not in all_classes: recurse(base) for cls in classes: recurse(cls) return list(all_classes.values()) def class_name(self, cls, parts=0): """Given a class object, return a fully-qualified name. This works for things I've tested in matplotlib so far, but may not be completely general. 
""" module = cls.__module__ if module in ('__builtin__', 'builtins'): fullname = cls.__name__ else: fullname = '%s.%s' % (module, cls.__name__) if parts == 0: return fullname name_parts = fullname.split('.') return '.'.join(name_parts[-parts:]) def get_all_class_names(self): """Get all of the class names involved in the graph.""" return [fullname for (_, fullname, _, _) in self.class_info] # These are the default attrs for graphviz default_graph_attrs = { 'rankdir': 'LR', 'size': '"8.0, 12.0"', } default_node_attrs = { 'shape': 'box', 'fontsize': 10, 'height': 0.25, 'fontname': '"Vera Sans, DejaVu Sans, Liberation Sans, ' 'Arial, Helvetica, sans"', 'style': '"setlinewidth(0.5)"', } default_edge_attrs = { 'arrowsize': 0.5, 'style': '"setlinewidth(0.5)"', } def _format_node_attrs(self, attrs): return ','.join(['%s=%s' % x for x in sorted(attrs.items())]) def _format_graph_attrs(self, attrs): return ''.join(['%s=%s;\n' % x for x in sorted(attrs.items())]) def generate_dot(self, name, urls={}, env=None, graph_attrs={}, node_attrs={}, edge_attrs={}): """Generate a graphviz dot graph from the classes that were passed in to __init__. *name* is the name of the graph. *urls* is a dictionary mapping class names to HTTP URLs. *graph_attrs*, *node_attrs*, *edge_attrs* are dictionaries containing key/value pairs to pass on as graphviz properties. """ g_attrs = self.default_graph_attrs.copy() n_attrs = self.default_node_attrs.copy() e_attrs = self.default_edge_attrs.copy() g_attrs.update(graph_attrs) n_attrs.update(node_attrs) e_attrs.update(edge_attrs) if env: g_attrs.update(env.config.inheritance_graph_attrs) n_attrs.update(env.config.inheritance_node_attrs) e_attrs.update(env.config.inheritance_edge_attrs) res = [] res.append('digraph %s {\n' % name) res.append(self._format_graph_attrs(g_attrs)) for name, fullname, bases, tooltip in sorted(self.class_info): # Write the node this_node_attrs = n_attrs.copy() if fullname in urls: this_node_attrs['URL'] = '"%s"' % urls[fullname] this_node_attrs['target'] = '"_top"' if tooltip: this_node_attrs['tooltip'] = tooltip res.append(' "%s" [%s];\n' % (name, self._format_node_attrs(this_node_attrs))) # Write the edges for base_name in bases: res.append(' "%s" -> "%s" [%s];\n' % (base_name, name, self._format_node_attrs(e_attrs))) res.append('}\n') return ''.join(res) class inheritance_diagram(nodes.General, nodes.Element): """ A docutils node to use as a placeholder for the inheritance diagram. """ pass class InheritanceDiagram(Directive): """ Run when the inheritance_diagram directive is first encountered. 
""" has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = { 'parts': directives.nonnegative_int, 'private-bases': directives.flag, } def run(self): node = inheritance_diagram() node.document = self.state.document env = self.state.document.settings.env class_names = self.arguments[0].split() class_role = env.get_domain('py').role('class') # Store the original content for use as a hash node['parts'] = self.options.get('parts', 0) node['content'] = ', '.join(class_names) # Create a graph starting with the list of classes try: graph = InheritanceGraph( class_names, env.ref_context.get('py:module'), parts=node['parts'], private_bases='private-bases' in self.options) except InheritanceException as err: return [node.document.reporter.warning(err.args[0], line=self.lineno)] # Create xref nodes for each target of the graph's image map and # add them to the doc tree so that Sphinx can resolve the # references to real URLs later. These nodes will eventually be # removed from the doctree after we're done with them. for name in graph.get_all_class_names(): refnodes, x = class_role( 'class', ':class:`%s`' % name, name, 0, self.state) node.extend(refnodes) # Store the graph object so we can use it to generate the # dot file later node['graph'] = graph return [node] def get_graph_hash(node): encoded = (node['content'] + str(node['parts'])).encode('utf-8') return md5(encoded).hexdigest()[-10:] def html_visit_inheritance_diagram(self, node): """ Output the graph for HTML. This will insert a PNG with clickable image map. """ graph = node['graph'] graph_hash = get_graph_hash(node) name = 'inheritance%s' % graph_hash # Create a mapping from fully-qualified class names to URLs. graphviz_output_format = self.builder.env.config.graphviz_output_format.upper() current_filename = self.builder.current_docname + self.builder.out_suffix urls = {} for child in node: if child.get('refuri') is not None: if graphviz_output_format == 'SVG': urls[child['reftitle']] = "../" + child.get('refuri') else: urls[child['reftitle']] = child.get('refuri') elif child.get('refid') is not None: if graphviz_output_format == 'SVG': urls[child['reftitle']] = '../' + current_filename + '#' + child.get('refid') else: urls[child['reftitle']] = '#' + child.get('refid') dotcode = graph.generate_dot(name, urls, env=self.builder.env) render_dot_html(self, node, dotcode, [], 'inheritance', 'inheritance', alt='Inheritance diagram of ' + node['content']) raise nodes.SkipNode def latex_visit_inheritance_diagram(self, node): """ Output the graph for LaTeX. This will insert a PDF. """ graph = node['graph'] graph_hash = get_graph_hash(node) name = 'inheritance%s' % graph_hash dotcode = graph.generate_dot(name, env=self.builder.env, graph_attrs={'size': '"6.0,6.0"'}) render_dot_latex(self, node, dotcode, [], 'inheritance') raise nodes.SkipNode def texinfo_visit_inheritance_diagram(self, node): """ Output the graph for Texinfo. This will insert a PNG. 
""" graph = node['graph'] graph_hash = get_graph_hash(node) name = 'inheritance%s' % graph_hash dotcode = graph.generate_dot(name, env=self.builder.env, graph_attrs={'size': '"6.0,6.0"'}) render_dot_texinfo(self, node, dotcode, [], 'inheritance') raise nodes.SkipNode def skip(self, node): raise nodes.SkipNode def setup(app): app.setup_extension('sphinx.ext.graphviz') app.add_node( inheritance_diagram, latex=(latex_visit_inheritance_diagram, None), html=(html_visit_inheritance_diagram, None), text=(skip, None), man=(skip, None), texinfo=(texinfo_visit_inheritance_diagram, None)) app.add_directive('inheritance-diagram', InheritanceDiagram) app.add_config_value('inheritance_graph_attrs', {}, False), app.add_config_value('inheritance_node_attrs', {}, False), app.add_config_value('inheritance_edge_attrs', {}, False), return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/ifconfig.py0000664000175000017500000000460712660074372020407 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.ifconfig ~~~~~~~~~~~~~~~~~~~ Provides the ``ifconfig`` directive that allows to write documentation that is included depending on configuration variables. Usage:: .. ifconfig:: releaselevel in ('alpha', 'beta', 'rc') This stuff is only included in the built docs for unstable versions. The argument for ``ifconfig`` is a plain Python expression, evaluated in the namespace of the project configuration (that is, all variables from ``conf.py`` are available.) :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from docutils import nodes import sphinx from sphinx.util.nodes import set_source_info from sphinx.util.compat import Directive class ifconfig(nodes.Element): pass class IfConfig(Directive): has_content = True required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): node = ifconfig() node.document = self.state.document set_source_info(self, node) node['expr'] = self.arguments[0] self.state.nested_parse(self.content, self.content_offset, node, match_titles=1) return [node] def process_ifconfig_nodes(app, doctree, docname): ns = dict((k, app.config[k]) for k in app.config.values) ns.update(app.config.__dict__.copy()) ns['builder'] = app.builder.name for node in doctree.traverse(ifconfig): try: res = eval(node['expr'], ns) except Exception as err: # handle exceptions in a clean fashion from traceback import format_exception_only msg = ''.join(format_exception_only(err.__class__, err)) newnode = doctree.reporter.error('Exception occured in ' 'ifconfig expression: \n%s' % msg, base_node=node) node.replace_self(newnode) else: if not res: node.replace_self([]) else: node.replace_self(node.children) def setup(app): app.add_node(ifconfig) app.add_directive('ifconfig', IfConfig) app.connect('doctree-resolved', process_ifconfig_nodes) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/extlinks.py0000664000175000017500000000422212660074372020455 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.extlinks ~~~~~~~~~~~~~~~~~~~ Extension to save typing and prevent hard-coding of base URLs in the reST files. This adds a new config value called ``extlinks`` that is created like this:: extlinks = {'exmpl': ('http://example.com/%s.html', prefix), ...} Now you can use e.g. :exmpl:`foo` in your documents. This will create a link to ``http://example.com/foo.html``. 
The link caption depends on the *prefix* value given: - If it is ``None``, the caption will be the full URL. - If it is a string (empty or not), the caption will be the prefix prepended to the role content. You can also give an explicit caption, e.g. :exmpl:`Foo `. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from six import iteritems from docutils import nodes, utils import sphinx from sphinx.util.nodes import split_explicit_title def make_link_role(base_url, prefix): def role(typ, rawtext, text, lineno, inliner, options={}, content=[]): text = utils.unescape(text) has_explicit_title, title, part = split_explicit_title(text) try: full_url = base_url % part except (TypeError, ValueError): inliner.reporter.warning( 'unable to expand %s extlink with base URL %r, please make ' 'sure the base contains \'%%s\' exactly once' % (typ, base_url), line=lineno) full_url = base_url + part if not has_explicit_title: if prefix is None: title = full_url else: title = prefix + part pnode = nodes.reference(title, title, internal=False, refuri=full_url) return [pnode], [] return role def setup_link_roles(app): for name, (base_url, prefix) in iteritems(app.config.extlinks): app.add_role(name, make_link_role(base_url, prefix)) def setup(app): app.add_config_value('extlinks', {}, 'env') app.connect('builder-inited', setup_link_roles) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/todo.py0000664000175000017500000001442712665045070017567 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.todo ~~~~~~~~~~~~~~~ Allow todos to be inserted into your documentation. Inclusion of todos can be switched of by a configuration variable. The todolist directive collects all todos of your project and lists them along with a backlink to the original location. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from docutils import nodes from docutils.parsers.rst import directives import sphinx from sphinx.locale import _ from sphinx.environment import NoUri from sphinx.util.nodes import set_source_info from sphinx.util.compat import Directive, make_admonition class todo_node(nodes.Admonition, nodes.Element): pass class todolist(nodes.General, nodes.Element): pass class Todo(Directive): """ A todo entry, displayed (if configured) in the form of an admonition. """ has_content = True required_arguments = 0 optional_arguments = 0 final_argument_whitespace = False option_spec = { 'class': directives.class_option, } def run(self): env = self.state.document.settings.env targetid = 'index-%s' % env.new_serialno('index') targetnode = nodes.target('', '', ids=[targetid]) if not self.options.get('class'): self.options['class'] = ['admonition-todo'] ad = make_admonition(todo_node, self.name, [_('Todo')], self.options, self.content, self.lineno, self.content_offset, self.block_text, self.state, self.state_machine) set_source_info(self, ad[0]) return [targetnode] + ad def process_todos(app, doctree): # collect all todos in the environment # this is not done in the directive itself because it some transformations # must have already been run, e.g. 
substitutions env = app.builder.env if not hasattr(env, 'todo_all_todos'): env.todo_all_todos = [] for node in doctree.traverse(todo_node): try: targetnode = node.parent[node.parent.index(node) - 1] if not isinstance(targetnode, nodes.target): raise IndexError except IndexError: targetnode = None newnode = node.deepcopy() del newnode['ids'] env.todo_all_todos.append({ 'docname': env.docname, 'source': node.source or env.doc2path(env.docname), 'lineno': node.line, 'todo': newnode, 'target': targetnode, }) class TodoList(Directive): """ A list of all todo entries. """ has_content = False required_arguments = 0 optional_arguments = 0 final_argument_whitespace = False option_spec = {} def run(self): # Simply insert an empty todolist node which will be replaced later # when process_todo_nodes is called return [todolist('')] def process_todo_nodes(app, doctree, fromdocname): if not app.config['todo_include_todos']: for node in doctree.traverse(todo_node): node.parent.remove(node) # Replace all todolist nodes with a list of the collected todos. # Augment each todo with a backlink to the original location. env = app.builder.env if not hasattr(env, 'todo_all_todos'): env.todo_all_todos = [] for node in doctree.traverse(todolist): if not app.config['todo_include_todos']: node.replace_self([]) continue content = [] for todo_info in env.todo_all_todos: para = nodes.paragraph(classes=['todo-source']) description = _('(The <> is located in ' ' %s, line %d.)') % \ (todo_info['source'], todo_info['lineno']) desc1 = description[:description.find('<<')] desc2 = description[description.find('>>')+2:] para += nodes.Text(desc1, desc1) # Create a reference newnode = nodes.reference('', '', internal=True) innernode = nodes.emphasis(_('original entry'), _('original entry')) try: newnode['refuri'] = app.builder.get_relative_uri( fromdocname, todo_info['docname']) newnode['refuri'] += '#' + todo_info['target']['refid'] except NoUri: # ignore if no URI can be determined, e.g. 
for LaTeX output pass newnode.append(innernode) para += newnode para += nodes.Text(desc2, desc2) # (Recursively) resolve references in the todo content todo_entry = todo_info['todo'] env.resolve_references(todo_entry, todo_info['docname'], app.builder) # Insert into the todolist content.append(todo_entry) content.append(para) node.replace_self(content) def purge_todos(app, env, docname): if not hasattr(env, 'todo_all_todos'): return env.todo_all_todos = [todo for todo in env.todo_all_todos if todo['docname'] != docname] def merge_info(app, env, docnames, other): if not hasattr(other, 'todo_all_todos'): return if not hasattr(env, 'todo_all_todos'): env.todo_all_todos = [] env.todo_all_todos.extend(other.todo_all_todos) def visit_todo_node(self, node): self.visit_admonition(node) # self.visit_admonition(node, 'todo') def depart_todo_node(self, node): self.depart_admonition(node) def setup(app): app.add_config_value('todo_include_todos', False, 'html') app.add_node(todolist) app.add_node(todo_node, html=(visit_todo_node, depart_todo_node), latex=(visit_todo_node, depart_todo_node), text=(visit_todo_node, depart_todo_node), man=(visit_todo_node, depart_todo_node), texinfo=(visit_todo_node, depart_todo_node)) app.add_directive('todo', Todo) app.add_directive('todolist', TodoList) app.connect('doctree-read', process_todos) app.connect('doctree-resolved', process_todo_nodes) app.connect('env-purge-doc', purge_todos) app.connect('env-merge-info', merge_info) return {'version': sphinx.__display_version__, 'parallel_read_safe': True} Sphinx-1.3.6/sphinx/ext/mathbase.py0000664000175000017500000001535012665045077020411 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.ext.mathbase ~~~~~~~~~~~~~~~~~~~ Set up math support in source files and LaTeX/text output. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. 
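    The roles and directives defined here are used in reST sources roughly like
    this (a sketch; the ``euler`` label is an arbitrary example)::

        Inline math such as :math:`e^{i\pi} + 1 = 0`, a display equation

        .. math::
           :label: euler

           e^{i\pi} + 1 = 0

        and a reference to it with :eq:`euler`.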
""" from docutils import nodes, utils from docutils.parsers.rst import directives from sphinx.util.nodes import set_source_info from sphinx.util.compat import Directive class math(nodes.Inline, nodes.TextElement): pass class displaymath(nodes.Part, nodes.Element): pass class eqref(nodes.Inline, nodes.TextElement): pass def wrap_displaymath(math, label): parts = math.split('\n\n') ret = [] for i, part in enumerate(parts): if not part.strip(): continue if label is not None and i == 0: ret.append('\\begin{split}%s\\end{split}' % part + (label and '\\label{'+label+'}' or '')) else: ret.append('\\begin{split}%s\\end{split}\\notag' % part) if not ret: return '' return '\\begin{gather}\n' + '\\\\'.join(ret) + '\n\\end{gather}' def math_role(role, rawtext, text, lineno, inliner, options={}, content=[]): latex = utils.unescape(text, restore_backslashes=True) return [math(latex=latex)], [] def eq_role(role, rawtext, text, lineno, inliner, options={}, content=[]): text = utils.unescape(text) node = eqref('(?)', '(?)', target=text) node['docname'] = inliner.document.settings.env.docname return [node], [] def is_in_section_title(node): """Determine whether the node is in a section title""" from sphinx.util.nodes import traverse_parent for ancestor in traverse_parent(node): if isinstance(ancestor, nodes.title) and \ isinstance(ancestor.parent, nodes.section): return True return False class MathDirective(Directive): has_content = True required_arguments = 0 optional_arguments = 1 final_argument_whitespace = True option_spec = { 'label': directives.unchanged, 'name': directives.unchanged, 'nowrap': directives.flag, } def run(self): latex = '\n'.join(self.content) if self.arguments and self.arguments[0]: latex = self.arguments[0] + '\n\n' + latex node = displaymath() node['latex'] = latex node['label'] = self.options.get('name', None) if node['label'] is None: node['label'] = self.options.get('label', None) node['nowrap'] = 'nowrap' in self.options node['docname'] = self.state.document.settings.env.docname ret = [node] set_source_info(self, node) if hasattr(self, 'src'): node.source = self.src if node['label']: tnode = nodes.target('', '', ids=['equation-' + node['label']]) self.state.document.note_explicit_target(tnode) ret.insert(0, tnode) return ret def latex_visit_math(self, node): if is_in_section_title(node): protect = r'\protect' else: protect = '' equation = protect + r'\(' + node['latex'] + protect + r'\)' self.body.append(equation) raise nodes.SkipNode def latex_visit_displaymath(self, node): if node['nowrap']: self.body.append(node['latex']) else: label = node['label'] and node['docname'] + '-' + node['label'] or None self.body.append(wrap_displaymath(node['latex'], label)) raise nodes.SkipNode def latex_visit_eqref(self, node): self.body.append('\\eqref{%s-%s}' % (node['docname'], node['target'])) raise nodes.SkipNode def text_visit_math(self, node): self.add_text(node['latex']) raise nodes.SkipNode def text_visit_displaymath(self, node): self.new_state() self.add_text(node['latex']) self.end_state() raise nodes.SkipNode def text_visit_eqref(self, node): self.add_text(node['target']) raise nodes.SkipNode def man_visit_math(self, node): self.body.append(node['latex']) raise nodes.SkipNode def man_visit_displaymath(self, node): self.visit_centered(node) def man_depart_displaymath(self, node): self.depart_centered(node) def man_visit_eqref(self, node): self.body.append(node['target']) raise nodes.SkipNode def texinfo_visit_math(self, node): self.body.append('@math{' + 
self.escape_arg(node['latex']) + '}') raise nodes.SkipNode def texinfo_visit_displaymath(self, node): if node.get('label'): self.add_anchor(node['label'], node) self.body.append('\n\n@example\n%s\n@end example\n\n' % self.escape_arg(node['latex'])) def texinfo_depart_displaymath(self, node): pass def texinfo_visit_eqref(self, node): self.add_xref(node['docname'] + ':' + node['target'], node['target'], node) raise nodes.SkipNode def html_visit_eqref(self, node): self.body.append('' % node['target']) def html_depart_eqref(self, node): self.body.append('') def number_equations(app, doctree, docname): num = 0 numbers = {} for node in doctree.traverse(displaymath): if node['label'] is not None: num += 1 node['number'] = num numbers[node['label']] = num else: node['number'] = None for node in doctree.traverse(eqref): if node['target'] not in numbers: continue num = '(%d)' % numbers[node['target']] node[0] = nodes.Text(num, num) def setup_amsfont(app): # use amsfonts if users do not configure latex_elements['amsfonts'] app.config.latex_elements.setdefault('amsfonts', r'\usepackage{amsfonts}') def setup_math(app, htmlinlinevisitors, htmldisplayvisitors): app.add_node(math, latex=(latex_visit_math, None), text=(text_visit_math, None), man=(man_visit_math, None), texinfo=(texinfo_visit_math, None), html=htmlinlinevisitors) app.add_node(displaymath, latex=(latex_visit_displaymath, None), text=(text_visit_displaymath, None), man=(man_visit_displaymath, man_depart_displaymath), texinfo=(texinfo_visit_displaymath, texinfo_depart_displaymath), html=htmldisplayvisitors) app.add_node(eqref, latex=(latex_visit_eqref, None), text=(text_visit_eqref, None), man=(man_visit_eqref, None), texinfo=(texinfo_visit_eqref, None), html=(html_visit_eqref, html_depart_eqref)) app.add_role('math', math_role) app.add_role('eq', eq_role) app.add_directive('math', MathDirective) app.connect('doctree-resolved', number_equations) app.connect('builder-inited', setup_amsfont) Sphinx-1.3.6/sphinx/pygments_styles.py0000664000175000017500000000572012660074372021271 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.pygments_styles ~~~~~~~~~~~~~~~~~~~~~~ Sphinx theme specific highlighting styles. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from pygments.style import Style from pygments.styles.friendly import FriendlyStyle from pygments.token import Generic, Comment, Number, Whitespace, Keyword, \ Operator, Name, String, Error class NoneStyle(Style): """Style without any styling.""" class SphinxStyle(Style): """ Like friendly, but a bit darker to enhance contrast on the green background. """ background_color = '#eeffcc' default_style = '' styles = FriendlyStyle.styles styles.update({ Generic.Output: '#333', Comment: 'italic #408090', Number: '#208050', }) class PyramidStyle(Style): """ Pylons/pyramid pygments style based on friendly style, by Blaise Laflamme. """ # work in progress... 
background_color = "#f8f8f8" default_style = "" styles = { Whitespace: "#bbbbbb", Comment: "italic #60a0b0", Comment.Preproc: "noitalic #007020", Comment.Special: "noitalic bg:#fff0f0", Keyword: "bold #007020", Keyword.Pseudo: "nobold", Keyword.Type: "nobold #902000", Operator: "#666666", Operator.Word: "bold #007020", Name.Builtin: "#007020", Name.Function: "#06287e", Name.Class: "bold #0e84b5", Name.Namespace: "bold #0e84b5", Name.Exception: "#007020", Name.Variable: "#bb60d5", Name.Constant: "#60add5", Name.Label: "bold #002070", Name.Entity: "bold #d55537", Name.Attribute: "#0e84b5", Name.Tag: "bold #062873", Name.Decorator: "bold #555555", String: "#4070a0", String.Doc: "italic", String.Interpol: "italic #70a0d0", String.Escape: "bold #4070a0", String.Regex: "#235388", String.Symbol: "#517918", String.Other: "#c65d09", Number: "#40a070", Generic.Heading: "bold #000080", Generic.Subheading: "bold #800080", Generic.Deleted: "#A00000", Generic.Inserted: "#00A000", Generic.Error: "#FF0000", Generic.Emph: "italic", Generic.Strong: "bold", Generic.Prompt: "bold #c65d09", Generic.Output: "#888", Generic.Traceback: "#04D", Error: "#a40000 bg:#fbe3e4" } Sphinx-1.3.6/sphinx/cmdline.py0000664000175000017500000002700612665045077017441 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.cmdline ~~~~~~~~~~~~~~ sphinx-build command-line handling. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from __future__ import print_function import sys import optparse import traceback from os import path from six import text_type, binary_type from docutils.utils import SystemMessage from sphinx import __display_version__ from sphinx.errors import SphinxError from sphinx.application import Sphinx from sphinx.util import Tee, format_exception_cut_frames, save_traceback from sphinx.util.console import red, nocolor, color_terminal from sphinx.util.osutil import abspath, fs_encoding from sphinx.util.pycompat import terminal_safe USAGE = """\ Sphinx v%s Usage: %%prog [options] sourcedir outdir [filenames...] Filename arguments: without -a and without filenames, write new and changed files. with -a, write all files. with filenames, write these. """ % __display_version__ EPILOG = """\ For more information, visit . 
""" class MyFormatter(optparse.IndentedHelpFormatter): def format_usage(self, usage): return usage def format_help(self, formatter): result = [] if self.description: result.append(self.format_description(formatter)) if self.option_list: result.append(self.format_option_help(formatter)) return "\n".join(result) def main(argv): if not color_terminal(): nocolor() parser = optparse.OptionParser(USAGE, epilog=EPILOG, formatter=MyFormatter()) parser.add_option('--version', action='store_true', dest='version', help='show version information and exit') group = parser.add_option_group('General options') group.add_option('-b', metavar='BUILDER', dest='builder', default='html', help='builder to use; default is html') group.add_option('-a', action='store_true', dest='force_all', help='write all files; default is to only write new and ' 'changed files') group.add_option('-E', action='store_true', dest='freshenv', help='don\'t use a saved environment, always read ' 'all files') group.add_option('-d', metavar='PATH', default=None, dest='doctreedir', help='path for the cached environment and doctree files ' '(default: outdir/.doctrees)') group.add_option('-j', metavar='N', default=1, type='int', dest='jobs', help='build in parallel with N processes where possible') # this option never gets through to this point (it is intercepted earlier) # group.add_option('-M', metavar='BUILDER', dest='make_mode', # help='"make" mode -- as used by Makefile, like ' # '"sphinx-build -M html"') group = parser.add_option_group('Build configuration options') group.add_option('-c', metavar='PATH', dest='confdir', help='path where configuration file (conf.py) is located ' '(default: same as sourcedir)') group.add_option('-C', action='store_true', dest='noconfig', help='use no config file at all, only -D options') group.add_option('-D', metavar='setting=value', action='append', dest='define', default=[], help='override a setting in configuration file') group.add_option('-A', metavar='name=value', action='append', dest='htmldefine', default=[], help='pass a value into HTML templates') group.add_option('-t', metavar='TAG', action='append', dest='tags', default=[], help='define tag: include "only" blocks with TAG') group.add_option('-n', action='store_true', dest='nitpicky', help='nit-picky mode, warn about all missing references') group = parser.add_option_group('Console output options') group.add_option('-v', action='count', dest='verbosity', default=0, help='increase verbosity (can be repeated)') group.add_option('-q', action='store_true', dest='quiet', help='no output on stdout, just warnings on stderr') group.add_option('-Q', action='store_true', dest='really_quiet', help='no output at all, not even warnings') group.add_option('-N', action='store_true', dest='nocolor', help='do not emit colored output') group.add_option('-w', metavar='FILE', dest='warnfile', help='write warnings (and errors) to given file') group.add_option('-W', action='store_true', dest='warningiserror', help='turn warnings into errors') group.add_option('-T', action='store_true', dest='traceback', help='show full traceback on exception') group.add_option('-P', action='store_true', dest='pdb', help='run Pdb on exception') # parse options try: opts, args = parser.parse_args(argv[1:]) except SystemExit as err: return err.code # handle basic options if opts.version: print('Sphinx (sphinx-build) %s' % __display_version__) return 0 # get paths (first and second positional argument) try: srcdir = abspath(args[0]) confdir = abspath(opts.confdir or srcdir) if 
opts.noconfig: confdir = None if not path.isdir(srcdir): print('Error: Cannot find source directory `%s\'.' % srcdir, file=sys.stderr) return 1 if not opts.noconfig and not path.isfile(path.join(confdir, 'conf.py')): print('Error: Config directory doesn\'t contain a conf.py file.', file=sys.stderr) return 1 outdir = abspath(args[1]) if srcdir == outdir: print('Error: source directory and destination directory are same.', file=sys.stderr) return 1 except IndexError: parser.print_help() return 1 except UnicodeError: print( 'Error: Multibyte filename not supported on this filesystem ' 'encoding (%r).' % fs_encoding, file=sys.stderr) return 1 # handle remaining filename arguments filenames = args[2:] err = 0 for filename in filenames: if not path.isfile(filename): print('Error: Cannot find file %r.' % filename, file=sys.stderr) err = 1 if err: return 1 # likely encoding used for command-line arguments try: locale = __import__('locale') # due to submodule of the same name likely_encoding = locale.getpreferredencoding() except Exception: likely_encoding = None if opts.force_all and filenames: print('Error: Cannot combine -a option and filenames.', file=sys.stderr) return 1 if opts.nocolor: nocolor() doctreedir = abspath(opts.doctreedir or path.join(outdir, '.doctrees')) status = sys.stdout warning = sys.stderr error = sys.stderr if opts.quiet: status = None if opts.really_quiet: status = warning = None if warning and opts.warnfile: try: warnfp = open(opts.warnfile, 'w') except Exception as exc: print('Error: Cannot open warning file %r: %s' % (opts.warnfile, exc), file=sys.stderr) sys.exit(1) warning = Tee(warning, warnfp) error = warning confoverrides = {} for val in opts.define: try: key, val = val.split('=') except ValueError: print('Error: -D option argument must be in the form name=value.', file=sys.stderr) return 1 if likely_encoding and isinstance(val, binary_type): try: val = val.decode(likely_encoding) except UnicodeError: pass confoverrides[key] = val for val in opts.htmldefine: try: key, val = val.split('=') except ValueError: print('Error: -A option argument must be in the form name=value.', file=sys.stderr) return 1 try: val = int(val) except ValueError: if likely_encoding and isinstance(val, binary_type): try: val = val.decode(likely_encoding) except UnicodeError: pass confoverrides['html_context.%s' % key] = val if opts.nitpicky: confoverrides['nitpicky'] = True app = None try: app = Sphinx(srcdir, confdir, outdir, doctreedir, opts.builder, confoverrides, status, warning, opts.freshenv, opts.warningiserror, opts.tags, opts.verbosity, opts.jobs) app.build(opts.force_all, filenames) return app.statuscode except (Exception, KeyboardInterrupt) as err: if opts.pdb: import pdb print(red('Exception occurred while building, starting debugger:'), file=error) traceback.print_exc() pdb.post_mortem(sys.exc_info()[2]) else: print(file=error) if opts.verbosity or opts.traceback: traceback.print_exc(None, error) print(file=error) if isinstance(err, KeyboardInterrupt): print('interrupted!', file=error) elif isinstance(err, SystemMessage): print(red('reST markup error:'), file=error) print(terminal_safe(err.args[0]), file=error) elif isinstance(err, SphinxError): print(red('%s:' % err.category), file=error) print(terminal_safe(text_type(err)), file=error) elif isinstance(err, UnicodeError): print(red('Encoding error:'), file=error) print(terminal_safe(text_type(err)), file=error) tbpath = save_traceback(app) print(red('The full traceback has been saved in %s, if you want ' 'to report the issue to 
the developers.' % tbpath), file=error) elif isinstance(err, RuntimeError) and 'recursion depth' in str(err): print(red('Recursion error:'), file=error) print(terminal_safe(text_type(err)), file=error) print(file=error) print('This can happen with very large or deeply nested source ' 'files. You can carefully increase the default Python ' 'recursion limit of 1000 in conf.py with e.g.:', file=error) print(' import sys; sys.setrecursionlimit(1500)', file=error) else: print(red('Exception occurred:'), file=error) print(format_exception_cut_frames().rstrip(), file=error) tbpath = save_traceback(app) print(red('The full traceback has been saved in %s, if you ' 'want to report the issue to the developers.' % tbpath), file=error) print('Please also report this if it was a user error, so ' 'that a better error message can be provided next time.', file=error) print('A bug report can be filed in the tracker at ' '. Thanks!', file=error) return 1 Sphinx-1.3.6/sphinx/versioning.py0000664000175000017500000001103612660074372020200 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.versioning ~~~~~~~~~~~~~~~~~ Implements the low-level algorithms Sphinx uses for the versioning of doctrees. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from uuid import uuid4 from operator import itemgetter from itertools import product from six import iteritems from six.moves import range, zip_longest try: import Levenshtein IS_SPEEDUP = True except ImportError: IS_SPEEDUP = False # anything below that ratio is considered equal/changed VERSIONING_RATIO = 65 def add_uids(doctree, condition): """Add a unique id to every node in the `doctree` which matches the condition and yield the nodes. :param doctree: A :class:`docutils.nodes.document` instance. :param condition: A callable which returns either ``True`` or ``False`` for a given node. """ for node in doctree.traverse(condition): node.uid = uuid4().hex yield node def merge_doctrees(old, new, condition): """Merge the `old` doctree with the `new` one while looking at nodes matching the `condition`. Each node which replaces another one or has been added to the `new` doctree will be yielded. :param condition: A callable which returns either ``True`` or ``False`` for a given node. """ old_iter = old.traverse(condition) new_iter = new.traverse(condition) old_nodes = [] new_nodes = [] ratios = {} seen = set() # compare the nodes each doctree in order for old_node, new_node in zip_longest(old_iter, new_iter): if old_node is None: new_nodes.append(new_node) continue if not getattr(old_node, 'uid', None): # maybe config.gettext_uuid has been changed. 
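                # (added note: a node without a uid could not hand one on to
                # its matching new node below, so create a fresh uid here)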
            old_node.uid = uuid4().hex
        if new_node is None:
            old_nodes.append(old_node)
            continue
        ratio = get_ratio(old_node.rawsource, new_node.rawsource)
        if ratio == 0:
            new_node.uid = old_node.uid
            seen.add(new_node)
        else:
            ratios[old_node, new_node] = ratio
            old_nodes.append(old_node)
            new_nodes.append(new_node)
    # calculate the ratios for each unequal pair of nodes, should we stumble
    # on a pair which is equal we set the uid and add it to the seen ones
    for old_node, new_node in product(old_nodes, new_nodes):
        if new_node in seen or (old_node, new_node) in ratios:
            continue
        ratio = get_ratio(old_node.rawsource, new_node.rawsource)
        if ratio == 0:
            new_node.uid = old_node.uid
            seen.add(new_node)
        else:
            ratios[old_node, new_node] = ratio
    # choose the old node with the best ratio for each new node and set the uid
    # as long as the ratio is under a certain value, in which case we consider
    # them not changed but different
    ratios = sorted(iteritems(ratios), key=itemgetter(1))
    for (old_node, new_node), ratio in ratios:
        if new_node in seen:
            continue
        else:
            seen.add(new_node)
        if ratio < VERSIONING_RATIO:
            new_node.uid = old_node.uid
        else:
            new_node.uid = uuid4().hex
            yield new_node
    # create new uuids for any new node we left out earlier, this happens
    # if one or more nodes are simply added.
    for new_node in set(new_nodes) - seen:
        new_node.uid = uuid4().hex
        yield new_node


def get_ratio(old, new):
    """Return a "similarity ratio" (in percent) representing the similarity
    between the two strings, where 0 means the strings are equal and anything
    above means they are increasingly different.
    """
    if not all([old, new]):
        return VERSIONING_RATIO
    if IS_SPEEDUP:
        return Levenshtein.distance(old, new) / (len(old) / 100.0)
    else:
        return levenshtein_distance(old, new) / (len(old) / 100.0)


def levenshtein_distance(a, b):
    """Return the Levenshtein edit distance between two strings *a* and *b*."""
    if a == b:
        return 0
    if len(a) < len(b):
        a, b = b, a
    if not a:
        return len(b)
    previous_row = range(len(b) + 1)
    for i, column1 in enumerate(a):
        current_row = [i + 1]
        for j, column2 in enumerate(b):
            insertions = previous_row[j + 1] + 1
            deletions = current_row[j] + 1
            substitutions = previous_row[j] + (column1 != column2)
            current_row.append(min(insertions, deletions, substitutions))
        previous_row = current_row
    return previous_row[-1]
Sphinx-1.3.6/sphinx/__init__.py0000664000175000017500000000657712665046141017560 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*-
"""
    Sphinx
    ~~~~~~

    The Sphinx documentation toolchain.

    :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

# Keep this file executable as-is in Python 3!
# (Otherwise getting the version out of it from setup.py is impossible.)

import sys
from os import path

__version__ = '1.3.6'
__released__ = '1.3.6'  # used when Sphinx builds its own docs

# version info for better programmatic use
# possible values for 3rd element: 'alpha', 'beta', 'rc', 'final'
# 'final' has 0 as the last element
version_info = (1, 3, 6, 'final', 0)

package_dir = path.abspath(path.dirname(__file__))

__display_version__ = __version__  # used for command line version
if __version__.endswith('+'):
    # try to find out the changeset hash if checked out from git, and append
    # it to __version__ (since we use this value from setup.py, it gets
    # automatically propagated to an installed copy as well)
    __display_version__ = __version__
    __version__ = __version__[:-1]  # remove '+' for PEP-440 version spec.
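    # The try/except that follows shells out to ``git show`` to append the
    # current commit hash to __display_version__; any failure (no git
    # available, not a source checkout) is deliberately ignored so that
    # importing sphinx still works.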
try: import subprocess p = subprocess.Popen(['git', 'show', '-s', '--pretty=format:%h', path.join(package_dir, '..')], stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = p.communicate() if out: __display_version__ += '/' + out.decode().strip() except Exception: pass def main(argv=sys.argv): if sys.argv[1:2] == ['-M']: sys.exit(make_main(argv)) else: sys.exit(build_main(argv)) def build_main(argv=sys.argv): """Sphinx build "main" command-line entry.""" if (sys.version_info[:3] < (2, 6, 0) or (3, 0, 0) <= sys.version_info[:3] < (3, 3, 0)): sys.stderr.write('Error: Sphinx requires at least Python 2.6 or 3.3 to run.\n') return 1 try: from sphinx import cmdline except ImportError: err = sys.exc_info()[1] errstr = str(err) if errstr.lower().startswith('no module named'): whichmod = errstr[16:] hint = '' if whichmod.startswith('docutils'): whichmod = 'Docutils library' elif whichmod.startswith('jinja'): whichmod = 'Jinja2 library' elif whichmod == 'roman': whichmod = 'roman module (which is distributed with Docutils)' hint = ('This can happen if you upgraded docutils using\n' 'easy_install without uninstalling the old version' 'first.\n') else: whichmod += ' module' sys.stderr.write('Error: The %s cannot be found. ' 'Did you install Sphinx and its dependencies ' 'correctly?\n' % whichmod) if hint: sys.stderr.write(hint) return 1 raise from sphinx.util.compat import docutils_version if docutils_version < (0, 10): sys.stderr.write('Error: Sphinx requires at least Docutils 0.10 to ' 'run.\n') return 1 return cmdline.main(argv) def make_main(argv=sys.argv): """Sphinx build "make mode" entry.""" from sphinx import make_mode return make_mode.run_make_mode(argv[2:]) if __name__ == '__main__': sys.exit(main(sys.argv)) Sphinx-1.3.6/sphinx/directives/0000775000175000017500000000000012665046155017606 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/directives/__init__.py0000664000175000017500000001740012665045070021714 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.directives ~~~~~~~~~~~~~~~~~ Handlers for additional ReST directives. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from docutils import nodes from docutils.parsers.rst import Directive, directives, roles from sphinx import addnodes from sphinx.util.docfields import DocFieldTransformer # import and register directives from sphinx.directives.code import * # noqa from sphinx.directives.other import * # noqa # RE to strip backslash escapes nl_escape_re = re.compile(r'\\\n') strip_backslash_re = re.compile(r'\\(.)') class ObjectDescription(Directive): """ Directive to describe a class, function or similar object. Not used directly, but subclassed (in domain-specific directives) to add custom behavior. """ has_content = True required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = { 'noindex': directives.flag, } # types of doc fields that this directive handles, see sphinx.util.docfields doc_field_types = [] def get_signatures(self): """ Retrieve the signatures to document from the directive arguments. By default, signatures are given as arguments, one per line. Backslash-escaping of newlines is supported. 
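        For example (an illustrative ``py:function`` use; any directive
        derived from this class handles its arguments the same way)::

            .. py:function:: send(data)
                             recv(bufsize, \
                                  flags)

        documents two signatures; the second one is joined across the
        backslash-escaped line break before being parsed.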
""" lines = nl_escape_re.sub('', self.arguments[0]).split('\n') # remove backslashes to support (dummy) escapes; helps Vim highlighting return [strip_backslash_re.sub(r'\1', line.strip()) for line in lines] def handle_signature(self, sig, signode): """ Parse the signature *sig* into individual nodes and append them to *signode*. If ValueError is raised, parsing is aborted and the whole *sig* is put into a single desc_name node. The return value should be a value that identifies the object. It is passed to :meth:`add_target_and_index()` unchanged, and otherwise only used to skip duplicates. """ raise ValueError def add_target_and_index(self, name, sig, signode): """ Add cross-reference IDs and entries to self.indexnode, if applicable. *name* is whatever :meth:`handle_signature()` returned. """ return # do nothing by default def before_content(self): """ Called before parsing content. Used to set information about the current directive context on the build environment. """ pass def after_content(self): """ Called after parsing content. Used to reset information about the current directive context on the build environment. """ pass def run(self): """ Main directive entry function, called by docutils upon encountering the directive. This directive is meant to be quite easily subclassable, so it delegates to several additional methods. What it does: * find out if called as a domain-specific directive, set self.domain * create a `desc` node to fit all description inside * parse standard options, currently `noindex` * create an index node if needed as self.indexnode * parse all given signatures (as returned by self.get_signatures()) using self.handle_signature(), which should either return a name or raise ValueError * add index entries using self.add_target_and_index() * parse the content and handle doc fields in it """ if ':' in self.name: self.domain, self.objtype = self.name.split(':', 1) else: self.domain, self.objtype = '', self.name self.env = self.state.document.settings.env self.indexnode = addnodes.index(entries=[]) node = addnodes.desc() node.document = self.state.document node['domain'] = self.domain # 'desctype' is a backwards compatible attribute node['objtype'] = node['desctype'] = self.objtype node['noindex'] = noindex = ('noindex' in self.options) self.names = [] signatures = self.get_signatures() for i, sig in enumerate(signatures): # add a signature node for each signature in the current unit # and add a reference target for it signode = addnodes.desc_signature(sig, '') signode['first'] = False node.append(signode) try: # name can also be a tuple, e.g. (classname, objname); # this is strictly domain-specific (i.e. 
no assumptions may # be made in this base class) name = self.handle_signature(sig, signode) except ValueError: # signature parsing failed signode.clear() signode += addnodes.desc_name(sig, sig) continue # we don't want an index entry here if name not in self.names: self.names.append(name) if not noindex: # only add target and index entry if this is the first # description of the object with this name in this desc block self.add_target_and_index(name, sig, signode) contentnode = addnodes.desc_content() node.append(contentnode) if self.names: # needed for association of version{added,changed} directives self.env.temp_data['object'] = self.names[0] self.before_content() self.state.nested_parse(self.content, self.content_offset, contentnode) DocFieldTransformer(self).transform_all(contentnode) self.env.temp_data['object'] = None self.after_content() return [self.indexnode, node] # backwards compatible old name DescDirective = ObjectDescription class DefaultRole(Directive): """ Set the default interpreted text role. Overridden from docutils. """ optional_arguments = 1 final_argument_whitespace = False def run(self): if not self.arguments: if '' in roles._roles: # restore the "default" default role del roles._roles[''] return [] role_name = self.arguments[0] role, messages = roles.role(role_name, self.state_machine.language, self.lineno, self.state.reporter) if role is None: error = self.state.reporter.error( 'Unknown interpreted text role "%s".' % role_name, nodes.literal_block(self.block_text, self.block_text), line=self.lineno) return messages + [error] roles._roles[''] = role self.state.document.settings.env.temp_data['default_role'] = role_name return messages class DefaultDomain(Directive): """ Directive to (re-)set the default domain for this source file. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = False option_spec = {} def run(self): env = self.state.document.settings.env domain_name = self.arguments[0].lower() # if domain_name not in env.domains: # # try searching by label # for domain in itervalues(env.domains): # if domain.label.lower() == domain_name: # domain_name = domain.name # break env.temp_data['default_domain'] = env.domains.get(domain_name) return [] directives.register_directive('default-role', DefaultRole) directives.register_directive('default-domain', DefaultDomain) directives.register_directive('describe', ObjectDescription) # new, more consistent, name directives.register_directive('object', ObjectDescription) Sphinx-1.3.6/sphinx/directives/other.py0000664000175000017500000003724412660074372021310 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.directives.other ~~~~~~~~~~~~~~~~~~~~~~~ :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" from six.moves import range from docutils import nodes from docutils.parsers.rst import Directive, directives from docutils.parsers.rst.directives.admonitions import BaseAdmonition from docutils.parsers.rst.directives.misc import Class from docutils.parsers.rst.directives.misc import Include as BaseInclude from sphinx import addnodes from sphinx.locale import versionlabels, _ from sphinx.util import url_re, docname_join from sphinx.util.nodes import explicit_title_re, set_source_info, \ process_index_entry from sphinx.util.matching import patfilter def int_or_nothing(argument): if not argument: return 999 return int(argument) class TocTree(Directive): """ Directive to notify Sphinx about the hierarchical structure of the docs, and to include a table-of-contents like tree in the current document. """ has_content = True required_arguments = 0 optional_arguments = 0 final_argument_whitespace = False option_spec = { 'maxdepth': int, 'name': directives.unchanged, 'caption': directives.unchanged_required, 'glob': directives.flag, 'hidden': directives.flag, 'includehidden': directives.flag, 'numbered': int_or_nothing, 'titlesonly': directives.flag, } def run(self): env = self.state.document.settings.env suffixes = env.config.source_suffix glob = 'glob' in self.options caption = self.options.get('caption') if caption: self.options.setdefault('name', nodes.fully_normalize_name(caption)) ret = [] # (title, ref) pairs, where ref may be a document, or an external link, # and title may be None if the document's title is to be used entries = [] includefiles = [] all_docnames = env.found_docs.copy() # don't add the currently visited file in catch-all patterns all_docnames.remove(env.docname) for entry in self.content: if not entry: continue if glob and ('*' in entry or '?' in entry or '[' in entry): patname = docname_join(env.docname, entry) docnames = sorted(patfilter(all_docnames, patname)) for docname in docnames: all_docnames.remove(docname) # don't include it again entries.append((None, docname)) includefiles.append(docname) if not docnames: ret.append(self.state.document.reporter.warning( 'toctree glob pattern %r didn\'t match any documents' % entry, line=self.lineno)) else: # look for explicit titles ("Some Title ") m = explicit_title_re.match(entry) if m: ref = m.group(2) title = m.group(1) docname = ref else: ref = docname = entry title = None # remove suffixes (backwards compatibility) for suffix in suffixes: if docname.endswith(suffix): docname = docname[:-len(suffix)] break # absolutize filenames docname = docname_join(env.docname, docname) if url_re.match(ref) or ref == 'self': entries.append((title, ref)) elif docname not in env.found_docs: ret.append(self.state.document.reporter.warning( 'toctree contains reference to nonexisting ' 'document %r' % docname, line=self.lineno)) env.note_reread() else: all_docnames.discard(docname) entries.append((title, docname)) includefiles.append(docname) subnode = addnodes.toctree() subnode['parent'] = env.docname # entries contains all entries (self references, external links etc.) 
subnode['entries'] = entries # includefiles only entries that are documents subnode['includefiles'] = includefiles subnode['maxdepth'] = self.options.get('maxdepth', -1) subnode['caption'] = caption subnode['glob'] = glob subnode['hidden'] = 'hidden' in self.options subnode['includehidden'] = 'includehidden' in self.options subnode['numbered'] = self.options.get('numbered', 0) subnode['titlesonly'] = 'titlesonly' in self.options set_source_info(self, subnode) wrappernode = nodes.compound(classes=['toctree-wrapper']) wrappernode.append(subnode) self.add_name(wrappernode) ret.append(wrappernode) return ret class Author(Directive): """ Directive to give the name of the author of the current document or section. Shown in the output only if the show_authors option is on. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): env = self.state.document.settings.env if not env.config.show_authors: return [] para = nodes.paragraph(translatable=False) emph = nodes.emphasis() para += emph if self.name == 'sectionauthor': text = _('Section author: ') elif self.name == 'moduleauthor': text = _('Module author: ') elif self.name == 'codeauthor': text = _('Code author: ') else: text = _('Author: ') emph += nodes.Text(text, text) inodes, messages = self.state.inline_text(self.arguments[0], self.lineno) emph.extend(inodes) return [para] + messages class Index(Directive): """ Directive to add entries to the index. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): arguments = self.arguments[0].split('\n') env = self.state.document.settings.env targetid = 'index-%s' % env.new_serialno('index') targetnode = nodes.target('', '', ids=[targetid]) self.state.document.note_explicit_target(targetnode) indexnode = addnodes.index() indexnode['entries'] = ne = [] indexnode['inline'] = False set_source_info(self, indexnode) for entry in arguments: ne.extend(process_index_entry(entry, targetid)) return [indexnode, targetnode] class VersionChange(Directive): """ Directive to describe a change/addition/deprecation in a specific version. """ has_content = True required_arguments = 1 optional_arguments = 1 final_argument_whitespace = True option_spec = {} def run(self): node = addnodes.versionmodified() node.document = self.state.document set_source_info(self, node) node['type'] = self.name node['version'] = self.arguments[0] text = versionlabels[self.name] % self.arguments[0] if len(self.arguments) == 2: inodes, messages = self.state.inline_text(self.arguments[1], self.lineno+1) para = nodes.paragraph(self.arguments[1], '', *inodes, translatable=False) set_source_info(self, para) node.append(para) else: messages = [] if self.content: self.state.nested_parse(self.content, self.content_offset, node) if len(node): if isinstance(node[0], nodes.paragraph) and node[0].rawsource: content = nodes.inline(node[0].rawsource, translatable=True) content.source = node[0].source content.line = node[0].line content += node[0].children node[0].replace_self(nodes.paragraph('', '', content, translatable=False)) node[0].insert(0, nodes.inline('', '%s: ' % text, classes=['versionmodified'])) else: para = nodes.paragraph('', '', nodes.inline('', '%s.' 
% text, classes=['versionmodified']), translatable=False) node.append(para) env = self.state.document.settings.env # XXX should record node.source as well env.note_versionchange(node['type'], node['version'], node, node.line) return [node] + messages class SeeAlso(BaseAdmonition): """ An admonition mentioning things to look at as reference. """ node_class = addnodes.seealso class TabularColumns(Directive): """ Directive to give an explicit tabulary column definition to LaTeX. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): node = addnodes.tabular_col_spec() node['spec'] = self.arguments[0] set_source_info(self, node) return [node] class Centered(Directive): """ Directive to create a centered line of bold text. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): if not self.arguments: return [] subnode = addnodes.centered() inodes, messages = self.state.inline_text(self.arguments[0], self.lineno) subnode.extend(inodes) return [subnode] + messages class Acks(Directive): """ Directive for a list of names. """ has_content = True required_arguments = 0 optional_arguments = 0 final_argument_whitespace = False option_spec = {} def run(self): node = addnodes.acks() node.document = self.state.document self.state.nested_parse(self.content, self.content_offset, node) if len(node.children) != 1 or not isinstance(node.children[0], nodes.bullet_list): return [self.state.document.reporter.warning( '.. acks content is not a list', line=self.lineno)] return [node] class HList(Directive): """ Directive for a list that gets compacted horizontally. """ has_content = True required_arguments = 0 optional_arguments = 0 final_argument_whitespace = False option_spec = { 'columns': int, } def run(self): ncolumns = self.options.get('columns', 2) node = nodes.paragraph() node.document = self.state.document self.state.nested_parse(self.content, self.content_offset, node) if len(node.children) != 1 or not isinstance(node.children[0], nodes.bullet_list): return [self.state.document.reporter.warning( '.. hlist content is not a list', line=self.lineno)] fulllist = node.children[0] # create a hlist node where the items are distributed npercol, nmore = divmod(len(fulllist), ncolumns) index = 0 newnode = addnodes.hlist() for column in range(ncolumns): endindex = index + (column < nmore and (npercol+1) or npercol) col = addnodes.hlistcol() col += nodes.bullet_list() col[0] += fulllist.children[index:endindex] index = endindex newnode += col return [newnode] class Only(Directive): """ Directive to only include text if the given tag(s) are enabled. """ has_content = True required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): node = addnodes.only() node.document = self.state.document set_source_info(self, node) node['expr'] = self.arguments[0] # Same as util.nested_parse_with_titles but try to handle nested # sections which should be raised higher up the doctree. 
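        # The memo bookkeeping below is saved and cleared so that sections
        # nested inside the directive can be parsed at all; the finally block
        # restores it, and any nested sections are re-attached to an ancestor
        # at the appropriate depth instead of staying inside this node.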
surrounding_title_styles = self.state.memo.title_styles surrounding_section_level = self.state.memo.section_level self.state.memo.title_styles = [] self.state.memo.section_level = 0 try: self.state.nested_parse(self.content, self.content_offset, node, match_titles=1) title_styles = self.state.memo.title_styles if (not surrounding_title_styles or not title_styles or title_styles[0] not in surrounding_title_styles or not self.state.parent): # No nested sections so no special handling needed. return [node] # Calculate the depths of the current and nested sections. current_depth = 0 parent = self.state.parent while parent: current_depth += 1 parent = parent.parent current_depth -= 2 title_style = title_styles[0] nested_depth = len(surrounding_title_styles) if title_style in surrounding_title_styles: nested_depth = surrounding_title_styles.index(title_style) # Use these depths to determine where the nested sections should # be placed in the doctree. n_sects_to_raise = current_depth - nested_depth + 1 parent = self.state.parent for i in range(n_sects_to_raise): if parent.parent: parent = parent.parent parent.append(node) return [] finally: self.state.memo.title_styles = surrounding_title_styles self.state.memo.section_level = surrounding_section_level class Include(BaseInclude): """ Like the standard "Include" directive, but interprets absolute paths "correctly", i.e. relative to source directory. """ def run(self): env = self.state.document.settings.env if self.arguments[0].startswith('<') and \ self.arguments[0].endswith('>'): # docutils "standard" includes, do not do path processing return BaseInclude.run(self) rel_filename, filename = env.relfn2path(self.arguments[0]) self.arguments[0] = filename return BaseInclude.run(self) directives.register_directive('toctree', TocTree) directives.register_directive('sectionauthor', Author) directives.register_directive('moduleauthor', Author) directives.register_directive('codeauthor', Author) directives.register_directive('index', Index) directives.register_directive('deprecated', VersionChange) directives.register_directive('versionadded', VersionChange) directives.register_directive('versionchanged', VersionChange) directives.register_directive('seealso', SeeAlso) directives.register_directive('tabularcolumns', TabularColumns) directives.register_directive('centered', Centered) directives.register_directive('acks', Acks) directives.register_directive('hlist', HList) directives.register_directive('only', Only) directives.register_directive('include', Include) # register the standard rst class directive under a different name # only for backwards compatibility now directives.register_directive('cssclass', Class) # new standard name when default-domain with "class" is in effect directives.register_directive('rst-class', Class) Sphinx-1.3.6/sphinx/directives/code.py0000664000175000017500000003131512665045070021070 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.directives.code ~~~~~~~~~~~~~~~~~~~~~~ :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" import sys import codecs from difflib import unified_diff from docutils import nodes from docutils.parsers.rst import Directive, directives from docutils.statemachine import ViewList from six import string_types from sphinx import addnodes from sphinx.util import parselinenos from sphinx.util.nodes import set_source_info class Highlight(Directive): """ Directive to set the highlighting language for code blocks, as well as the threshold for line numbers. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = False option_spec = { 'linenothreshold': directives.unchanged, } def run(self): if 'linenothreshold' in self.options: try: linenothreshold = int(self.options['linenothreshold']) except Exception: linenothreshold = 10 else: linenothreshold = sys.maxsize return [addnodes.highlightlang(lang=self.arguments[0].strip(), linenothreshold=linenothreshold)] def dedent_lines(lines, dedent): if not dedent: return lines new_lines = [] for line in lines: new_line = line[dedent:] if line.endswith('\n') and not new_line: new_line = '\n' # keep CRLF new_lines.append(new_line) return new_lines def container_wrapper(directive, literal_node, caption): container_node = nodes.container('', literal_block=True, classes=['literal-block-wrapper']) parsed = nodes.Element() directive.state.nested_parse(ViewList([caption], source=''), directive.content_offset, parsed) caption_node = nodes.caption(parsed[0].rawsource, '', *parsed[0].children) caption_node.source = parsed[0].source caption_node.line = parsed[0].line container_node += caption_node container_node += literal_node return container_node class CodeBlock(Directive): """ Directive for a code block with special highlighting or line numbering settings. """ has_content = True required_arguments = 1 optional_arguments = 0 final_argument_whitespace = False option_spec = { 'linenos': directives.flag, 'dedent': int, 'lineno-start': int, 'emphasize-lines': directives.unchanged_required, 'caption': directives.unchanged_required, 'name': directives.unchanged, } def run(self): code = u'\n'.join(self.content) linespec = self.options.get('emphasize-lines') if linespec: try: nlines = len(self.content) hl_lines = [x+1 for x in parselinenos(linespec, nlines)] except ValueError as err: document = self.state.document return [document.reporter.warning(str(err), line=self.lineno)] else: hl_lines = None if 'dedent' in self.options: lines = code.split('\n') lines = dedent_lines(lines, self.options['dedent']) code = '\n'.join(lines) literal = nodes.literal_block(code, code) literal['language'] = self.arguments[0] literal['linenos'] = 'linenos' in self.options or \ 'lineno-start' in self.options extra_args = literal['highlight_args'] = {} if hl_lines is not None: extra_args['hl_lines'] = hl_lines if 'lineno-start' in self.options: extra_args['linenostart'] = self.options['lineno-start'] set_source_info(self, literal) caption = self.options.get('caption') if caption: self.options.setdefault('name', nodes.fully_normalize_name(caption)) literal = container_wrapper(self, literal, caption) # literal will be note_implicit_target that is linked from caption and numref. # when options['name'] is provided, it should be primary ID. self.add_name(literal) return [literal] class LiteralInclude(Directive): """ Like ``.. include:: :literal:``, but only warns if the include file is not found, and does not raise errors. Also has several options for selecting what to include. 
""" has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = { 'dedent': int, 'linenos': directives.flag, 'lineno-start': int, 'lineno-match': directives.flag, 'tab-width': int, 'language': directives.unchanged_required, 'encoding': directives.encoding, 'pyobject': directives.unchanged_required, 'lines': directives.unchanged_required, 'start-after': directives.unchanged_required, 'end-before': directives.unchanged_required, 'prepend': directives.unchanged_required, 'append': directives.unchanged_required, 'emphasize-lines': directives.unchanged_required, 'caption': directives.unchanged, 'name': directives.unchanged, 'diff': directives.unchanged_required, } def read_with_encoding(self, filename, document, codec_info, encoding): f = None try: f = codecs.StreamReaderWriter(open(filename, 'rb'), codec_info[2], codec_info[3], 'strict') lines = f.readlines() lines = dedent_lines(lines, self.options.get('dedent')) return lines except (IOError, OSError): return [document.reporter.warning( 'Include file %r not found or reading it failed' % filename, line=self.lineno)] except UnicodeError: return [document.reporter.warning( 'Encoding %r used for reading included file %r seems to ' 'be wrong, try giving an :encoding: option' % (encoding, filename))] finally: if f is not None: f.close() def run(self): document = self.state.document if not document.settings.file_insertion_enabled: return [document.reporter.warning('File insertion disabled', line=self.lineno)] env = document.settings.env rel_filename, filename = env.relfn2path(self.arguments[0]) if 'pyobject' in self.options and 'lines' in self.options: return [document.reporter.warning( 'Cannot use both "pyobject" and "lines" options', line=self.lineno)] if 'lineno-match' in self.options and 'lineno-start' in self.options: return [document.reporter.warning( 'Cannot use both "lineno-match" and "lineno-start"', line=self.lineno)] if 'lineno-match' in self.options and \ (set(['append', 'prepend']) & set(self.options.keys())): return [document.reporter.warning( 'Cannot use "lineno-match" and "append" or "prepend"', line=self.lineno)] encoding = self.options.get('encoding', env.config.source_encoding) codec_info = codecs.lookup(encoding) lines = self.read_with_encoding(filename, document, codec_info, encoding) if lines and not isinstance(lines[0], string_types): return lines diffsource = self.options.get('diff') if diffsource is not None: tmp, fulldiffsource = env.relfn2path(diffsource) difflines = self.read_with_encoding(fulldiffsource, document, codec_info, encoding) if not isinstance(difflines[0], string_types): return difflines diff = unified_diff( difflines, lines, diffsource, self.arguments[0]) lines = list(diff) linenostart = self.options.get('lineno-start', 1) objectname = self.options.get('pyobject') if objectname is not None: from sphinx.pycode import ModuleAnalyzer analyzer = ModuleAnalyzer.for_file(filename, '') tags = analyzer.find_tags() if objectname not in tags: return [document.reporter.warning( 'Object named %r not found in include file %r' % (objectname, filename), line=self.lineno)] else: lines = lines[tags[objectname][1]-1: tags[objectname][2]-1] if 'lineno-match' in self.options: linenostart = tags[objectname][1] linespec = self.options.get('lines') if linespec: try: linelist = parselinenos(linespec, len(lines)) except ValueError as err: return [document.reporter.warning(str(err), line=self.lineno)] if 'lineno-match' in self.options: # make sure the line list is not "disjoint". 
previous = linelist[0] for line_number in linelist[1:]: if line_number == previous + 1: previous = line_number continue return [document.reporter.warning( 'Cannot use "lineno-match" with a disjoint set of ' '"lines"', line=self.lineno)] linenostart = linelist[0] + 1 # just ignore non-existing lines lines = [lines[i] for i in linelist if i < len(lines)] if not lines: return [document.reporter.warning( 'Line spec %r: no lines pulled from include file %r' % (linespec, filename), line=self.lineno)] linespec = self.options.get('emphasize-lines') if linespec: try: hl_lines = [x+1 for x in parselinenos(linespec, len(lines))] except ValueError as err: return [document.reporter.warning(str(err), line=self.lineno)] else: hl_lines = None startafter = self.options.get('start-after') endbefore = self.options.get('end-before') if startafter is not None or endbefore is not None: use = not startafter res = [] for line_number, line in enumerate(lines): if not use and startafter and startafter in line: if 'lineno-match' in self.options: linenostart += line_number + 1 use = True elif use and endbefore and endbefore in line: break elif use: res.append(line) lines = res prepend = self.options.get('prepend') if prepend: lines.insert(0, prepend + '\n') append = self.options.get('append') if append: lines.append(append + '\n') text = ''.join(lines) if self.options.get('tab-width'): text = text.expandtabs(self.options['tab-width']) retnode = nodes.literal_block(text, text, source=filename) set_source_info(self, retnode) if diffsource: # if diff is set, set udiff retnode['language'] = 'udiff' if 'language' in self.options: retnode['language'] = self.options['language'] retnode['linenos'] = 'linenos' in self.options or \ 'lineno-start' in self.options or \ 'lineno-match' in self.options extra_args = retnode['highlight_args'] = {} if hl_lines is not None: extra_args['hl_lines'] = hl_lines extra_args['linenostart'] = linenostart env.note_dependency(rel_filename) caption = self.options.get('caption') if caption is not None: if not caption: caption = self.arguments[0] self.options.setdefault('name', nodes.fully_normalize_name(caption)) retnode = container_wrapper(self, retnode, caption) # retnode will be note_implicit_target that is linked from caption and numref. # when options['name'] is provided, it should be primary ID. self.add_name(retnode) return [retnode] directives.register_directive('highlight', Highlight) directives.register_directive('highlightlang', Highlight) # old directives.register_directive('code-block', CodeBlock) directives.register_directive('sourcecode', CodeBlock) directives.register_directive('literalinclude', LiteralInclude) Sphinx-1.3.6/sphinx/domains/0000775000175000017500000000000012665046155017077 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/domains/c.py0000664000175000017500000002542212665045070017673 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.domains.c ~~~~~~~~~~~~~~~~ The C language domain. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re import string from docutils import nodes from sphinx import addnodes from sphinx.roles import XRefRole from sphinx.locale import l_, _ from sphinx.domains import Domain, ObjType from sphinx.directives import ObjectDescription from sphinx.util.nodes import make_refnode from sphinx.util.docfields import Field, TypedField # RE to split at word boundaries wsplit_re = re.compile(r'(\W+)') # REs for C signatures c_sig_re = re.compile( r'''^([^(]*?) 
# return type ([\w:.]+) \s* # thing name (colon allowed for C++) (?: \((.*)\) )? # optionally arguments (\s+const)? $ # const specifier ''', re.VERBOSE) c_funcptr_sig_re = re.compile( r'''^([^(]+?) # return type (\( [^()]+ \)) \s* # name in parentheses \( (.*) \) # arguments (\s+const)? $ # const specifier ''', re.VERBOSE) c_funcptr_arg_sig_re = re.compile( r'''^\s*([^(,]+?) # return type \( ([^()]+) \) \s* # name in parentheses \( (.*) \) # arguments (\s+const)? # const specifier \s*(?=$|,) # end with comma or end of string ''', re.VERBOSE) c_funcptr_name_re = re.compile(r'^\(\s*\*\s*(.*?)\s*\)$') class CObject(ObjectDescription): """ Description of a C language object. """ doc_field_types = [ TypedField('parameter', label=l_('Parameters'), names=('param', 'parameter', 'arg', 'argument'), typerolename='type', typenames=('type',)), Field('returnvalue', label=l_('Returns'), has_arg=False, names=('returns', 'return')), Field('returntype', label=l_('Return type'), has_arg=False, names=('rtype',)), ] # These C types aren't described anywhere, so don't try to create # a cross-reference to them stopwords = set(( 'const', 'void', 'char', 'wchar_t', 'int', 'short', 'long', 'float', 'double', 'unsigned', 'signed', 'FILE', 'clock_t', 'time_t', 'ptrdiff_t', 'size_t', 'ssize_t', 'struct', '_Bool', )) def _parse_type(self, node, ctype): # add cross-ref nodes for all words for part in [_f for _f in wsplit_re.split(ctype) if _f]: tnode = nodes.Text(part, part) if part[0] in string.ascii_letters+'_' and \ part not in self.stopwords: pnode = addnodes.pending_xref( '', refdomain='c', reftype='type', reftarget=part, modname=None, classname=None) pnode += tnode node += pnode else: node += tnode def _parse_arglist(self, arglist): while True: m = c_funcptr_arg_sig_re.match(arglist) if m: yield m.group() arglist = c_funcptr_arg_sig_re.sub('', arglist) if ',' in arglist: _, arglist = arglist.split(',', 1) else: break else: if ',' in arglist: arg, arglist = arglist.split(',', 1) yield arg else: yield arglist break def handle_signature(self, sig, signode): """Transform a C signature into RST nodes.""" # first try the function pointer signature regex, it's more specific m = c_funcptr_sig_re.match(sig) if m is None: m = c_sig_re.match(sig) if m is None: raise ValueError('no match') rettype, name, arglist, const = m.groups() signode += addnodes.desc_type('', '') self._parse_type(signode[-1], rettype) try: classname, funcname = name.split('::', 1) classname += '::' signode += addnodes.desc_addname(classname, classname) signode += addnodes.desc_name(funcname, funcname) # name (the full name) is still both parts except ValueError: signode += addnodes.desc_name(name, name) # clean up parentheses from canonical name m = c_funcptr_name_re.match(name) if m: name = m.group(1) typename = self.env.ref_context.get('c:type') if self.name == 'c:member' and typename: fullname = typename + '.' 
+ name else: fullname = name if not arglist: if self.objtype == 'function': # for functions, add an empty parameter list signode += addnodes.desc_parameterlist() if const: signode += addnodes.desc_addname(const, const) return fullname paramlist = addnodes.desc_parameterlist() arglist = arglist.replace('`', '').replace('\\ ', '') # remove markup # this messes up function pointer types, but not too badly ;) for arg in self._parse_arglist(arglist): arg = arg.strip() param = addnodes.desc_parameter('', '', noemph=True) try: m = c_funcptr_arg_sig_re.match(arg) if m: self._parse_type(param, m.group(1) + '(') param += nodes.emphasis(m.group(2), m.group(2)) self._parse_type(param, ')(' + m.group(3) + ')') if m.group(4): param += addnodes.desc_addname(m.group(4), m.group(4)) else: ctype, argname = arg.rsplit(' ', 1) self._parse_type(param, ctype) # separate by non-breaking space in the output param += nodes.emphasis(' '+argname, u'\xa0'+argname) except ValueError: # no argument name given, only the type self._parse_type(param, arg) paramlist += param signode += paramlist if const: signode += addnodes.desc_addname(const, const) return fullname def get_index_text(self, name): if self.objtype == 'function': return _('%s (C function)') % name elif self.objtype == 'member': return _('%s (C member)') % name elif self.objtype == 'macro': return _('%s (C macro)') % name elif self.objtype == 'type': return _('%s (C type)') % name elif self.objtype == 'var': return _('%s (C variable)') % name else: return '' def add_target_and_index(self, name, sig, signode): # for C API items we add a prefix since names are usually not qualified # by a module name and so easily clash with e.g. section titles targetname = 'c.' + name if targetname not in self.state.document.ids: signode['names'].append(targetname) signode['ids'].append(targetname) signode['first'] = (not self.names) self.state.document.note_explicit_target(signode) inv = self.env.domaindata['c']['objects'] if name in inv: self.state_machine.reporter.warning( 'duplicate C object description of %s, ' % name + 'other instance in ' + self.env.doc2path(inv[name][0]), line=self.lineno) inv[name] = (self.env.docname, self.objtype) indextext = self.get_index_text(name) if indextext: self.indexnode['entries'].append(('single', indextext, targetname, '')) def before_content(self): self.typename_set = False if self.name == 'c:type': if self.names: self.env.ref_context['c:type'] = self.names[0] self.typename_set = True def after_content(self): if self.typename_set: self.env.ref_context.pop('c:type', None) class CXRefRole(XRefRole): def process_link(self, env, refnode, has_explicit_title, title, target): if not has_explicit_title: target = target.lstrip('~') # only has a meaning for the title # if the first character is a tilde, don't display the module/class # parts of the contents if title[0:1] == '~': title = title[1:] dot = title.rfind('.') if dot != -1: title = title[dot+1:] return title, target class CDomain(Domain): """C language domain.""" name = 'c' label = 'C' object_types = { 'function': ObjType(l_('function'), 'func'), 'member': ObjType(l_('member'), 'member'), 'macro': ObjType(l_('macro'), 'macro'), 'type': ObjType(l_('type'), 'type'), 'var': ObjType(l_('variable'), 'data'), } directives = { 'function': CObject, 'member': CObject, 'macro': CObject, 'type': CObject, 'var': CObject, } roles = { 'func': CXRefRole(fix_parens=True), 'member': CXRefRole(), 'macro': CXRefRole(), 'data': CXRefRole(), 'type': CXRefRole(), } initial_data = { 'objects': {}, # fullname 
-> docname, objtype } def clear_doc(self, docname): for fullname, (fn, _l) in list(self.data['objects'].items()): if fn == docname: del self.data['objects'][fullname] def merge_domaindata(self, docnames, otherdata): # XXX check duplicates for fullname, (fn, objtype) in otherdata['objects'].items(): if fn in docnames: self.data['objects'][fullname] = (fn, objtype) def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): # strip pointer asterisk target = target.rstrip(' *') if target not in self.data['objects']: return None obj = self.data['objects'][target] return make_refnode(builder, fromdocname, obj[0], 'c.' + target, contnode, target) def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): # strip pointer asterisk target = target.rstrip(' *') if target not in self.data['objects']: return [] obj = self.data['objects'][target] return [('c:' + self.role_for_objtype(obj[1]), make_refnode(builder, fromdocname, obj[0], 'c.' + target, contnode, target))] def get_objects(self): for refname, (docname, type) in list(self.data['objects'].items()): yield (refname, refname, type, docname, 'c.' + refname, 1) Sphinx-1.3.6/sphinx/domains/cpp.py0000664000175000017500000027724412665045070020246 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.domains.cpp ~~~~~~~~~~~~~~~~~~ The C++ language domain. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from copy import deepcopy from six import iteritems, text_type from docutils import nodes from sphinx import addnodes from sphinx.roles import XRefRole from sphinx.locale import l_, _ from sphinx.domains import Domain, ObjType from sphinx.directives import ObjectDescription from sphinx.util.nodes import make_refnode from sphinx.util.compat import Directive from sphinx.util.pycompat import UnicodeMixin from sphinx.util.docfields import Field, GroupedField """ Important note on ids: Multiple id generation schemes are used due to backwards compatibility. - v1: 1.2.3 <= version < 1.3 The style used before the rewrite. It is not the actual old code, but a replication of the behaviour. - v2: 1.3 <= version < now Standardised mangling scheme from http://mentorembedded.github.io/cxx-abi/abi.html#mangling though not completely implemented. All versions are generated and attached to elements. The newest is used for the index. All of the versions should work as permalinks. See http://www.nongnu.org/hcb/ for the grammar. common grammar things: simple-declaration -> attribute-specifier-seq[opt] decl-specifier-seq[opt] init-declarator-list[opt] ; # Drop the semi-colon. For now: drop the attributes (TODO). # Use at most 1 init-declerator. 
-> decl-specifier-seq init-declerator -> decl-specifier-seq declerator initializer decl-specifier -> storage-class-specifier -> "static" (only for member_object and function_object) | "register" | type-specifier -> trailing-type-specifier | function-specifier -> "inline" | "virtual" | "explicit" (only for function_object) | "friend" (only for function_object) | "constexpr" (only for member_object and function_object) trailing-type-specifier -> simple-type-specifier | elaborated-type-specifier | typename-specifier | cv-qualifier -> "const" | "volatile" stricter grammar for decl-specifier-seq (with everything, each object uses a subset): visibility storage-class-specifier function-specifier "friend" "constexpr" "volatile" "const" trailing-type-specifier # where trailing-type-specifier can no be cv-qualifier # Inside e.g., template paramters a strict subset is used # (see type-specifier-seq) trailing-type-specifier -> simple-type-specifier -> ::[opt] nested-name-specifier[opt] type-name | ::[opt] nested-name-specifier "template" simple-template-id | "char" | "bool" | ect. | decltype-specifier | elaborated-type-specifier -> class-key attribute-specifier-seq[opt] ::[opt] nested-name-specifier[opt] identifier | class-key ::[opt] nested-name-specifier[opt] template[opt] simple-template-id | "enum" ::[opt] nested-name-specifier[opt] identifier | typename-specifier -> "typename" ::[opt] nested-name-specifier identifier | "typename" ::[opt] nested-name-specifier template[opt] simple-template-id class-key -> "class" | "struct" | "union" type-name ->* identifier | simple-template-id # ignoring attributes and decltype, and then some left-factoring trailing-type-specifier -> rest-of-trailing ("class" | "struct" | "union" | "typename") rest-of-trailing build-in -> "char" | "bool" | ect. decltype-specifier rest-of-trailing -> (with some simplification) "::"[opt] list-of-elements-separated-by-:: element -> "template"[opt] identifier ("<" template-argument-list ">")[opt] template-argument-list -> template-argument "..."[opt] | template-argument-list "," template-argument "..."[opt] template-argument -> constant-expression | type-specifier-seq abstract-declerator | id-expression declerator -> ptr-declerator | noptr-declarator parameters-and-qualifiers trailing-return-type (TODO: for now we don't support trailing-eturn-type) ptr-declerator -> noptr-declerator | ptr-operator ptr-declarator noptr-declerator -> declarator-id attribute-specifier-seq[opt] -> "..."[opt] id-expression | rest-of-trailing | noptr-declerator parameters-and-qualifiers | noptr-declarator "[" constant-expression[opt] "]" attribute-specifier-seq[opt] | "(" ptr-declarator ")" ptr-operator -> "*" attribute-specifier-seq[opt] cv-qualifier-seq[opt] | "& attribute-specifier-seq[opt] | "&&" attribute-specifier-seq[opt] | "::"[opt] nested-name-specifier "*" attribute-specifier-seq[opt] cv-qualifier-seq[opt] # TOOD: not implemented # function_object must use a parameters-and-qualifiers, the others may # use it (e.g., function poitners) parameters-and-qualifiers -> "(" parameter-clause ")" attribute-specifier-seq[opt] cv-qualifier-seq[opt] ref-qualifier[opt] exception-specification[opt] ref-qualifier -> "&" | "&&" exception-specification -> "noexcept" ("(" constant-expression ")")[opt] "throw" ("(" type-id-list ")")[opt] # TODO: we don't implement attributes # member functions can have initializers, but we fold them into here memberFunctionInit -> "=" "0" # (note: only "0" is allowed as the value, according to the standard, # right?) 
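    (Illustrative only: a member function declaration such as
        virtual void f(int a, double b) const noexcept = 0
    is an example of what the function_object grammar above is meant to
    cover.)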
enum-head -> enum-key attribute-specifier-seq[opt] nested-name-specifier[opt] identifier enum-base[opt] enum-key -> "enum" | "enum struct" | "enum class" enum-base -> ":" type enumerator-definition -> identifier | identifier "=" constant-expression We additionally add the possibility for specifying the visibility as the first thing. type_object: goal: either a single type (e.g., "MyClass:Something_T" or a typedef-like thing (e.g. "Something Something_T" or "int I_arr[]" grammar, single type: based on a type in a function parameter, but without a name: parameter-declaration -> attribute-specifier-seq[opt] decl-specifier-seq abstract-declarator[opt] # Drop the attributes -> decl-specifier-seq abstract-declarator[opt] grammar, typedef-like: no initilizer decl-specifier-seq declerator member_object: goal: as a type_object which must have a declerator, and optionally with a initializer grammar: decl-specifier-seq declerator initializer function_object: goal: a function declaration, TODO: what about templates? for now: skip grammar: no initializer decl-specifier-seq declerator class_object: goal: a class declaration, but with specification of a base class TODO: what about templates? for now: skip grammar: nested-name | nested-name ":" 'comma-separated list of nested-name optionally with visibility' enum_object: goal: an unscoped enum or a scoped enum, optionally with the underlying type specified grammar: ("class" | "struct")[opt] visibility[opt] nested-name (":" type)[opt] enumerator_object: goal: an element in a scoped or unscoped enum. The name should be injected according to the scopedness. grammar: nested-name ("=" constant-expression) namespace_object: goal: a directive to put all following declarations in a specific scope grammar: nested-name """ _identifier_re = re.compile(r'(~?\b[a-zA-Z_][a-zA-Z0-9_]*)\b') _whitespace_re = re.compile(r'\s+(?u)') _string_re = re.compile(r"[LuU8]?('([^'\\]*(?:\\.[^'\\]*)*)'" r'|"([^"\\]*(?:\\.[^"\\]*)*)")', re.S) _visibility_re = re.compile(r'\b(public|private|protected)\b') _operator_re = re.compile(r'''(?x) \[\s*\] | \(\s*\) | \+\+ | -- | ->\*? | \, | (<<|>>)=? | && | \|\| | [!<>=/*%+|&^~-]=? 
''') # see http://en.cppreference.com/w/cpp/keyword _keywords = [ 'alignas', 'alignof', 'and', 'and_eq', 'asm', 'auto', 'bitand', 'bitor', 'bool', 'break', 'case', 'catch', 'char', 'char16_t', 'char32_t', 'class', 'compl', 'concept', 'const', 'constexpr', 'const_cast', 'continue', 'decltype', 'default', 'delete', 'do', 'double', 'dynamic_cast', 'else', 'enum', 'explicit', 'export', 'extern', 'false', 'float', 'for', 'friend', 'goto', 'if', 'inline', 'int', 'long', 'mutable', 'namespace', 'new', 'noexcept', 'not', 'not_eq', 'nullptr', 'operator', 'or', 'or_eq', 'private', 'protected', 'public', 'register', 'reinterpret_cast', 'requires', 'return', 'short', 'signed', 'sizeof', 'static', 'static_assert', 'static_cast', 'struct', 'switch', 'template', 'this', 'thread_local', 'throw', 'true', 'try', 'typedef', 'typeid', 'typename', 'union', 'unsigned', 'using', 'virtual', 'void', 'volatile', 'wchar_t', 'while', 'xor', 'xor_eq' ] # ------------------------------------------------------------------------------ # Id v1 constants # ------------------------------------------------------------------------------ _id_fundamental_v1 = { 'char': 'c', 'signed char': 'c', 'unsigned char': 'C', 'int': 'i', 'signed int': 'i', 'unsigned int': 'U', 'long': 'l', 'signed long': 'l', 'unsigned long': 'L', 'bool': 'b' } _id_shorthands_v1 = { 'std::string': 'ss', 'std::ostream': 'os', 'std::istream': 'is', 'std::iostream': 'ios', 'std::vector': 'v', 'std::map': 'm' } _id_operator_v1 = { 'new': 'new-operator', 'new[]': 'new-array-operator', 'delete': 'delete-operator', 'delete[]': 'delete-array-operator', # the arguments will make the difference between unary and binary # '+(unary)' : 'ps', # '-(unary)' : 'ng', # '&(unary)' : 'ad', # '*(unary)' : 'de', '~': 'inv-operator', '+': 'add-operator', '-': 'sub-operator', '*': 'mul-operator', '/': 'div-operator', '%': 'mod-operator', '&': 'and-operator', '|': 'or-operator', '^': 'xor-operator', '=': 'assign-operator', '+=': 'add-assign-operator', '-=': 'sub-assign-operator', '*=': 'mul-assign-operator', '/=': 'div-assign-operator', '%=': 'mod-assign-operator', '&=': 'and-assign-operator', '|=': 'or-assign-operator', '^=': 'xor-assign-operator', '<<': 'lshift-operator', '>>': 'rshift-operator', '<<=': 'lshift-assign-operator', '>>=': 'rshift-assign-operator', '==': 'eq-operator', '!=': 'neq-operator', '<': 'lt-operator', '>': 'gt-operator', '<=': 'lte-operator', '>=': 'gte-operator', '!': 'not-operator', '&&': 'sand-operator', '||': 'sor-operator', '++': 'inc-operator', '--': 'dec-operator', ',': 'comma-operator', '->*': 'pointer-by-pointer-operator', '->': 'pointer-operator', '()': 'call-operator', '[]': 'subscript-operator' } # ------------------------------------------------------------------------------ # Id v2 constants # ------------------------------------------------------------------------------ _id_prefix_v2 = '_CPPv2' _id_fundamental_v2 = { # not all of these are actually parsed as fundamental types, TODO: do that 'void': 'v', 'bool': 'b', 'char': 'c', 'signed char': 'a', 'unsigned char': 'h', 'wchar_t': 'w', 'char32_t': 'Di', 'char16_t': 'Ds', 'short': 's', 'short int': 's', 'signed short': 's', 'signed short int': 's', 'unsigned short': 't', 'unsigned short int': 't', 'int': 'i', 'signed': 'i', 'signed int': 'i', 'unsigned': 'j', 'unsigned int': 'j', 'long': 'l', 'long int': 'l', 'signed long': 'l', 'signed long int': 'l', 'unsigned long': 'm', 'unsigned long int': 'm', 'long long': 'x', 'long long int': 'x', 'signed long long': 'x', 'signed long long int': 'x', 
'unsigned long long': 'y', 'unsigned long long int': 'y', 'float': 'f', 'double': 'd', 'long double': 'e', 'auto': 'Da', 'decltype(auto)': 'Dc', 'std::nullptr_t': 'Dn' } _id_operator_v2 = { 'new': 'nw', 'new[]': 'na', 'delete': 'dl', 'delete[]': 'da', # the arguments will make the difference between unary and binary # '+(unary)' : 'ps', # '-(unary)' : 'ng', # '&(unary)' : 'ad', # '*(unary)' : 'de', '~': 'co', '+': 'pl', '-': 'mi', '*': 'ml', '/': 'dv', '%': 'rm', '&': 'an', '|': 'or', '^': 'eo', '=': 'aS', '+=': 'pL', '-=': 'mI', '*=': 'mL', '/=': 'dV', '%=': 'rM', '&=': 'aN', '|=': 'oR', '^=': 'eO', '<<': 'ls', '>>': 'rs', '<<=': 'lS', '>>=': 'rS', '==': 'eq', '!=': 'ne', '<': 'lt', '>': 'gt', '<=': 'le', '>=': 'ge', '!': 'nt', '&&': 'aa', '||': 'oo', '++': 'pp', '--': 'mm', ',': 'cm', '->*': 'pm', '->': 'pt', '()': 'cl', '[]': 'ix' } class NoOldIdError(UnicodeMixin, Exception): # Used to avoid implementing unneeded id generation for old id schmes. def __init__(self, description=""): self.description = description def __unicode__(self): return self.description class DefinitionError(UnicodeMixin, Exception): def __init__(self, description): self.description = description def __unicode__(self): return self.description class ASTBase(UnicodeMixin): def __eq__(self, other): if type(self) is not type(other): return False try: for key, value in iteritems(self.__dict__): if value != getattr(other, key): return False except AttributeError: return False return True def __ne__(self, other): return not self.__eq__(other) __hash__ = None def clone(self): """Clone a definition expression node.""" return deepcopy(self) def get_id_v1(self): """Return the v1 id for the node.""" raise NotImplementedError(repr(self)) def get_id_v2(self): """Return the v2 id for the node.""" raise NotImplementedError(repr(self)) def get_name(self): """Return the name. Returns either `None` or a node with a name you might call :meth:`split_owner` on. """ raise NotImplementedError(repr(self)) def prefix_nested_name(self, prefix): """Prefix a name node (a node returned by :meth:`get_name`).""" raise NotImplementedError(repr(self)) def __unicode__(self): raise NotImplementedError(repr(self)) def __repr__(self): return '<%s %s>' % (self.__class__.__name__, self) def _verify_description_mode(mode): if mode not in ('lastIsName', 'noneIsName', 'markType', 'param'): raise Exception("Description mode '%s' is invalid." % mode) class ASTOperatorBuildIn(ASTBase): def __init__(self, op): self.op = op def get_id_v1(self): if self.op not in _id_operator_v1: raise Exception('Internal error: Build-in operator "%s" can not ' 'be mapped to an id.' % self.op) return _id_operator_v1[self.op] def get_id_v2(self): if self.op not in _id_operator_v2: raise Exception('Internal error: Build-in operator "%s" can not ' 'be mapped to an id.' 
% self.op) return _id_operator_v2[self.op] def __unicode__(self): if self.op in ('new', 'new[]', 'delete', 'delete[]'): return u'operator ' + self.op else: return u'operator' + self.op def get_name_no_template(self): return text_type(self) def describe_signature(self, signode, mode, env, prefix, parentScope): _verify_description_mode(mode) identifier = text_type(self) if mode == 'lastIsName': signode += addnodes.desc_name(identifier, identifier) else: signode += addnodes.desc_addname(identifier, identifier) class ASTOperatorType(ASTBase): def __init__(self, type): self.type = type def __unicode__(self): return u''.join(['operator ', text_type(self.type)]) def get_id_v1(self): return u'castto-%s-operator' % self.type.get_id_v1() def get_id_v2(self): return u'cv' + self.type.get_id_v2() def get_name_no_template(self): return text_type(self) def describe_signature(self, signode, mode, env, prefix, parentScope): _verify_description_mode(mode) identifier = text_type(self) if mode == 'lastIsName': signode += addnodes.desc_name(identifier, identifier) else: signode += addnodes.desc_addname(identifier, identifier) class ASTTemplateArgConstant(ASTBase): def __init__(self, value): self.value = value def __unicode__(self): return text_type(self.value) def get_id_v1(self): return text_type(self).replace(u' ', u'-') def get_id_v2(self): # TODO: doing this properly needs parsing of expressions, let's just # juse it verbatim for now return u'X' + text_type(self) + u'E' def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) signode += nodes.Text(text_type(self)) class ASTNestedNameElementEmpty(ASTBase): """Used if a nested name starts with ::""" def get_id_v1(self): return u'' def get_id_v2(self): return u'' def __unicode__(self): return u'' def describe_signature(self, signode, mode, env, prefix, parentScope): pass class ASTNestedNameElement(ASTBase): def __init__(self, identifier, templateArgs): self.identifier = identifier self.templateArgs = templateArgs def get_id_v1(self): res = [] if self.identifier == 'size_t': res.append('s') else: res.append(self.identifier) if self.templateArgs: res.append(':') res.append(u'.'.join(a.get_id_v1() for a in self.templateArgs)) res.append(':') return u''.join(res) def get_id_v2(self): res = [] if self.identifier == "std": res.append(u'St') elif self.identifier[0] == "~": # a destructor, just use an arbitrary version of dtors res.append("D0") else: res.append(text_type(len(self.identifier))) res.append(self.identifier) if self.templateArgs: res.append('I') for a in self.templateArgs: res.append(a.get_id_v2()) res.append('E') return u''.join(res) def __unicode__(self): res = [] res.append(self.identifier) if self.templateArgs: res.append('<') first = True for a in self.templateArgs: if not first: res.append(', ') first = False res.append(text_type(a)) res.append('>') return u''.join(res) def get_name_no_template(self): return text_type(self.identifier) def describe_signature(self, signode, mode, env, prefix, parentScope): _verify_description_mode(mode) if mode == 'markType': targetText = prefix + text_type(self) pnode = addnodes.pending_xref( '', refdomain='cpp', reftype='type', reftarget=targetText, modname=None, classname=None) pnode['cpp:parent'] = [parentScope] pnode += nodes.Text(text_type(self.identifier)) signode += pnode elif mode == 'lastIsName': name = text_type(self.identifier) signode += addnodes.desc_name(name, name) else: raise Exception('Unknown description mode: %s' % mode) if self.templateArgs: signode += 
nodes.Text('<') first = True for a in self.templateArgs: if not first: signode += nodes.Text(', ') first = False a.describe_signature(signode, 'markType', env, parentScope=parentScope) signode += nodes.Text('>') class ASTNestedName(ASTBase): def __init__(self, names): assert len(names) > 0 self.names = names @property def name(self): return self def get_id_v1(self): tt = text_type(self) if tt in _id_shorthands_v1: return _id_shorthands_v1[tt] else: res = [] id = self.names[0].get_id_v1() if len(id) > 0: res.append(id) for n in self.names[1:]: res.append(n.get_id_v1()) return u'::'.join(res) def get_id_v2(self, modifiers=""): res = [] if len(self.names) > 1 or len(modifiers) > 0: res.append('N') res.append(modifiers) for n in self.names: res.append(n.get_id_v2()) if len(self.names) > 1 or len(modifiers) > 0: res.append('E') return u''.join(res) def get_name_no_last_template(self): res = u'::'.join([text_type(n) for n in self.names[:-1]]) if len(self.names) > 1: res += '::' res += self.names[-1].get_name_no_template() return res def prefix_nested_name(self, prefix): if self.names[0] == '': return self # it's defined at global namespace, don't tuch it assert isinstance(prefix, ASTNestedName) names = prefix.names[:] names.extend(self.names) return ASTNestedName(names) def __unicode__(self): return u'::'.join([text_type(n) for n in self.names]) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) if mode == 'lastIsName': addname = u'::'.join([text_type(n) for n in self.names[:-1]]) if len(self.names) > 1: addname += u'::' name = text_type(self.names[-1]) signode += addnodes.desc_addname(addname, addname) self.names[-1].describe_signature(signode, mode, env, '', parentScope=parentScope) elif mode == 'noneIsName': name = text_type(self) signode += nodes.Text(name) elif mode == 'param': name = text_type(self) signode += nodes.emphasis(name, name) elif mode == 'markType': # each element should be a pending xref targeting the complete # prefix. however, only the identifier part should be a link, such # that template args can be a link as well. prefix = '' first = True for name in self.names: if not first: signode += nodes.Text('::') prefix += '::' first = False if name != '': name.describe_signature(signode, mode, env, prefix, parentScope=parentScope) prefix += text_type(name) else: raise Exception('Unknown description mode: %s' % mode) class ASTTrailingTypeSpecFundamental(ASTBase): def __init__(self, name): self.name = name def __unicode__(self): return self.name def get_id_v1(self): res = [] for a in self.name.split(' '): if a in _id_fundamental_v1: res.append(_id_fundamental_v1[a]) else: res.append(a) return u'-'.join(res) def get_id_v2(self): if self.name not in _id_fundamental_v2: raise Exception( 'Semi-internal error: Fundamental type "%s" can not be mapped ' 'to an id. Is it a true fundamental type? If not so, the ' 'parser should have rejected it.' 
% self.name) return _id_fundamental_v2[self.name] def describe_signature(self, signode, mode, env, parentScope): signode += nodes.Text(text_type(self.name)) class ASTTrailingTypeSpecName(ASTBase): def __init__(self, prefix, nestedName): self.prefix = prefix self.nestedName = nestedName @property def name(self): return self.nestedName def get_id_v1(self): return self.nestedName.get_id_v1() def get_id_v2(self): return self.nestedName.get_id_v2() def __unicode__(self): res = [] if self.prefix: res.append(self.prefix) res.append(' ') res.append(text_type(self.nestedName)) return u''.join(res) def describe_signature(self, signode, mode, env, parentScope): if self.prefix: signode += addnodes.desc_annotation(self.prefix, self.prefix) signode += nodes.Text(' ') self.nestedName.describe_signature(signode, mode, env, parentScope=parentScope) class ASTFunctinoParameter(ASTBase): def __init__(self, arg, ellipsis=False): self.arg = arg self.ellipsis = ellipsis def get_id_v1(self): if self.ellipsis: return 'z' else: return self.arg.get_id_v1() def get_id_v2(self): if self.ellipsis: return 'z' else: return self.arg.get_id_v2() def __unicode__(self): if self.ellipsis: return '...' else: return text_type(self.arg) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) if self.ellipsis: signode += nodes.Text('...') else: self.arg.describe_signature(signode, mode, env, parentScope=parentScope) class ASTParametersQualifiers(ASTBase): def __init__(self, args, volatile, const, refQual, exceptionSpec, override, final, initializer): self.args = args self.volatile = volatile self.const = const self.refQual = refQual self.exceptionSpec = exceptionSpec self.override = override self.final = final self.initializer = initializer # Id v1 ------------------------------------------------------------------ def get_modifiers_id_v1(self): res = [] if self.volatile: res.append('V') if self.const: res.append('C') if self.refQual == '&&': res.append('O') elif self.refQual == '&': res.append('R') return u''.join(res) def get_param_id_v1(self): if len(self.args) == 0: return '' else: return u'__' + u'.'.join(a.get_id_v1() for a in self.args) # Id v2 ------------------------------------------------------------------ def get_modifiers_id_v2(self): res = [] if self.volatile: res.append('V') if self.const: res.append('K') if self.refQual == '&&': res.append('O') elif self.refQual == '&': res.append('R') return u''.join(res) def get_param_id_v2(self): if len(self.args) == 0: return 'v' else: return u''.join(a.get_id_v2() for a in self.args) def __unicode__(self): res = [] res.append('(') first = True for a in self.args: if not first: res.append(', ') first = False res.append(text_type(a)) res.append(')') if self.volatile: res.append(' volatile') if self.const: res.append(' const') if self.refQual: res.append(' ') res.append(self.refQual) if self.exceptionSpec: res.append(' ') res.append(text_type(self.exceptionSpec)) if self.final: res.append(' final') if self.override: res.append(' override') if self.initializer: res.append(' = ') res.append(self.initializer) return u''.join(res) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) paramlist = addnodes.desc_parameterlist() for arg in self.args: param = addnodes.desc_parameter('', '', noemph=True) if mode == 'lastIsName': # i.e., outer-function params arg.describe_signature(param, 'param', env, parentScope=parentScope) else: arg.describe_signature(param, 'markType', env, parentScope=parentScope) paramlist 
+= param signode += paramlist def _add_anno(signode, text): signode += nodes.Text(' ') signode += addnodes.desc_annotation(text, text) def _add_text(signode, text): signode += nodes.Text(' ' + text) if self.volatile: _add_anno(signode, 'volatile') if self.const: _add_anno(signode, 'const') if self.refQual: _add_text(signode, self.refQual) if self.exceptionSpec: _add_anno(signode, text_type(self.exceptionSpec)) if self.final: _add_anno(signode, 'final') if self.override: _add_anno(signode, 'override') if self.initializer: _add_text(signode, '= ' + text_type(self.initializer)) class ASTDeclSpecsSimple(ASTBase): def __init__(self, storage, inline, virtual, explicit, constexpr, volatile, const): self.storage = storage self.inline = inline self.virtual = virtual self.explicit = explicit self.constexpr = constexpr self.volatile = volatile self.const = const def mergeWith(self, other): if not other: return self return ASTDeclSpecsSimple(self.storage or other.storage, self.inline or other.inline, self.virtual or other.virtual, self.explicit or other.explicit, self.constexpr or other.constexpr, self.volatile or other.volatile, self.const or other.const) def __unicode__(self): res = [] if self.storage: res.append(self.storage) if self.inline: res.append('inline') if self.virtual: res.append('virtual') if self.explicit: res.append('explicit') if self.constexpr: res.append('constexpr') if self.volatile: res.append('volatile') if self.const: res.append('const') return u' '.join(res) def describe_signature(self, modifiers): def _add(modifiers, text): if len(modifiers) > 0: modifiers.append(nodes.Text(' ')) modifiers.append(addnodes.desc_annotation(text, text)) if self.storage: _add(modifiers, self.storage) if self.inline: _add(modifiers, 'inline') if self.virtual: _add(modifiers, 'virtual') if self.explicit: _add(modifiers, 'explicit') if self.constexpr: _add(modifiers, 'constexpr') if self.volatile: _add(modifiers, 'volatile') if self.const: _add(modifiers, 'const') class ASTDeclSpecs(ASTBase): def __init__(self, outer, visibility, leftSpecs, rightSpecs, trailing): # leftSpecs and rightSpecs are used for output # allSpecs are used for id generation self.outer = outer self.visibility = visibility self.leftSpecs = leftSpecs self.rightSpecs = rightSpecs self.allSpecs = self.leftSpecs.mergeWith(self.rightSpecs) self.trailingTypeSpec = trailing @property def name(self): return self.trailingTypeSpec.name def get_id_v1(self): res = [] res.append(self.trailingTypeSpec.get_id_v1()) if self.allSpecs.volatile: res.append('V') if self.allSpecs.const: res.append('C') return u''.join(res) def get_id_v2(self): res = [] if self.leftSpecs.volatile or self.rightSpecs.volatile: res.append('V') if self.leftSpecs.const or self.rightSpecs.volatile: res.append('K') res.append(self.trailingTypeSpec.get_id_v2()) return u''.join(res) def _print_visibility(self): return (self.visibility and not (self.outer in ('type', 'member', 'function') and self.visibility == 'public')) def __unicode__(self): res = [] if self._print_visibility(): res.append(self.visibility) l = text_type(self.leftSpecs) if len(l) > 0: if len(res) > 0: res.append(" ") res.append(l) if self.trailingTypeSpec: if len(res) > 0: res.append(" ") res.append(text_type(self.trailingTypeSpec)) r = text_type(self.rightSpecs) if len(r) > 0: if len(res) > 0: res.append(" ") res.append(r) return "".join(res) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) modifiers = [] def _add(modifiers, text): if len(modifiers) > 0: 
modifiers.append(nodes.Text(' ')) modifiers.append(addnodes.desc_annotation(text, text)) if self._print_visibility(): _add(modifiers, self.visibility) self.leftSpecs.describe_signature(modifiers) for m in modifiers: signode += m if self.trailingTypeSpec: if len(modifiers) > 0: signode += nodes.Text(' ') self.trailingTypeSpec.describe_signature(signode, mode, env, parentScope=parentScope) modifiers = [] self.rightSpecs.describe_signature(modifiers) if len(modifiers) > 0: signode += nodes.Text(' ') for m in modifiers: signode += m class ASTArray(ASTBase): def __init__(self, size): self.size = size def __unicode__(self): return u''.join(['[', text_type(self.size), ']']) def get_id_v1(self): return u'A' def get_id_v2(self): # TODO: this should maybe be done differently return u'A' + text_type(self.size) + u'_' def describe_signature(self, signode, mode, env): _verify_description_mode(mode) signode += nodes.Text(text_type(self)) class ASTDeclaratorPtr(ASTBase): def __init__(self, next, volatile, const): assert next self.next = next self.volatile = volatile self.const = const @property def name(self): return self.next.name def require_space_after_declSpecs(self): # TODO: if has paramPack, then False ? return True def __unicode__(self): res = ['*'] if self.volatile: res.append('volatile ') if self.const: res.append('const ') res.append(text_type(self.next)) return u''.join(res) # Id v1 ------------------------------------------------------------------ def get_modifiers_id_v1(self): return self.next.get_modifiers_id_v1() def get_param_id_v1(self): return self.next.get_param_id_v1() def get_ptr_suffix_id_v1(self): res = 'P' if self.volatile: res += 'V' if self.const: res += 'C' return res + self.next.get_ptr_suffix_id_v1() # Id v2 ------------------------------------------------------------------ def get_modifiers_id_v2(self): return self.next.get_modifiers_id_v2() def get_param_id_v2(self): return self.next.get_param_id_v2() def get_ptr_suffix_id_v2(self): res = [self.next.get_ptr_suffix_id_v2()] res.append('P') if self.volatile: res.append('V') if self.const: res.append('C') return u''.join(res) def get_type_id_v2(self, returnTypeId): # ReturnType *next, so we are part of the return type of 'next res = ['P'] if self.volatile: res.append('V') if self.const: res.append('C') res.append(returnTypeId) return self.next.get_type_id_v2(returnTypeId=u''.join(res)) # ------------------------------------------------------------------------ def is_function_type(self): return self.next.is_function_type() def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) signode += nodes.Text("*") self.next.describe_signature(signode, mode, env, parentScope=parentScope) class ASTDeclaratorRef(ASTBase): def __init__(self, next): assert next self.next = next @property def name(self): return self.next.name def require_space_after_declSpecs(self): return self.next.require_space_after_declSpecs() def __unicode__(self): return '&' + text_type(self.next) # Id v1 ------------------------------------------------------------------ def get_modifiers_id_v1(self): return self.next.get_modifiers_id_v1() def get_param_id_v1(self): # only the parameters (if any) return self.next.get_param_id_v1() def get_ptr_suffix_id_v1(self): return u'R' + self.next.get_ptr_suffix_id_v1() # Id v2 ------------------------------------------------------------------ def get_modifiers_id_v2(self): return self.next.get_modifiers_id_v2() def get_param_id_v2(self): # only the parameters (if any) return 
self.next.get_param_id_v2() def get_ptr_suffix_id_v2(self): return self.next.get_ptr_suffix_id_v2() + u'R' def get_type_id_v2(self, returnTypeId): # ReturnType &next, so we are part of the return type of 'next return self.next.get_type_id_v2(returnTypeId=u'R' + returnTypeId) # ------------------------------------------------------------------------ def is_function_type(self): return self.next.is_function_type() def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) signode += nodes.Text("&") self.next.describe_signature(signode, mode, env, parentScope=parentScope) class ASTDeclaratorParamPack(ASTBase): def __init__(self, next): assert next self.next = next @property def name(self): return self.next.name def require_space_after_declSpecs(self): return False def __unicode__(self): res = text_type(self.next) if self.next.name: res = ' ' + res return '...' + res # Id v1 ------------------------------------------------------------------ def get_modifiers_id_v1(self): return self.next.get_modifiers_id_v1() def get_param_id_v1(self): # only the parameters (if any) return self.next.get_param_id_v1() def get_ptr_suffix_id_v1(self): return 'Dp' + self.next.get_ptr_suffix_id_v2() # Id v2 ------------------------------------------------------------------ def get_modifiers_id_v2(self): return self.next.get_modifiers_id_v2() def get_param_id_v2(self): # only the parameters (if any) return self.next.get_param_id_v2() def get_ptr_suffix_id_v2(self): return self.next.get_ptr_suffix_id_v2() + u'Dp' def get_type_id_v2(self, returnTypeId): # ReturnType... next, so we are part of the return type of 'next return self.next.get_type_id_v2(returnTypeId=u'Dp' + returnTypeId) # ------------------------------------------------------------------------ def is_function_type(self): return self.next.is_function_type() def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) signode += nodes.Text("...") if self.next.name: signode += nodes.Text(' ') self.next.describe_signature(signode, mode, env, parentScope=parentScope) class ASTDeclaratorParen(ASTBase): def __init__(self, inner, next): assert inner assert next self.inner = inner self.next = next # TODO: we assume the name, params, and qualifiers are in inner @property def name(self): return self.inner.name def require_space_after_declSpecs(self): return True def __unicode__(self): res = ['('] res.append(text_type(self.inner)) res.append(')') res.append(text_type(self.next)) return ''.join(res) # Id v1 ------------------------------------------------------------------ def get_modifiers_id_v1(self): return self.inner.get_modifiers_id_v1() def get_param_id_v1(self): # only the parameters (if any) return self.inner.get_param_id_v1() def get_ptr_suffix_id_v1(self): raise NoOldIdError() # TODO: was this implemented before? 
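# ---------------------------------------------------------------------------
# Editorial note -- an illustrative sketch, not part of the original cpp.py:
# the ASTDeclarator* classes form a chain through ``self.next``.  For the v2
# ids, get_type_id_v2() threads the current "return type" id through that
# chain, each node prepending its own marker ('P' for a pointer, 'R' for a
# reference, 'Dp' for a parameter pack), while the innermost
# ASTDecleratorNameParamQual either wraps the result into a function type
# ('F' + return + params + 'E') or passes it through unchanged.  Assuming the
# fundamental-type table earlier in this file maps 'int' to 'i', a
# declaration such as ``int *&x`` would compose as
#
#     'i'  ->  Ptr: 'P' + 'i' = 'Pi'  ->  Ref: 'R' + 'Pi' = 'RPi'
#
# mirroring the Itanium-style mangling these v2 ids are modelled on.
# ---------------------------------------------------------------------------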
return self.next.get_ptr_suffix_id_v2() + \ self.inner.get_ptr_suffix_id_v2() # Id v2 ------------------------------------------------------------------ def get_modifiers_id_v2(self): return self.inner.get_modifiers_id_v2() def get_param_id_v2(self): # only the parameters (if any) return self.inner.get_param_id_v2() def get_ptr_suffix_id_v2(self): return self.inner.get_ptr_suffix_id_v2() + \ self.next.get_ptr_suffix_id_v2() def get_type_id_v2(self, returnTypeId): # ReturnType (inner)next, so 'inner' returns everything outside nextId = self.next.get_type_id_v2(returnTypeId) return self.inner.get_type_id_v2(returnTypeId=nextId) # ------------------------------------------------------------------------ def is_function_type(self): return self.inner.is_function_type() def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) signode += nodes.Text('(') self.inner.describe_signature(signode, mode, env, parentScope=parentScope) signode += nodes.Text(')') self.next.describe_signature(signode, "noneIsName", env, parentScope=parentScope) class ASTDecleratorNameParamQual(ASTBase): def __init__(self, declId, arrayOps, paramQual): self.declId = declId self.arrayOps = arrayOps self.paramQual = paramQual @property def name(self): return self.declId # Id v1 ------------------------------------------------------------------ def get_modifiers_id_v1(self): # only the modifiers for a function, e.g., # cv-qualifiers if self.paramQual: return self.paramQual.get_modifiers_id_v1() raise Exception( "This should only be called on a function: %s" % text_type(self)) def get_param_id_v1(self): # only the parameters (if any) if self.paramQual: return self.paramQual.get_param_id_v1() else: return '' def get_ptr_suffix_id_v1(self): # only the array specifiers return u''.join(a.get_id_v1() for a in self.arrayOps) # Id v2 ------------------------------------------------------------------ def get_modifiers_id_v2(self): # only the modifiers for a function, e.g., # cv-qualifiers if self.paramQual: return self.paramQual.get_modifiers_id_v2() raise Exception( "This should only be called on a function: %s" % text_type(self)) def get_param_id_v2(self): # only the parameters (if any) if self.paramQual: return self.paramQual.get_param_id_v2() else: return '' def get_ptr_suffix_id_v2(self): # only the array specifiers return u''.join(a.get_id_v2() for a in self.arrayOps) def get_type_id_v2(self, returnTypeId): res = [] # TOOD: can we actually have both array ops and paramQual? 
res.append(self.get_ptr_suffix_id_v2()) if self.paramQual: res.append(self.get_modifiers_id_v2()) res.append('F') res.append(returnTypeId) res.append(self.get_param_id_v2()) res.append('E') else: res.append(returnTypeId) return u''.join(res) # ------------------------------------------------------------------------ def require_space_after_declSpecs(self): return self.declId is not None def is_function_type(self): return self.paramQual is not None def __unicode__(self): res = [] if self.declId: res.append(text_type(self.declId)) for op in self.arrayOps: res.append(text_type(op)) if self.paramQual: res.append(text_type(self.paramQual)) return u''.join(res) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) if self.declId: self.declId.describe_signature(signode, mode, env, parentScope=parentScope) for op in self.arrayOps: op.describe_signature(signode, mode, env) if self.paramQual: self.paramQual.describe_signature(signode, mode, env, parentScope=parentScope) class ASTInitializer(ASTBase): def __init__(self, value): self.value = value def __unicode__(self): return u''.join([' = ', text_type(self.value)]) def describe_signature(self, signode, mode): _verify_description_mode(mode) signode += nodes.Text(text_type(self)) class ASTType(ASTBase): def __init__(self, declSpecs, decl): self.declSpecs = declSpecs self.decl = decl self.objectType = None @property def name(self): name = self.decl.name if not name: name = self.declSpecs.name return name def get_id_v1(self): res = [] if self.objectType: # needs the name if self.objectType == 'function': # also modifiers res.append(self.prefixedName.get_id_v1()) res.append(self.decl.get_param_id_v1()) res.append(self.decl.get_modifiers_id_v1()) if (self.declSpecs.leftSpecs.constexpr or (self.declSpecs.rightSpecs and self.declSpecs.rightSpecs.constexpr)): res.append('CE') elif self.objectType == 'type': # just the name res.append(self.prefixedName.get_id_v1()) else: print(self.objectType) assert False else: # only type encoding if self.decl.is_function_type(): raise NoOldIdError() res.append(self.declSpecs.get_id_v1()) res.append(self.decl.get_ptr_suffix_id_v1()) res.append(self.decl.get_param_id_v1()) return u''.join(res) def get_id_v2(self): res = [] if self.objectType: # needs the name res.append(_id_prefix_v2) if self.objectType == 'function': # also modifiers modifiers = self.decl.get_modifiers_id_v2() res.append(self.prefixedName.get_id_v2(modifiers)) res.append(self.decl.get_param_id_v2()) elif self.objectType == 'type': # just the name res.append(self.prefixedName.get_id_v2()) else: print(self.objectType) assert False else: # only type encoding # the 'returnType' of a non-function type is simply just the last # type, i.e., for 'int*' it is 'int' returnTypeId = self.declSpecs.get_id_v2() typeId = self.decl.get_type_id_v2(returnTypeId) res.append(typeId) return u''.join(res) def __unicode__(self): res = [] declSpecs = text_type(self.declSpecs) res.append(declSpecs) if self.decl.require_space_after_declSpecs() and len(declSpecs) > 0: res.append(u' ') res.append(text_type(self.decl)) return u''.join(res) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) self.declSpecs.describe_signature(signode, 'markType', env, parentScope=parentScope) if (self.decl.require_space_after_declSpecs() and len(text_type(self.declSpecs)) > 0): signode += nodes.Text(' ') self.decl.describe_signature(signode, mode, env, parentScope=parentScope) class ASTTypeWithInit(ASTBase): def 
__init__(self, type, init): self.objectType = None self.type = type self.init = init @property def name(self): return self.type.name def get_id_v1(self): if self.objectType == 'member': return self.prefixedName.get_id_v1() + u'__' + self.type.get_id_v1() else: return self.type.get_id_v1() def get_id_v2(self): if self.objectType == 'member': return _id_prefix_v2 + self.prefixedName.get_id_v2() else: return self.type.get_id_v2() def __unicode__(self): res = [] res.append(text_type(self.type)) if self.init: res.append(text_type(self.init)) return u''.join(res) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) self.type.describe_signature(signode, mode, env, parentScope=parentScope) if self.init: self.init.describe_signature(signode, mode) class ASTBaseClass(ASTBase): def __init__(self, name, visibility): self.name = name self.visibility = visibility def __unicode__(self): res = [] if self.visibility != 'private': res.append(self.visibility) res.append(' ') res.append(text_type(self.name)) return u''.join(res) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) if self.visibility != 'private': signode += addnodes.desc_annotation(self.visibility, self.visibility) signode += nodes.Text(' ') self.name.describe_signature(signode, mode, env, parentScope=parentScope) class ASTClass(ASTBase): def __init__(self, name, visibility, bases): self.name = name self.visibility = visibility self.bases = bases def get_id_v1(self): return self.prefixedName.get_id_v1() # name = _id_shortwords.get(self.name) # if name is not None: # return name # return self.name.replace(u' ', u'-') def get_id_v2(self): return _id_prefix_v2 + self.prefixedName.get_id_v2() def __unicode__(self): res = [] if self.visibility != 'public': res.append(self.visibility) res.append(' ') res.append(text_type(self.name)) if len(self.bases) > 0: res.append(' : ') first = True for b in self.bases: if not first: res.append(', ') first = False res.append(text_type(b)) return u''.join(res) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) if self.visibility != 'public': signode += addnodes.desc_annotation( self.visibility, self.visibility) signode += nodes.Text(' ') self.name.describe_signature(signode, mode, env, parentScope=parentScope) if len(self.bases) > 0: signode += nodes.Text(' : ') for b in self.bases: b.describe_signature(signode, mode, env, parentScope=parentScope) signode += nodes.Text(', ') signode.pop() class ASTEnum(ASTBase): def __init__(self, name, visibility, scoped, underlyingType): self.name = name self.visibility = visibility self.scoped = scoped self.underlyingType = underlyingType def get_id_v1(self): raise NoOldIdError() def get_id_v2(self): return _id_prefix_v2 + self.prefixedName.get_id_v2() def __unicode__(self): res = [] if self.scoped: res.append(self.scoped) res.append(' ') if self.visibility != 'public': res.append(self.visibility) res.append(' ') res.append(text_type(self.name)) if self.underlyingType: res.append(' : ') res.append(text_type(self.underlyingType)) return u''.join(res) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) # self.scoped has been done by the CPPEnumObject if self.visibility != 'public': signode += addnodes.desc_annotation( self.visibility, self.visibility) signode += nodes.Text(' ') self.name.describe_signature(signode, mode, env, parentScope=parentScope) if self.underlyingType: signode += nodes.Text(' : ') 
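# ---------------------------------------------------------------------------
# Editorial note -- illustrative only, not part of the original source:
# ASTClass and ASTEnum correspond to directive signatures such as
#
#     .. cpp:class:: MyClass : public Base1, private Base2
#     .. cpp:enum-class:: Color : unsigned int
#
# where the part after ':' is either the base-class list (each base with an
# optional access specifier, 'private' being assumed when none is given) or
# the enum's underlying type.
# ---------------------------------------------------------------------------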
self.underlyingType.describe_signature(signode, 'noneIsName', env, parentScope=parentScope) class ASTEnumerator(ASTBase): def __init__(self, name, init): self.name = name self.init = init def get_id_v1(self): raise NoOldIdError() def get_id_v2(self): return _id_prefix_v2 + self.prefixedName.get_id_v2() def __unicode__(self): res = [] res.append(text_type(self.name)) if self.init: res.append(text_type(self.init)) return u''.join(res) def describe_signature(self, signode, mode, env, parentScope): _verify_description_mode(mode) self.name.describe_signature(signode, mode, env, parentScope=parentScope) if self.init: self.init.describe_signature(signode, 'noneIsName') class DefinitionParser(object): # those without signedness and size modifiers # see http://en.cppreference.com/w/cpp/language/types _simple_fundemental_types = ( 'void', 'bool', 'char', 'wchar_t', 'char16_t', 'char32_t', 'int', 'float', 'double', 'auto' ) _prefix_keys = ('class', 'struct', 'union', 'typename') def __init__(self, definition): self.definition = definition.strip() self.pos = 0 self.end = len(self.definition) self.last_match = None self._previous_state = (0, None) def fail(self, msg): indicator = '-' * self.pos + '^' raise DefinitionError( 'Invalid definition: %s [error at %d]\n %s\n %s' % (msg, self.pos, self.definition, indicator)) def match(self, regex): match = regex.match(self.definition, self.pos) if match is not None: self._previous_state = (self.pos, self.last_match) self.pos = match.end() self.last_match = match return True return False def backout(self): self.pos, self.last_match = self._previous_state def skip_string(self, string): strlen = len(string) if self.definition[self.pos:self.pos + strlen] == string: self.pos += strlen return True return False def skip_word(self, word): return self.match(re.compile(r'\b%s\b' % re.escape(word))) def skip_ws(self): return self.match(_whitespace_re) def skip_word_and_ws(self, word): if self.skip_word(word): self.skip_ws() return True return False @property def eof(self): return self.pos >= self.end @property def current_char(self): try: return self.definition[self.pos] except IndexError: return 'EOF' @property def matched_text(self): if self.last_match is not None: return self.last_match.group() def read_rest(self): rv = self.definition[self.pos:] self.pos = self.end return rv def assert_end(self): self.skip_ws() if not self.eof: self.fail('expected end of definition, got %r' % self.definition[self.pos:]) def _parse_expression(self, end): # Stupidly "parse" an expression. # 'end' should be a list of characters which ends the expression. assert end self.skip_ws() startPos = self.pos if self.match(_string_re): value = self.matched_text else: # TODO: add handling of more bracket-like things, and quote handling brackets = {'(': ')', '[': ']'} symbols = [] while not self.eof: if (len(symbols) == 0 and self.current_char in end): break if self.current_char in brackets.keys(): symbols.append(brackets[self.current_char]) elif len(symbols) > 0 and self.current_char == symbols[-1]: symbols.pop() self.pos += 1 if self.eof: self.fail("Could not find end of expression starting at %d." % startPos) value = self.definition[startPos:self.pos].strip() return value.strip() def _parse_operator(self): self.skip_ws() # adapted from the old code # thank god, a regular operator definition if self.match(_operator_re): return ASTOperatorBuildIn(self.matched_text) # new/delete operator? 
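# ---------------------------------------------------------------------------
# Editorial note -- a minimal usage sketch, not part of the original cpp.py:
# the directive classes further down drive this parser by constructing it
# from the signature text and calling one of the parse_*_object() entry
# points, roughly:
#
#     parser = DefinitionParser('void foo(int a, double b) const')
#     ast = parser.parse_function_object()   # -> ASTType, objectType set
#     parser.assert_end()                    # the whole string must be used
#
# Internally this is a hand-written recursive-descent parser over the raw
# string: match()/skip_string()/skip_word() advance self.pos, fail() raises
# DefinitionError with a caret marking the offending position, and backout()
# restores the state saved by the last successful match().
# ---------------------------------------------------------------------------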
for op in 'new', 'delete': if not self.skip_word(op): continue self.skip_ws() if self.skip_string('['): self.skip_ws() if not self.skip_string(']'): self.fail('Expected "]" after "operator ' + op + '["') op += '[]' return ASTOperatorBuildIn(op) # oh well, looks like a cast operator definition. # In that case, eat another type. type = self._parse_type(named=False, outer="operatorCast") return ASTOperatorType(type) def _parse_nested_name(self): names = [] self.skip_ws() if self.skip_string('::'): names.append(ASTNestedNameElementEmpty()) while 1: self.skip_ws() if self.skip_word_and_ws('template'): self.fail("'template' in nested name not implemented.") elif self.skip_word_and_ws('operator'): op = self._parse_operator() names.append(op) else: if not self.match(_identifier_re): self.fail("Expected identifier in nested name.") identifier = self.matched_text # make sure there isn't a keyword if identifier in _keywords: self.fail("Expected identifier in nested name, " "got keyword: %s" % identifier) templateArgs = None self.skip_ws() if self.skip_string('<'): templateArgs = [] while 1: pos = self.pos try: type = self._parse_type(named=False) templateArgs.append(type) except DefinitionError: self.pos = pos try: value = self._parse_expression(end=[',', '>']) except DefinitionError: assert False # TODO: make nice error templateArgs.append(ASTTemplateArgConstant(value)) self.skip_ws() if self.skip_string('>'): break elif self.skip_string(','): continue else: self.fail('Expected ">" or "," in template ' 'argument list.') names.append(ASTNestedNameElement(identifier, templateArgs)) self.skip_ws() if not self.skip_string('::'): break return ASTNestedName(names) def _parse_trailing_type_spec(self): # fundemental types self.skip_ws() for t in self._simple_fundemental_types: if self.skip_word(t): return ASTTrailingTypeSpecFundamental(t) # TODO: this could/should be more strict elements = [] self.skip_ws() if self.skip_word_and_ws('signed'): elements.append('signed') elif self.skip_word_and_ws('unsigned'): elements.append('unsigned') while 1: if self.skip_word_and_ws('short'): elements.append('short') elif self.skip_word_and_ws('long'): elements.append('long') else: break if self.skip_word_and_ws('int'): elements.append('int') elif self.skip_word_and_ws('double'): elements.append('double') if len(elements) > 0: return ASTTrailingTypeSpecFundamental(u' '.join(elements)) # decltype self.skip_ws() if self.skip_word_and_ws('decltype'): self.fail('"decltype(.)" in trailing_type_spec not implemented') # prefixed prefix = None self.skip_ws() for k in self._prefix_keys: if self.skip_word_and_ws(k): prefix = k break nestedName = self._parse_nested_name() return ASTTrailingTypeSpecName(prefix, nestedName) def _parse_parameters_and_qualifiers(self, paramMode): self.skip_ws() if not self.skip_string('('): if paramMode == 'function': self.fail('Expecting "(" in parameters_and_qualifiers.') else: return None args = [] self.skip_ws() if not self.skip_string(')'): while 1: self.skip_ws() if self.skip_string('...'): args.append(ASTFunctinoParameter(None, True)) self.skip_ws() if not self.skip_string(')'): self.fail('Expected ")" after "..." 
in ' 'parameters_and_qualifiers.') break if paramMode == 'function': arg = self._parse_type_with_init(outer=None, named='maybe') else: arg = self._parse_type(named=False) # TODO: parse default parameters args.append(ASTFunctinoParameter(arg)) self.skip_ws() if self.skip_string(','): continue elif self.skip_string(')'): break else: self.fail( 'Expecting "," or ")" in parameters_and_qualifiers, ' 'got "%s".' % self.current_char) if paramMode != 'function': return ASTParametersQualifiers( args, None, None, None, None, None, None, None) self.skip_ws() const = self.skip_word_and_ws('const') volatile = self.skip_word_and_ws('volatile') if not const: # the can be permuted const = self.skip_word_and_ws('const') refQual = None if self.skip_string('&&'): refQual = '&&' if not refQual and self.skip_string('&'): refQual = '&' exceptionSpec = None override = None final = None initializer = None self.skip_ws() if self.skip_string('noexcept'): exceptionSpec = 'noexcept' self.skip_ws() if self.skip_string('('): self.fail('Parameterised "noexcept" not implemented.') self.skip_ws() override = self.skip_word_and_ws('override') final = self.skip_word_and_ws('final') if not override: override = self.skip_word_and_ws( 'override') # they can be permuted self.skip_ws() if self.skip_string('='): self.skip_ws() valid = ('0', 'delete', 'default') for w in valid: if self.skip_word_and_ws(w): initializer = w break if not initializer: self.fail( 'Expected "%s" in initializer-specifier.' % u'" or "'.join(valid)) return ASTParametersQualifiers( args, volatile, const, refQual, exceptionSpec, override, final, initializer) def _parse_decl_specs_simple(self, outer, typed): """Just parse the simple ones.""" storage = None inline = None virtual = None explicit = None constexpr = None volatile = None const = None while 1: # accept any permutation of a subset of some decl-specs self.skip_ws() if not storage: if outer in ('member', 'function'): if self.skip_word('static'): storage = 'static' continue if outer == 'member': if self.skip_word('mutable'): storage = 'mutable' continue if self.skip_word('register'): storage = 'register' continue if outer == 'function': # function-specifiers if not inline: inline = self.skip_word('inline') if inline: continue if not virtual: virtual = self.skip_word('virtual') if virtual: continue if not explicit: explicit = self.skip_word('explicit') if explicit: continue if not constexpr and outer in ('member', 'function'): constexpr = self.skip_word("constexpr") if constexpr: continue if not volatile and typed: volatile = self.skip_word('volatile') if volatile: continue if not const and typed: const = self.skip_word('const') if const: continue break return ASTDeclSpecsSimple(storage, inline, virtual, explicit, constexpr, volatile, const) def _parse_decl_specs(self, outer, typed=True): if outer: if outer not in ('type', 'member', 'function'): raise Exception('Internal error, unknown outer "%s".' 
% outer) """ visibility storage-class-specifier function-specifier "constexpr" "volatile" "const" trailing-type-specifier storage-class-specifier -> "static" (only for member_object and function_object) | "register" function-specifier -> "inline" | "virtual" | "explicit" (only for function_object) "constexpr" (only for member_object and function_object) """ visibility = None leftSpecs = None rightSpecs = None if outer: self.skip_ws() if self.match(_visibility_re): visibility = self.matched_text leftSpecs = self._parse_decl_specs_simple(outer, typed) if typed: trailing = self._parse_trailing_type_spec() rightSpecs = self._parse_decl_specs_simple(outer, typed) else: trailing = None return ASTDeclSpecs(outer, visibility, leftSpecs, rightSpecs, trailing) def _parse_declarator_name_param_qual(self, named, paramMode, typed): # now we should parse the name, and then suffixes if named == 'maybe': try: declId = self._parse_nested_name() except DefinitionError: declId = None elif named: declId = self._parse_nested_name() else: declId = None self.skip_ws() if typed and declId: if self.skip_string("*"): self.fail("Member pointers not implemented.") arrayOps = [] while 1: self.skip_ws() if typed and self.skip_string('['): value = self._parse_expression(end=[']']) res = self.skip_string(']') assert res arrayOps.append(ASTArray(value)) continue else: break paramQual = self._parse_parameters_and_qualifiers(paramMode) return ASTDecleratorNameParamQual(declId=declId, arrayOps=arrayOps, paramQual=paramQual) def _parse_declerator(self, named, paramMode, typed=True): # 'typed' here means 'parse return type stuff' if paramMode not in ('type', 'function', 'operatorCast'): raise Exception( "Internal error, unknown paramMode '%s'." % paramMode) self.skip_ws() if typed and self.skip_string('*'): self.skip_ws() volatile = False const = False while 1: if not volatile: volatile = self.skip_word_and_ws('volatile') if volatile: continue if not const: const = self.skip_word_and_ws('const') if const: continue break next = self._parse_declerator(named, paramMode, typed) return ASTDeclaratorPtr(next=next, volatile=volatile, const=const) # TODO: shouldn't we parse an R-value ref here first? elif typed and self.skip_string("&"): next = self._parse_declerator(named, paramMode, typed) return ASTDeclaratorRef(next=next) elif typed and self.skip_string("..."): next = self._parse_declerator(named, paramMode, False) return ASTDeclaratorParamPack(next=next) elif typed and self.current_char == '(': # note: peeking, not skipping if paramMode == "operatorCast": # TODO: we should be able to parse cast operators which return # function pointers. For now, just hax it and ignore. return ASTDecleratorNameParamQual(declId=None, arrayOps=[], paramQual=None) # maybe this is the beginning of params and quals,try that first, # otherwise assume it's noptr->declarator > ( ptr-declarator ) startPos = self.pos try: # assume this is params and quals res = self._parse_declarator_name_param_qual(named, paramMode, typed) return res except DefinitionError as exParamQual: self.pos = startPos try: assert self.current_char == '(' self.skip_string('(') # TODO: hmm, if there is a name, it must be in inner, right? # TODO: hmm, if there must be parameters, they must b # inside, right? 
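# -------------------------------------------------------------------
# Editorial note -- illustrative only, not part of the original
# source: having peeked at a '(', the code above first tries to read
# it as a parameter list with qualifiers; if that raises
# DefinitionError it rewinds (self.pos = startPos) and re-parses the
# same text as a parenthesised declarator, "( ptr-declarator )".
# For a signature like
#
#     int (*callback)(double)
#
# the first attempt fails (a parameter type cannot start with '*'),
# so the fallback below treats ``(*callback)`` as the inner
# declarator and the trailing ``(double)`` as the parameter list.
# -------------------------------------------------------------------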
inner = self._parse_declerator(named, paramMode, typed) if not self.skip_string(')'): self.fail("Expected ')' in \"( ptr-declarator )\"") next = self._parse_declerator(named=False, paramMode="type", typed=typed) return ASTDeclaratorParen(inner=inner, next=next) except DefinitionError as exNoPtrParen: raise DefinitionError( "If declId, parameters, and qualifiers {\n%s\n" "} else If parenthesis in noptr-declarator {\n%s\n}" % (exParamQual, exNoPtrParen)) else: return self._parse_declarator_name_param_qual(named, paramMode, typed) def _parse_initializer(self, outer=None): self.skip_ws() # TODO: support paren and brace initialization for memberObject if not self.skip_string('='): return None else: if outer == 'member': value = self.read_rest().strip() return ASTInitializer(value) elif outer is None: # function parameter value = self._parse_expression(end=[',', ')']) return ASTInitializer(value) else: self.fail("Internal error, initializer for outer '%s' not " "implemented." % outer) def _parse_type(self, named, outer=None): """ named=False|'maybe'|True: 'maybe' is e.g., for function objects which doesn't need to name the arguments outer == operatorCast: annoying case, we should not take the params """ if outer: # always named if outer not in ('type', 'member', 'function', 'operatorCast'): raise Exception('Internal error, unknown outer "%s".' % outer) if outer != 'operatorCast': assert named if outer in ('type', 'function'): # We allow type objects to just be a name. # Some functions don't have normal return types: constructors, # destrutors, cast operators startPos = self.pos # first try without the type try: declSpecs = self._parse_decl_specs(outer=outer, typed=False) decl = self._parse_declerator(named=True, paramMode=outer, typed=False) self.assert_end() except DefinitionError as exUntyped: self.pos = startPos try: declSpecs = self._parse_decl_specs(outer=outer) decl = self._parse_declerator(named=True, paramMode=outer) except DefinitionError as exTyped: # Retain the else branch for easier debugging. # TODO: it would be nice to save the previous stacktrace # and output it here. if True: if outer == 'type': desc = ('Type must be either just a name or a ' 'typedef-like declaration.\n' 'Just a name error: %s\n' 'Typedef-like expression error: %s') elif outer == 'function': desc = ('Error when parsing function declaration:\n' 'If no return type {\n%s\n' '} else if return type {\n%s\n}') else: assert False raise DefinitionError( desc % (exUntyped.description, exTyped.description)) else: # For testing purposes. # do it again to get the proper traceback (how do you # relieable save a traceback when an exception is # constructed?) 
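# -------------------------------------------------------------------
# Editorial note -- illustrative only, not part of the original
# source: for 'type' and 'function' objects the definition is parsed
# twice.  The first pass assumes there is no return (or underlying)
# type at all, which is what lets signatures like
#
#     .. cpp:function:: MyClass::MyClass(const MyClass&)
#     .. cpp:function:: MyClass::~MyClass()
#     .. cpp:type:: MyList
#
# (constructors, destructors, bare typedef-like names) parse at all;
# only when that attempt fails is the usual "decl-specifiers +
# declarator" form tried, and only if both attempts fail is the
# combined DefinitionError above reported.
# -------------------------------------------------------------------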
self.pos = startPos declSpecs = self._parse_decl_specs(outer=outer, typed=False) decl = self._parse_declerator(named=True, paramMode=outer, typed=False) else: paramMode = 'type' if outer == 'member': # i.e., member named = True elif outer == 'operatorCast': paramMode = 'operatorCast' outer = None declSpecs = self._parse_decl_specs(outer=outer) decl = self._parse_declerator(named=named, paramMode=paramMode) return ASTType(declSpecs, decl) def _parse_type_with_init(self, named, outer): if outer: assert outer in ('type', 'member', 'function') type = self._parse_type(outer=outer, named=named) init = self._parse_initializer(outer=outer) return ASTTypeWithInit(type, init) def _parse_class(self): classVisibility = 'public' if self.match(_visibility_re): classVisibility = self.matched_text name = self._parse_nested_name() bases = [] self.skip_ws() if self.skip_string(':'): while 1: self.skip_ws() visibility = 'private' if self.match(_visibility_re): visibility = self.matched_text baseName = self._parse_nested_name() bases.append(ASTBaseClass(baseName, visibility)) self.skip_ws() if self.skip_string(','): continue else: break return ASTClass(name, classVisibility, bases) def _parse_enum(self): scoped = None # is set by CPPEnumObject self.skip_ws() visibility = 'public' if self.match(_visibility_re): visibility = self.matched_text self.skip_ws() name = self._parse_nested_name() self.skip_ws() underlyingType = None if self.skip_string(':'): underlyingType = self._parse_type(named=False) return ASTEnum(name, visibility, scoped, underlyingType) def _parse_enumerator(self): name = self._parse_nested_name() self.skip_ws() init = None if self.skip_string('='): self.skip_ws() init = ASTInitializer(self.read_rest()) return ASTEnumerator(name, init) def parse_type_object(self): res = self._parse_type(named=True, outer='type') res.objectType = 'type' return res def parse_member_object(self): res = self._parse_type_with_init(named=True, outer='member') res.objectType = 'member' return res def parse_function_object(self): res = self._parse_type(named=True, outer='function') res.objectType = 'function' return res def parse_class_object(self): res = self._parse_class() res.objectType = 'class' return res def parse_enum_object(self): res = self._parse_enum() res.objectType = 'enum' return res def parse_enumerator_object(self): res = self._parse_enumerator() res.objectType = 'enumerator' return res def parse_namespace_object(self): res = self._parse_nested_name() res.objectType = 'namespace' return res def parse_xref_object(self): res = self._parse_nested_name() res.objectType = 'xref' return res class CPPObject(ObjectDescription): """Description of a C++ language object.""" doc_field_types = [ GroupedField('parameter', label=l_('Parameters'), names=('param', 'parameter', 'arg', 'argument'), can_collapse=True), GroupedField('exceptions', label=l_('Throws'), rolename='cpp:class', names=('throws', 'throw', 'exception'), can_collapse=True), Field('returnvalue', label=l_('Returns'), has_arg=False, names=('returns', 'return')), ] def add_target_and_index(self, ast, sig, signode): # general note: name must be lstrip(':')'ed, to remove "::" try: id_v1 = ast.get_id_v1() except NoOldIdError: id_v1 = None id_v2 = ast.get_id_v2() # store them in reverse order, so the newest is first ids = [id_v2, id_v1] theid = ids[0] ast.newestId = theid assert theid # shouldn't be None name = text_type(ast.prefixedName).lstrip(':') if theid not in self.state.document.ids: # if the name is not unique, the first one will win objects = 
self.env.domaindata['cpp']['objects'] if name not in objects: signode['names'].append(name) else: pass # print("[CPP] non-unique name:", name) for id in ids: if id: # is None when the element didn't exist in that version signode['ids'].append(id) signode['first'] = (not self.names) self.state.document.note_explicit_target(signode) if name not in objects: objects.setdefault(name, (self.env.docname, ast)) if ast.objectType == 'enumerator': # find the parent, if it exists && is an enum # && it's unscoped, # then add the name to the parent scope assert len(ast.prefixedName.names) > 0 parentPrefixedAstName = ASTNestedName(ast.prefixedName.names[:-1]) parentPrefixedName = text_type(parentPrefixedAstName).lstrip(':') if parentPrefixedName in objects: docname, parentAst = objects[parentPrefixedName] if parentAst.objectType == 'enum' and not parentAst.scoped: enumeratorName = ASTNestedName([ast.prefixedName.names[-1]]) assert len(parentAst.prefixedName.names) > 0 enumScope = ASTNestedName(parentAst.prefixedName.names[:-1]) unscopedName = enumeratorName.prefix_nested_name(enumScope) txtUnscopedName = text_type(unscopedName).lstrip(':') if txtUnscopedName not in objects: objects.setdefault(txtUnscopedName, (self.env.docname, ast)) # add the uninstantiated template if it doesn't exist uninstantiated = ast.prefixedName.get_name_no_last_template().lstrip(':') if uninstantiated != name and uninstantiated not in objects: signode['names'].append(uninstantiated) objects.setdefault(uninstantiated, (self.env.docname, ast)) indextext = self.get_index_text(name) if not re.compile(r'^[a-zA-Z0-9_]*$').match(theid): self.state_machine.reporter.warning( 'Index id generation for C++ object "%s" failed, please ' 'report as bug (id=%s).' % (text_type(ast), theid), line=self.lineno) self.indexnode['entries'].append(('single', indextext, theid, '')) def parse_definition(self, parser): raise NotImplementedError() def describe_signature(self, signode, ast, parentScope): raise NotImplementedError() def handle_signature(self, sig, signode): def set_lastname(name): parent = self.env.ref_context.get('cpp:parent') if parent and len(parent) > 0: res = name.prefix_nested_name(parent[-1]) else: res = name assert res self.env.ref_context['cpp:lastname'] = res return res parser = DefinitionParser(sig) try: ast = self.parse_definition(parser) parser.assert_end() except DefinitionError as e: self.state_machine.reporter.warning(e.description, line=self.lineno) # It is easier to assume some phony name than handling the error in # the possibly inner declarations. 
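# -------------------------------------------------------------------
# Editorial note -- illustrative only, not part of the original
# source: handle_signature() prefixes every parsed name with the
# innermost entry of env.ref_context['cpp:parent'], which is pushed
# by the class/enum directives and set by ``.. cpp:namespace::``.
# So in
#
#     .. cpp:class:: MyClass
#
#        .. cpp:function:: void clear()
#
# the function is stored under the prefixed name ``MyClass::clear``
# (stringified with the leading '::' stripped), which is also the
# key used in env.domaindata['cpp']['objects'] above.
# -------------------------------------------------------------------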
name = ASTNestedName([ ASTNestedNameElement("PhonyNameDueToError", None) ]) set_lastname(name) raise ValueError ast.prefixedName = set_lastname(ast.name) assert ast.prefixedName self.describe_signature(signode, ast, parentScope=ast.prefixedName) return ast class CPPTypeObject(CPPObject): def get_index_text(self, name): return _('%s (C++ type)') % name def parse_definition(self, parser): return parser.parse_type_object() def describe_signature(self, signode, ast, parentScope): signode += addnodes.desc_annotation('type ', 'type ') ast.describe_signature(signode, 'lastIsName', self.env, parentScope=parentScope) class CPPMemberObject(CPPObject): def get_index_text(self, name): return _('%s (C++ member)') % name def parse_definition(self, parser): return parser.parse_member_object() def describe_signature(self, signode, ast, parentScope): ast.describe_signature(signode, 'lastIsName', self.env, parentScope=parentScope) class CPPFunctionObject(CPPObject): def get_index_text(self, name): return _('%s (C++ function)') % name def parse_definition(self, parser): return parser.parse_function_object() def describe_signature(self, signode, ast, parentScope): ast.describe_signature(signode, 'lastIsName', self.env, parentScope=parentScope) class CPPClassObject(CPPObject): def get_index_text(self, name): return _('%s (C++ class)') % name def before_content(self): lastname = self.env.ref_context['cpp:lastname'] assert lastname if 'cpp:parent' in self.env.ref_context: self.env.ref_context['cpp:parent'].append(lastname) else: self.env.ref_context['cpp:parent'] = [lastname] def after_content(self): self.env.ref_context['cpp:parent'].pop() def parse_definition(self, parser): return parser.parse_class_object() def describe_signature(self, signode, ast, parentScope): signode += addnodes.desc_annotation('class ', 'class ') ast.describe_signature(signode, 'lastIsName', self.env, parentScope=parentScope) class CPPEnumObject(CPPObject): def get_index_text(self, name): return _('%s (C++ enum)') % name def before_content(self): lastname = self.env.ref_context['cpp:lastname'] assert lastname if 'cpp:parent' in self.env.ref_context: self.env.ref_context['cpp:parent'].append(lastname) else: self.env.ref_context['cpp:parent'] = [lastname] def after_content(self): self.env.ref_context['cpp:parent'].pop() def parse_definition(self, parser): ast = parser.parse_enum_object() # self.objtype is set by ObjectDescription in run() if self.objtype == "enum": ast.scoped = None elif self.objtype == "enum-struct": ast.scoped = "struct" elif self.objtype == "enum-class": ast.scoped = "class" else: assert False return ast def describe_signature(self, signode, ast, parentScope): prefix = 'enum ' if ast.scoped: prefix += ast.scoped prefix += ' ' signode += addnodes.desc_annotation(prefix, prefix) ast.describe_signature(signode, 'lastIsName', self.env, parentScope=parentScope) class CPPEnumeratorObject(CPPObject): def get_index_text(self, name): return _('%s (C++ enumerator)') % name def parse_definition(self, parser): return parser.parse_enumerator_object() def describe_signature(self, signode, ast, parentScope): signode += addnodes.desc_annotation('enumerator ', 'enumerator ') ast.describe_signature(signode, 'lastIsName', self.env, parentScope=parentScope) class CPPNamespaceObject(Directive): """ This directive is just to tell Sphinx that we're documenting stuff in namespace foo. 
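    Illustrative usage (an editorial addition, not part of the original
    docstring)::

        .. cpp:namespace:: foo::bar

    switches the current scope to ``foo::bar``, while an argument of
    ``NULL``, ``0`` or ``nullptr`` resets it to the global scope (see
    ``run()`` below).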
""" has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): env = self.state.document.settings.env if self.arguments[0].strip() in ('NULL', '0', 'nullptr'): env.ref_context['cpp:parent'] = [] else: parser = DefinitionParser(self.arguments[0]) try: prefix = parser.parse_namespace_object() parser.assert_end() except DefinitionError as e: self.state_machine.reporter.warning(e.description, line=self.lineno) else: env.ref_context['cpp:parent'] = [prefix] return [] class CPPXRefRole(XRefRole): def process_link(self, env, refnode, has_explicit_title, title, target): parent = env.ref_context.get('cpp:parent') if parent: refnode['cpp:parent'] = parent[:] if refnode['reftype'] == 'any': # Remove parentheses from the target (not from title) title, target = self._fix_parens(env, True, title, target) if not has_explicit_title: target = target.lstrip('~') # only has a meaning for the title # if the first character is a tilde, don't display the module/class # parts of the contents if title[:1] == '~': title = title[1:] dcolon = title.rfind('::') if dcolon != -1: title = title[dcolon + 2:] return title, target class CPPDomain(Domain): """C++ language domain.""" name = 'cpp' label = 'C++' object_types = { 'class': ObjType(l_('class'), 'class'), 'function': ObjType(l_('function'), 'func'), 'member': ObjType(l_('member'), 'member'), 'type': ObjType(l_('type'), 'type'), 'enum': ObjType(l_('enum'), 'enum'), 'enumerator': ObjType(l_('enumerator'), 'enumerator') } directives = { 'class': CPPClassObject, 'function': CPPFunctionObject, 'member': CPPMemberObject, 'var': CPPMemberObject, 'type': CPPTypeObject, 'enum': CPPEnumObject, 'enum-struct': CPPEnumObject, 'enum-class': CPPEnumObject, 'enumerator': CPPEnumeratorObject, 'namespace': CPPNamespaceObject } roles = { 'any': CPPXRefRole(), 'class': CPPXRefRole(), 'func': CPPXRefRole(fix_parens=True), 'member': CPPXRefRole(), 'var': CPPXRefRole(), 'type': CPPXRefRole(), 'enum': CPPXRefRole(), 'enumerator': CPPXRefRole() } initial_data = { 'objects': {}, # prefixedName -> (docname, ast) } def clear_doc(self, docname): for fullname, data in list(self.data['objects'].items()): if data[0] == docname: del self.data['objects'][fullname] def merge_domaindata(self, docnames, otherdata): # XXX check duplicates for fullname, data in otherdata['objects'].items(): if data[0] in docnames: self.data['objects'][fullname] = data def _resolve_xref_inner(self, env, fromdocname, builder, target, node, contnode, warn=True): def _create_refnode(nameAst): name = text_type(nameAst).lstrip(':') if name not in self.data['objects']: # try dropping the last template name = nameAst.get_name_no_last_template() if name not in self.data['objects']: return None, None docname, ast = self.data['objects'][name] if node['reftype'] == 'any' and ast.objectType == 'function': title = contnode.pop(0).astext() if title.endswith('()'): # remove parentheses title = title[:-2] if env.config.add_function_parentheses: # add them back to all occurrences if configured title += '()' contnode.insert(0, nodes.Text(title)) return make_refnode(builder, fromdocname, docname, ast.newestId, contnode, name), ast.objectType parser = DefinitionParser(target) try: nameAst = parser.parse_xref_object().name parser.skip_ws() if not parser.eof: raise DefinitionError('') except DefinitionError: if warn: env.warn_node('unparseable C++ definition: %r' % target, node) return None, None # If a name starts with ::, then it is global scope, so if # parent[0] = 
A::B # parent[1] = ::C # then we should look up in ::C, and in :: # Therefore, use only the last name as the basis of lookup. parent = node.get('cpp:parent', None) if parent and len(parent) > 0: parentScope = parent[-1].clone() else: # env.warn_node("C++ xref has no 'parent' set: %s" % target, node) parentScope = ASTNestedName([ASTNestedNameElementEmpty()]) while len(parentScope.names) > 0: name = nameAst.prefix_nested_name(parentScope) res = _create_refnode(name) if res[0]: return res parentScope.names.pop() # finally try in global scope (we might have done that already though) return _create_refnode(nameAst) def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): return self._resolve_xref_inner(env, fromdocname, builder, target, node, contnode)[0] def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): node, objtype = self._resolve_xref_inner(env, fromdocname, builder, target, node, contnode, warn=False) if node: return [('cpp:' + self.role_for_objtype(objtype), node)] return [] def get_objects(self): for refname, (docname, ast) in iteritems(self.data['objects']): yield (refname, refname, ast.objectType, docname, ast.newestId, 1) Sphinx-1.3.6/sphinx/domains/__init__.py0000664000175000017500000002643112660074372021213 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.domains ~~~~~~~~~~~~~~ Support for domains, which are groupings of description directives and roles describing e.g. constructs of one programming language. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from six import iteritems from sphinx.errors import SphinxError from sphinx.locale import _ class ObjType(object): """ An ObjType is the description for a type of object that a domain can document. In the object_types attribute of Domain subclasses, object type names are mapped to instances of this class. Constructor arguments: - *lname*: localized name of the type (do not include domain name) - *roles*: all the roles that can refer to an object of this type - *attrs*: object attributes -- currently only "searchprio" is known, which defines the object's priority in the full-text search index, see :meth:`Domain.get_objects()`. """ known_attrs = { 'searchprio': 1, } def __init__(self, lname, *roles, **attrs): self.lname = lname self.roles = roles self.attrs = self.known_attrs.copy() self.attrs.update(attrs) class Index(object): """ An Index is the description for a domain-specific index. To add an index to a domain, subclass Index, overriding the three name attributes: * `name` is an identifier used for generating file names. * `localname` is the section title for the index. * `shortname` is a short name for the index, for use in the relation bar in HTML output. Can be empty to disable entries in the relation bar. and providing a :meth:`generate()` method. Then, add the index class to your domain's `indices` list. Extensions can add indices to existing domains using :meth:`~sphinx.application.Sphinx.add_index_to_domain()`. """ name = None localname = None shortname = None def __init__(self, domain): if self.name is None or self.localname is None: raise SphinxError('Index subclass %s has no valid name or localname' % self.__class__.__name__) self.domain = domain def generate(self, docnames=None): """Return entries for the index given by *name*. If *docnames* is given, restrict to entries referring to these docnames. 
The return value is a tuple of ``(content, collapse)``, where *collapse* is a boolean that determines if sub-entries should start collapsed (for output formats that support collapsing sub-entries). *content* is a sequence of ``(letter, entries)`` tuples, where *letter* is the "heading" for the given *entries*, usually the starting letter. *entries* is a sequence of single entries, where a single entry is a sequence ``[name, subtype, docname, anchor, extra, qualifier, descr]``. The items in this sequence have the following meaning: - `name` -- the name of the index entry to be displayed - `subtype` -- sub-entry related type: 0 -- normal entry 1 -- entry with sub-entries 2 -- sub-entry - `docname` -- docname where the entry is located - `anchor` -- anchor for the entry within `docname` - `extra` -- extra info for the entry - `qualifier` -- qualifier for the description - `descr` -- description for the entry Qualifier and description are not rendered e.g. in LaTeX output. """ return [] class Domain(object): """ A Domain is meant to be a group of "object" description directives for objects of a similar nature, and corresponding roles to create references to them. Examples would be Python modules, classes, functions etc., elements of a templating language, Sphinx roles and directives, etc. Each domain has a separate storage for information about existing objects and how to reference them in `self.data`, which must be a dictionary. It also must implement several functions that expose the object information in a uniform way to parts of Sphinx that allow the user to reference or search for objects in a domain-agnostic way. About `self.data`: since all object and cross-referencing information is stored on a BuildEnvironment instance, the `domain.data` object is also stored in the `env.domaindata` dict under the key `domain.name`. Before the build process starts, every active domain is instantiated and given the environment object; the `domaindata` dict must then either be nonexistent or a dictionary whose 'version' key is equal to the domain class' :attr:`data_version` attribute. Otherwise, `IOError` is raised and the pickled environment is discarded. 
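As an illustration only (the names below are hypothetical, not part of Sphinx), a minimal domain that keeps its objects in `self.data` might be sketched like this::

    from sphinx.domains import Domain, ObjType
    from sphinx.locale import l_

    class RecipeDomain(Domain):
        # a made-up domain, shown only to illustrate the data handling
        name = 'recipe'
        label = 'Recipe'
        object_types = {'recipe': ObjType(l_('recipe'), 'ref')}
        initial_data = {'objects': {}}   # fullname -> (docname, targetid)
        data_version = 0

        def clear_doc(self, docname):
            # drop everything recorded for a document that is re-read
            for fullname, (fn, _id) in list(self.data['objects'].items()):
                if fn == docname:
                    del self.data['objects'][fullname]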
""" #: domain name: should be short, but unique name = '' #: domain label: longer, more descriptive (used in messages) label = '' #: type (usually directive) name -> ObjType instance object_types = {} #: directive name -> directive class directives = {} #: role name -> role callable roles = {} #: a list of Index subclasses indices = [] #: role name -> a warning message if reference is missing dangling_warnings = {} #: data value for a fresh environment initial_data = {} #: data version, bump this when the format of `self.data` changes data_version = 0 def __init__(self, env): self.env = env if self.name not in env.domaindata: assert isinstance(self.initial_data, dict) new_data = self.initial_data.copy() new_data['version'] = self.data_version self.data = env.domaindata[self.name] = new_data else: self.data = env.domaindata[self.name] if self.data['version'] != self.data_version: raise IOError('data of %r domain out of date' % self.label) self._role_cache = {} self._directive_cache = {} self._role2type = {} self._type2role = {} for name, obj in iteritems(self.object_types): for rolename in obj.roles: self._role2type.setdefault(rolename, []).append(name) self._type2role[name] = obj.roles[0] if obj.roles else '' self.objtypes_for_role = self._role2type.get self.role_for_objtype = self._type2role.get def role(self, name): """Return a role adapter function that always gives the registered role its full name ('domain:name') as the first argument. """ if name in self._role_cache: return self._role_cache[name] if name not in self.roles: return None fullname = '%s:%s' % (self.name, name) def role_adapter(typ, rawtext, text, lineno, inliner, options={}, content=[]): return self.roles[name](fullname, rawtext, text, lineno, inliner, options, content) self._role_cache[name] = role_adapter return role_adapter def directive(self, name): """Return a directive adapter class that always gives the registered directive its full name ('domain:name') as ``self.name``. """ if name in self._directive_cache: return self._directive_cache[name] if name not in self.directives: return None fullname = '%s:%s' % (self.name, name) BaseDirective = self.directives[name] class DirectiveAdapter(BaseDirective): def run(self): self.name = fullname return BaseDirective.run(self) self._directive_cache[name] = DirectiveAdapter return DirectiveAdapter # methods that should be overwritten def clear_doc(self, docname): """Remove traces of a document in the domain-specific inventories.""" pass def merge_domaindata(self, docnames, otherdata): """Merge in data regarding *docnames* from a different domaindata inventory (coming from a subprocess in parallel builds). """ raise NotImplementedError('merge_domaindata must be implemented in %s ' 'to be able to do parallel builds!' % self.__class__) def process_doc(self, env, docname, document): """Process a document after it is read by the environment.""" pass def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): """Resolve the pending_xref *node* with the given *typ* and *target*. This method should return a new node, to replace the xref node, containing the *contnode* which is the markup content of the cross-reference. If no resolution can be found, None can be returned; the xref node will then given to the 'missing-reference' event, and if that yields no resolution, replaced by *contnode*. The method can also raise :exc:`sphinx.environment.NoUri` to suppress the 'missing-reference' event being emitted. 
""" pass def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): """Resolve the pending_xref *node* with the given *target*. The reference comes from an "any" or similar role, which means that we don't know the type. Otherwise, the arguments are the same as for :meth:`resolve_xref`. The method must return a list (potentially empty) of tuples ``('domain:role', newnode)``, where ``'domain:role'`` is the name of a role that could have created the same reference, e.g. ``'py:func'``. ``newnode`` is what :meth:`resolve_xref` would return. .. versionadded:: 1.3 """ raise NotImplementedError def get_objects(self): """Return an iterable of "object descriptions", which are tuples with five items: * `name` -- fully qualified name * `dispname` -- name to display when searching/linking * `type` -- object type, a key in ``self.object_types`` * `docname` -- the document where it is to be found * `anchor` -- the anchor name for the object * `priority` -- how "important" the object is (determines placement in search results) - 1: default priority (placed before full-text matches) - 0: object is important (placed before default-priority objects) - 2: object is unimportant (placed after full-text matches) - -1: object should not show up in search at all """ return [] def get_type_name(self, type, primary=False): """Return full name for given ObjType.""" if primary: return type.lname return _('%s %s') % (self.label, type.lname) from sphinx.domains.c import CDomain # noqa from sphinx.domains.cpp import CPPDomain # noqa from sphinx.domains.std import StandardDomain # noqa from sphinx.domains.python import PythonDomain # noqa from sphinx.domains.javascript import JavaScriptDomain # noqa from sphinx.domains.rst import ReSTDomain # noqa BUILTIN_DOMAINS = { 'std': StandardDomain, 'py': PythonDomain, 'c': CDomain, 'cpp': CPPDomain, 'js': JavaScriptDomain, 'rst': ReSTDomain, } Sphinx-1.3.6/sphinx/domains/rst.py0000664000175000017500000001206412665045070020257 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.domains.rst ~~~~~~~~~~~~~~~~~~ The reStructuredText domain. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ import re from six import iteritems from sphinx import addnodes from sphinx.domains import Domain, ObjType from sphinx.locale import l_, _ from sphinx.directives import ObjectDescription from sphinx.roles import XRefRole from sphinx.util.nodes import make_refnode dir_sig_re = re.compile(r'\.\. (.+?)::(.*)$') class ReSTMarkup(ObjectDescription): """ Description of generic reST markup. 
""" def add_target_and_index(self, name, sig, signode): targetname = self.objtype + '-' + name if targetname not in self.state.document.ids: signode['names'].append(targetname) signode['ids'].append(targetname) signode['first'] = (not self.names) self.state.document.note_explicit_target(signode) objects = self.env.domaindata['rst']['objects'] key = (self.objtype, name) if key in objects: self.state_machine.reporter.warning( 'duplicate description of %s %s, ' % (self.objtype, name) + 'other instance in ' + self.env.doc2path(objects[key]), line=self.lineno) objects[key] = self.env.docname indextext = self.get_index_text(self.objtype, name) if indextext: self.indexnode['entries'].append(('single', indextext, targetname, '')) def get_index_text(self, objectname, name): if self.objtype == 'directive': return _('%s (directive)') % name elif self.objtype == 'role': return _('%s (role)') % name return '' def parse_directive(d): """Parse a directive signature. Returns (directive, arguments) string tuple. If no arguments are given, returns (directive, ''). """ dir = d.strip() if not dir.startswith('.'): # Assume it is a directive without syntax return (dir, '') m = dir_sig_re.match(dir) if not m: return (dir, '') parsed_dir, parsed_args = m.groups() return (parsed_dir.strip(), ' ' + parsed_args.strip()) class ReSTDirective(ReSTMarkup): """ Description of a reST directive. """ def handle_signature(self, sig, signode): name, args = parse_directive(sig) desc_name = '.. %s::' % name signode += addnodes.desc_name(desc_name, desc_name) if len(args) > 0: signode += addnodes.desc_addname(args, args) return name class ReSTRole(ReSTMarkup): """ Description of a reST role. """ def handle_signature(self, sig, signode): signode += addnodes.desc_name(':%s:' % sig, ':%s:' % sig) return sig class ReSTDomain(Domain): """ReStructuredText domain.""" name = 'rst' label = 'reStructuredText' object_types = { 'directive': ObjType(l_('directive'), 'dir'), 'role': ObjType(l_('role'), 'role'), } directives = { 'directive': ReSTDirective, 'role': ReSTRole, } roles = { 'dir': XRefRole(), 'role': XRefRole(), } initial_data = { 'objects': {}, # fullname -> docname, objtype } def clear_doc(self, docname): for (typ, name), doc in list(self.data['objects'].items()): if doc == docname: del self.data['objects'][typ, name] def merge_domaindata(self, docnames, otherdata): # XXX check duplicates for (typ, name), doc in otherdata['objects'].items(): if doc in docnames: self.data['objects'][typ, name] = doc def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): objects = self.data['objects'] objtypes = self.objtypes_for_role(typ) for objtype in objtypes: if (objtype, target) in objects: return make_refnode(builder, fromdocname, objects[objtype, target], objtype + '-' + target, contnode, target + ' ' + objtype) def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): objects = self.data['objects'] results = [] for objtype in self.object_types: if (objtype, target) in self.data['objects']: results.append(('rst:' + self.role_for_objtype(objtype), make_refnode(builder, fromdocname, objects[objtype, target], objtype + '-' + target, contnode, target + ' ' + objtype))) return results def get_objects(self): for (typ, name), docname in iteritems(self.data['objects']): yield name, name, typ, docname, typ + '-' + name, 1 Sphinx-1.3.6/sphinx/domains/javascript.py0000664000175000017500000002100612665045070021611 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.domains.javascript 
~~~~~~~~~~~~~~~~~~~~~~~~~ The JavaScript domain. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. """ from sphinx import addnodes from sphinx.domains import Domain, ObjType from sphinx.locale import l_, _ from sphinx.directives import ObjectDescription from sphinx.roles import XRefRole from sphinx.domains.python import _pseudo_parse_arglist from sphinx.util.nodes import make_refnode from sphinx.util.docfields import Field, GroupedField, TypedField class JSObject(ObjectDescription): """ Description of a JavaScript object. """ #: If set to ``True`` this object is callable and a `desc_parameterlist` is #: added has_arguments = False #: what is displayed right before the documentation entry display_prefix = None def handle_signature(self, sig, signode): sig = sig.strip() if '(' in sig and sig[-1:] == ')': prefix, arglist = sig.split('(', 1) prefix = prefix.strip() arglist = arglist[:-1].strip() else: prefix = sig arglist = None if '.' in prefix: nameprefix, name = prefix.rsplit('.', 1) else: nameprefix = None name = prefix objectname = self.env.ref_context.get('js:object') if nameprefix: if objectname: # someone documenting the method of an attribute of the current # object? shouldn't happen but who knows... nameprefix = objectname + '.' + nameprefix fullname = nameprefix + '.' + name elif objectname: fullname = objectname + '.' + name else: # just a function or constructor objectname = '' fullname = name signode['object'] = objectname signode['fullname'] = fullname if self.display_prefix: signode += addnodes.desc_annotation(self.display_prefix, self.display_prefix) if nameprefix: signode += addnodes.desc_addname(nameprefix + '.', nameprefix + '.') signode += addnodes.desc_name(name, name) if self.has_arguments: if not arglist: signode += addnodes.desc_parameterlist() else: _pseudo_parse_arglist(signode, arglist) return fullname, nameprefix def add_target_and_index(self, name_obj, sig, signode): objectname = self.options.get( 'object', self.env.ref_context.get('js:object')) fullname = name_obj[0] if fullname not in self.state.document.ids: signode['names'].append(fullname) signode['ids'].append(fullname.replace('$', '_S_')) signode['first'] = not self.names self.state.document.note_explicit_target(signode) objects = self.env.domaindata['js']['objects'] if fullname in objects: self.state_machine.reporter.warning( 'duplicate object description of %s, ' % fullname + 'other instance in ' + self.env.doc2path(objects[fullname][0]), line=self.lineno) objects[fullname] = self.env.docname, self.objtype indextext = self.get_index_text(objectname, name_obj) if indextext: self.indexnode['entries'].append(('single', indextext, fullname.replace('$', '_S_'), '')) def get_index_text(self, objectname, name_obj): name, obj = name_obj if self.objtype == 'function': if not obj: return _('%s() (built-in function)') % name return _('%s() (%s method)') % (name, obj) elif self.objtype == 'class': return _('%s() (class)') % name elif self.objtype == 'data': return _('%s (global variable or constant)') % name elif self.objtype == 'attribute': return _('%s (%s attribute)') % (name, obj) return '' class JSCallable(JSObject): """Description of a JavaScript function, method or constructor.""" has_arguments = True doc_field_types = [ TypedField('arguments', label=l_('Arguments'), names=('argument', 'arg', 'parameter', 'param'), typerolename='func', typenames=('paramtype', 'type')), GroupedField('errors', label=l_('Throws'), rolename='err', names=('throws', ), 
can_collapse=True), Field('returnvalue', label=l_('Returns'), has_arg=False, names=('returns', 'return')), Field('returntype', label=l_('Return type'), has_arg=False, names=('rtype',)), ] class JSConstructor(JSCallable): """Like a callable but with a different prefix.""" display_prefix = 'class ' class JSXRefRole(XRefRole): def process_link(self, env, refnode, has_explicit_title, title, target): # basically what sphinx.domains.python.PyXRefRole does refnode['js:object'] = env.ref_context.get('js:object') if not has_explicit_title: title = title.lstrip('.') target = target.lstrip('~') if title[0:1] == '~': title = title[1:] dot = title.rfind('.') if dot != -1: title = title[dot+1:] if target[0:1] == '.': target = target[1:] refnode['refspecific'] = True return title, target class JavaScriptDomain(Domain): """JavaScript language domain.""" name = 'js' label = 'JavaScript' # if you add a new object type make sure to edit JSObject.get_index_string object_types = { 'function': ObjType(l_('function'), 'func'), 'class': ObjType(l_('class'), 'class'), 'data': ObjType(l_('data'), 'data'), 'attribute': ObjType(l_('attribute'), 'attr'), } directives = { 'function': JSCallable, 'class': JSConstructor, 'data': JSObject, 'attribute': JSObject, } roles = { 'func': JSXRefRole(fix_parens=True), 'class': JSXRefRole(fix_parens=True), 'data': JSXRefRole(), 'attr': JSXRefRole(), } initial_data = { 'objects': {}, # fullname -> docname, objtype } def clear_doc(self, docname): for fullname, (fn, _l) in list(self.data['objects'].items()): if fn == docname: del self.data['objects'][fullname] def merge_domaindata(self, docnames, otherdata): # XXX check duplicates for fullname, (fn, objtype) in otherdata['objects'].items(): if fn in docnames: self.data['objects'][fullname] = (fn, objtype) def find_obj(self, env, obj, name, typ, searchorder=0): if name[-2:] == '()': name = name[:-2] objects = self.data['objects'] newname = None if searchorder == 1: if obj and obj + '.' + name in objects: newname = obj + '.' + name else: newname = name else: if name in objects: newname = name elif obj and obj + '.' + name in objects: newname = obj + '.' + name return newname, objects.get(newname) def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): objectname = node.get('js:object') searchorder = node.hasattr('refspecific') and 1 or 0 name, obj = self.find_obj(env, objectname, target, typ, searchorder) if not obj: return None return make_refnode(builder, fromdocname, obj[0], name.replace('$', '_S_'), contnode, name) def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): objectname = node.get('js:object') name, obj = self.find_obj(env, objectname, target, None, 1) if not obj: return [] return [('js:' + self.role_for_objtype(obj[1]), make_refnode(builder, fromdocname, obj[0], name.replace('$', '_S_'), contnode, name))] def get_objects(self): for refname, (docname, type) in list(self.data['objects'].items()): yield refname, refname, type, docname, \ refname.replace('$', '_S_'), 1 Sphinx-1.3.6/sphinx/domains/python.py0000664000175000017500000007123412665045070020774 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.domains.python ~~~~~~~~~~~~~~~~~~~~~ The Python domain. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. 
""" import re from six import iteritems from docutils import nodes from docutils.parsers.rst import directives from sphinx import addnodes from sphinx.roles import XRefRole from sphinx.locale import l_, _ from sphinx.domains import Domain, ObjType, Index from sphinx.directives import ObjectDescription from sphinx.util.nodes import make_refnode from sphinx.util.compat import Directive from sphinx.util.docfields import Field, GroupedField, TypedField # REs for Python signatures py_sig_re = re.compile( r'''^ ([\w.]*\.)? # class name(s) (\w+) \s* # thing name (?: \((.*)\) # optional: arguments (?:\s* -> \s* (.*))? # return annotation )? $ # and nothing more ''', re.VERBOSE) def _pseudo_parse_arglist(signode, arglist): """"Parse" a list of arguments separated by commas. Arguments can have "optional" annotations given by enclosing them in brackets. Currently, this will split at any comma, even if it's inside a string literal (e.g. default argument value). """ paramlist = addnodes.desc_parameterlist() stack = [paramlist] try: for argument in arglist.split(','): argument = argument.strip() ends_open = ends_close = 0 while argument.startswith('['): stack.append(addnodes.desc_optional()) stack[-2] += stack[-1] argument = argument[1:].strip() while argument.startswith(']'): stack.pop() argument = argument[1:].strip() while argument.endswith(']') and not argument.endswith('[]'): ends_close += 1 argument = argument[:-1].strip() while argument.endswith('['): ends_open += 1 argument = argument[:-1].strip() if argument: stack[-1] += addnodes.desc_parameter(argument, argument) while ends_open: stack.append(addnodes.desc_optional()) stack[-2] += stack[-1] ends_open -= 1 while ends_close: stack.pop() ends_close -= 1 if len(stack) != 1: raise IndexError except IndexError: # if there are too few or too many elements on the stack, just give up # and treat the whole argument list as one argument, discarding the # already partially populated paramlist node signode += addnodes.desc_parameterlist() signode[-1] += addnodes.desc_parameter(arglist, arglist) else: signode += paramlist # This override allows our inline type specifiers to behave like :class: link # when it comes to handling "." and "~" prefixes. class PyXrefMixin(object): def make_xref(self, rolename, domain, target, innernode=nodes.emphasis, contnode=None): result = super(PyXrefMixin, self).make_xref(rolename, domain, target, innernode, contnode) result['refspecific'] = True if target.startswith(('.', '~')): prefix, result['reftarget'] = target[0], target[1:] if prefix == '.': text = target[1:] elif prefix == '~': text = target.split('.')[-1] for node in result.traverse(nodes.Text): node.parent[node.parent.index(node)] = nodes.Text(text) break return result class PyField(PyXrefMixin, Field): pass class PyTypedField(PyXrefMixin, TypedField): pass class PyObject(ObjectDescription): """ Description of a general Python object. 
""" option_spec = { 'noindex': directives.flag, 'module': directives.unchanged, 'annotation': directives.unchanged, } doc_field_types = [ PyTypedField('parameter', label=l_('Parameters'), names=('param', 'parameter', 'arg', 'argument', 'keyword', 'kwarg', 'kwparam'), typerolename='obj', typenames=('paramtype', 'type'), can_collapse=True), PyTypedField('variable', label=l_('Variables'), rolename='obj', names=('var', 'ivar', 'cvar'), typerolename='obj', typenames=('vartype',), can_collapse=True), GroupedField('exceptions', label=l_('Raises'), rolename='exc', names=('raises', 'raise', 'exception', 'except'), can_collapse=True), Field('returnvalue', label=l_('Returns'), has_arg=False, names=('returns', 'return')), PyField('returntype', label=l_('Return type'), has_arg=False, names=('rtype',), bodyrolename='obj'), ] def get_signature_prefix(self, sig): """May return a prefix to put before the object name in the signature. """ return '' def needs_arglist(self): """May return true if an empty argument list is to be generated even if the document contains none. """ return False def handle_signature(self, sig, signode): """Transform a Python signature into RST nodes. Return (fully qualified name of the thing, classname if any). If inside a class, the current class name is handled intelligently: * it is stripped from the displayed name if present * it is added to the full name (return value) if not present """ m = py_sig_re.match(sig) if m is None: raise ValueError name_prefix, name, arglist, retann = m.groups() # determine module and class name (if applicable), as well as full name modname = self.options.get( 'module', self.env.ref_context.get('py:module')) classname = self.env.ref_context.get('py:class') if classname: add_module = False if name_prefix and name_prefix.startswith(classname): fullname = name_prefix + name # class name is given again in the signature name_prefix = name_prefix[len(classname):].lstrip('.') elif name_prefix: # class name is given in the signature, but different # (shouldn't happen) fullname = classname + '.' + name_prefix + name else: # class name is not given in the signature fullname = classname + '.' + name else: add_module = True if name_prefix: classname = name_prefix.rstrip('.') fullname = name_prefix + name else: classname = '' fullname = name signode['module'] = modname signode['class'] = classname signode['fullname'] = fullname sig_prefix = self.get_signature_prefix(sig) if sig_prefix: signode += addnodes.desc_annotation(sig_prefix, sig_prefix) if name_prefix: signode += addnodes.desc_addname(name_prefix, name_prefix) # exceptions are a special case, since they are documented in the # 'exceptions' module. elif add_module and self.env.config.add_module_names: modname = self.options.get( 'module', self.env.ref_context.get('py:module')) if modname and modname != 'exceptions': nodetext = modname + '.' 
signode += addnodes.desc_addname(nodetext, nodetext) anno = self.options.get('annotation') signode += addnodes.desc_name(name, name) if not arglist: if self.needs_arglist(): # for callables, add an empty parameter list signode += addnodes.desc_parameterlist() if retann: signode += addnodes.desc_returns(retann, retann) if anno: signode += addnodes.desc_annotation(' ' + anno, ' ' + anno) return fullname, name_prefix _pseudo_parse_arglist(signode, arglist) if retann: signode += addnodes.desc_returns(retann, retann) if anno: signode += addnodes.desc_annotation(' ' + anno, ' ' + anno) return fullname, name_prefix def get_index_text(self, modname, name): """Return the text for the index entry of the object.""" raise NotImplementedError('must be implemented in subclasses') def add_target_and_index(self, name_cls, sig, signode): modname = self.options.get( 'module', self.env.ref_context.get('py:module')) fullname = (modname and modname + '.' or '') + name_cls[0] # note target if fullname not in self.state.document.ids: signode['names'].append(fullname) signode['ids'].append(fullname) signode['first'] = (not self.names) self.state.document.note_explicit_target(signode) objects = self.env.domaindata['py']['objects'] if fullname in objects: self.state_machine.reporter.warning( 'duplicate object description of %s, ' % fullname + 'other instance in ' + self.env.doc2path(objects[fullname][0]) + ', use :noindex: for one of them', line=self.lineno) objects[fullname] = (self.env.docname, self.objtype) indextext = self.get_index_text(modname, name_cls) if indextext: self.indexnode['entries'].append(('single', indextext, fullname, '')) def before_content(self): # needed for automatic qualification of members (reset in subclasses) self.clsname_set = False def after_content(self): if self.clsname_set: self.env.ref_context.pop('py:class', None) class PyModulelevel(PyObject): """ Description of an object on module level (functions, data). """ def needs_arglist(self): return self.objtype == 'function' def get_index_text(self, modname, name_cls): if self.objtype == 'function': if not modname: return _('%s() (built-in function)') % name_cls[0] return _('%s() (in module %s)') % (name_cls[0], modname) elif self.objtype == 'data': if not modname: return _('%s (built-in variable)') % name_cls[0] return _('%s (in module %s)') % (name_cls[0], modname) else: return '' class PyClasslike(PyObject): """ Description of a class-like object (classes, interfaces, exceptions). """ def get_signature_prefix(self, sig): return self.objtype + ' ' def get_index_text(self, modname, name_cls): if self.objtype == 'class': if not modname: return _('%s (built-in class)') % name_cls[0] return _('%s (class in %s)') % (name_cls[0], modname) elif self.objtype == 'exception': return name_cls[0] else: return '' def before_content(self): PyObject.before_content(self) if self.names: self.env.ref_context['py:class'] = self.names[0][0] self.clsname_set = True class PyClassmember(PyObject): """ Description of a class member (methods, attributes). 
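The text produced by `get_index_text` depends on the object type and on the `add_module_names` configuration value. For example (hypothetical names), documenting the method `Bar.baz` while the current module is `foo` yields roughly::

    'baz() (foo.Bar method)'         # objtype 'method', add_module_names on
    'baz() (Bar method)'             # objtype 'method', add_module_names off
    'baz() (foo.Bar static method)'  # objtype 'staticmethod'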
""" def needs_arglist(self): return self.objtype.endswith('method') def get_signature_prefix(self, sig): if self.objtype == 'staticmethod': return 'static ' elif self.objtype == 'classmethod': return 'classmethod ' return '' def get_index_text(self, modname, name_cls): name, cls = name_cls add_modules = self.env.config.add_module_names if self.objtype == 'method': try: clsname, methname = name.rsplit('.', 1) except ValueError: if modname: return _('%s() (in module %s)') % (name, modname) else: return '%s()' % name if modname and add_modules: return _('%s() (%s.%s method)') % (methname, modname, clsname) else: return _('%s() (%s method)') % (methname, clsname) elif self.objtype == 'staticmethod': try: clsname, methname = name.rsplit('.', 1) except ValueError: if modname: return _('%s() (in module %s)') % (name, modname) else: return '%s()' % name if modname and add_modules: return _('%s() (%s.%s static method)') % (methname, modname, clsname) else: return _('%s() (%s static method)') % (methname, clsname) elif self.objtype == 'classmethod': try: clsname, methname = name.rsplit('.', 1) except ValueError: if modname: return _('%s() (in module %s)') % (name, modname) else: return '%s()' % name if modname: return _('%s() (%s.%s class method)') % (methname, modname, clsname) else: return _('%s() (%s class method)') % (methname, clsname) elif self.objtype == 'attribute': try: clsname, attrname = name.rsplit('.', 1) except ValueError: if modname: return _('%s (in module %s)') % (name, modname) else: return name if modname and add_modules: return _('%s (%s.%s attribute)') % (attrname, modname, clsname) else: return _('%s (%s attribute)') % (attrname, clsname) else: return '' def before_content(self): PyObject.before_content(self) lastname = self.names and self.names[-1][1] if lastname and not self.env.ref_context.get('py:class'): self.env.ref_context['py:class'] = lastname.strip('.') self.clsname_set = True class PyDecoratorMixin(object): """ Mixin for decorator directives. """ def handle_signature(self, sig, signode): ret = super(PyDecoratorMixin, self).handle_signature(sig, signode) signode.insert(0, addnodes.desc_addname('@', '@')) return ret def needs_arglist(self): return False class PyDecoratorFunction(PyDecoratorMixin, PyModulelevel): """ Directive to mark functions meant to be used as decorators. """ def run(self): # a decorator function is a function after all self.name = 'py:function' return PyModulelevel.run(self) class PyDecoratorMethod(PyDecoratorMixin, PyClassmember): """ Directive to mark methods meant to be used as decorators. """ def run(self): self.name = 'py:method' return PyClassmember.run(self) class PyModule(Directive): """ Directive to mark description of a new module. 
""" has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = False option_spec = { 'platform': lambda x: x, 'synopsis': lambda x: x, 'noindex': directives.flag, 'deprecated': directives.flag, } def run(self): env = self.state.document.settings.env modname = self.arguments[0].strip() noindex = 'noindex' in self.options env.ref_context['py:module'] = modname ret = [] if not noindex: env.domaindata['py']['modules'][modname] = \ (env.docname, self.options.get('synopsis', ''), self.options.get('platform', ''), 'deprecated' in self.options) # make a duplicate entry in 'objects' to facilitate searching for # the module in PythonDomain.find_obj() env.domaindata['py']['objects'][modname] = (env.docname, 'module') targetnode = nodes.target('', '', ids=['module-' + modname], ismod=True) self.state.document.note_explicit_target(targetnode) # the platform and synopsis aren't printed; in fact, they are only # used in the modindex currently ret.append(targetnode) indextext = _('%s (module)') % modname inode = addnodes.index(entries=[('single', indextext, 'module-' + modname, '')]) ret.append(inode) return ret class PyCurrentModule(Directive): """ This directive is just to tell Sphinx that we're documenting stuff in module foo, but links to module foo won't lead here. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = False option_spec = {} def run(self): env = self.state.document.settings.env modname = self.arguments[0].strip() if modname == 'None': env.ref_context.pop('py:module', None) else: env.ref_context['py:module'] = modname return [] class PyXRefRole(XRefRole): def process_link(self, env, refnode, has_explicit_title, title, target): refnode['py:module'] = env.ref_context.get('py:module') refnode['py:class'] = env.ref_context.get('py:class') if not has_explicit_title: title = title.lstrip('.') # only has a meaning for the target target = target.lstrip('~') # only has a meaning for the title # if the first character is a tilde, don't display the module/class # parts of the contents if title[0:1] == '~': title = title[1:] dot = title.rfind('.') if dot != -1: title = title[dot+1:] # if the first character is a dot, search more specific namespaces first # else search builtins first if target[0:1] == '.': target = target[1:] refnode['refspecific'] = True return title, target class PythonModuleIndex(Index): """ Index subclass to provide the Python module index. """ name = 'modindex' localname = l_('Python Module Index') shortname = l_('modules') def generate(self, docnames=None): content = {} # list of prefixes to ignore ignores = self.domain.env.config['modindex_common_prefix'] ignores = sorted(ignores, key=len, reverse=True) # list of all modules, sorted by module name modules = sorted(iteritems(self.domain.data['modules']), key=lambda x: x[0].lower()) # sort out collapsable modules prev_modname = '' num_toplevels = 0 for modname, (docname, synopsis, platforms, deprecated) in modules: if docnames and docname not in docnames: continue for ignore in ignores: if modname.startswith(ignore): modname = modname[len(ignore):] stripped = ignore break else: stripped = '' # we stripped the whole module name? 
if not modname: modname, stripped = stripped, '' entries = content.setdefault(modname[0].lower(), []) package = modname.split('.')[0] if package != modname: # it's a submodule if prev_modname == package: # first submodule - make parent a group head if entries: entries[-1][1] = 1 elif not prev_modname.startswith(package): # submodule without parent in list, add dummy entry entries.append([stripped + package, 1, '', '', '', '', '']) subtype = 2 else: num_toplevels += 1 subtype = 0 qualifier = deprecated and _('Deprecated') or '' entries.append([stripped + modname, subtype, docname, 'module-' + stripped + modname, platforms, qualifier, synopsis]) prev_modname = modname # apply heuristics when to collapse modindex at page load: # only collapse if number of toplevel modules is larger than # number of submodules collapse = len(modules) - num_toplevels < num_toplevels # sort by first letter content = sorted(iteritems(content)) return content, collapse class PythonDomain(Domain): """Python language domain.""" name = 'py' label = 'Python' object_types = { 'function': ObjType(l_('function'), 'func', 'obj'), 'data': ObjType(l_('data'), 'data', 'obj'), 'class': ObjType(l_('class'), 'class', 'exc', 'obj'), 'exception': ObjType(l_('exception'), 'exc', 'class', 'obj'), 'method': ObjType(l_('method'), 'meth', 'obj'), 'classmethod': ObjType(l_('class method'), 'meth', 'obj'), 'staticmethod': ObjType(l_('static method'), 'meth', 'obj'), 'attribute': ObjType(l_('attribute'), 'attr', 'obj'), 'module': ObjType(l_('module'), 'mod', 'obj'), } directives = { 'function': PyModulelevel, 'data': PyModulelevel, 'class': PyClasslike, 'exception': PyClasslike, 'method': PyClassmember, 'classmethod': PyClassmember, 'staticmethod': PyClassmember, 'attribute': PyClassmember, 'module': PyModule, 'currentmodule': PyCurrentModule, 'decorator': PyDecoratorFunction, 'decoratormethod': PyDecoratorMethod, } roles = { 'data': PyXRefRole(), 'exc': PyXRefRole(), 'func': PyXRefRole(fix_parens=True), 'class': PyXRefRole(), 'const': PyXRefRole(), 'attr': PyXRefRole(), 'meth': PyXRefRole(fix_parens=True), 'mod': PyXRefRole(), 'obj': PyXRefRole(), } initial_data = { 'objects': {}, # fullname -> docname, objtype 'modules': {}, # modname -> docname, synopsis, platform, deprecated } indices = [ PythonModuleIndex, ] def clear_doc(self, docname): for fullname, (fn, _l) in list(self.data['objects'].items()): if fn == docname: del self.data['objects'][fullname] for modname, (fn, _x, _x, _x) in list(self.data['modules'].items()): if fn == docname: del self.data['modules'][modname] def merge_domaindata(self, docnames, otherdata): # XXX check duplicates? for fullname, (fn, objtype) in otherdata['objects'].items(): if fn in docnames: self.data['objects'][fullname] = (fn, objtype) for modname, data in otherdata['modules'].items(): if data[0] in docnames: self.data['modules'][modname] = data def find_obj(self, env, modname, classname, name, type, searchmode=0): """Find a Python object for "name", perhaps using the given module and/or classname. Returns a list of (name, object entry) tuples. """ # skip parens if name[-2:] == '()': name = name[:-2] if not name: return [] objects = self.data['objects'] matches = [] newname = None if searchmode == 1: if type is None: objtypes = list(self.object_types) else: objtypes = self.objtypes_for_role(type) if objtypes is not None: if modname and classname: fullname = modname + '.' + classname + '.' 
+ name if fullname in objects and objects[fullname][1] in objtypes: newname = fullname if not newname: if modname and modname + '.' + name in objects and \ objects[modname + '.' + name][1] in objtypes: newname = modname + '.' + name elif name in objects and objects[name][1] in objtypes: newname = name else: # "fuzzy" searching mode searchname = '.' + name matches = [(oname, objects[oname]) for oname in objects if oname.endswith(searchname) and objects[oname][1] in objtypes] else: # NOTE: searching for exact match, object type is not considered if name in objects: newname = name elif type == 'mod': # only exact matches allowed for modules return [] elif classname and classname + '.' + name in objects: newname = classname + '.' + name elif modname and modname + '.' + name in objects: newname = modname + '.' + name elif modname and classname and \ modname + '.' + classname + '.' + name in objects: newname = modname + '.' + classname + '.' + name # special case: builtin exceptions have module "exceptions" set elif type == 'exc' and '.' not in name and \ 'exceptions.' + name in objects: newname = 'exceptions.' + name # special case: object methods elif type in ('func', 'meth') and '.' not in name and \ 'object.' + name in objects: newname = 'object.' + name if newname is not None: matches.append((newname, objects[newname])) return matches def resolve_xref(self, env, fromdocname, builder, type, target, node, contnode): modname = node.get('py:module') clsname = node.get('py:class') searchmode = node.hasattr('refspecific') and 1 or 0 matches = self.find_obj(env, modname, clsname, target, type, searchmode) if not matches: return None elif len(matches) > 1: env.warn_node( 'more than one target found for cross-reference ' '%r: %s' % (target, ', '.join(match[0] for match in matches)), node) name, obj = matches[0] if obj[1] == 'module': return self._make_module_refnode(builder, fromdocname, name, contnode) else: return make_refnode(builder, fromdocname, obj[0], name, contnode, name) def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): modname = node.get('py:module') clsname = node.get('py:class') results = [] # always search in "refspecific" mode with the :any: role matches = self.find_obj(env, modname, clsname, target, None, 1) for name, obj in matches: if obj[1] == 'module': results.append(('py:mod', self._make_module_refnode(builder, fromdocname, name, contnode))) else: results.append(('py:' + self.role_for_objtype(obj[1]), make_refnode(builder, fromdocname, obj[0], name, contnode, name))) return results def _make_module_refnode(self, builder, fromdocname, name, contnode): # get additional info for modules docname, synopsis, platform, deprecated = self.data['modules'][name] title = name if synopsis: title += ': ' + synopsis if deprecated: title += _(' (deprecated)') if platform: title += ' (' + platform + ')' return make_refnode(builder, fromdocname, docname, 'module-' + name, contnode, title) def get_objects(self): for modname, info in iteritems(self.data['modules']): yield (modname, modname, 'module', info[0], 'module-' + modname, 0) for refname, (docname, type) in iteritems(self.data['objects']): if type != 'module': # modules are already handled yield (refname, refname, type, docname, refname, 1) Sphinx-1.3.6/sphinx/domains/std.py0000664000175000017500000007016012665045077020251 0ustar vagrantvagrant00000000000000# -*- coding: utf-8 -*- """ sphinx.domains.std ~~~~~~~~~~~~~~~~~~ The standard domain. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. 
:license: BSD, see LICENSE for details. """ import re import unicodedata from six import iteritems from docutils import nodes from docutils.nodes import fully_normalize_name from docutils.parsers.rst import directives from docutils.statemachine import ViewList from sphinx import addnodes from sphinx.roles import XRefRole from sphinx.locale import l_, _ from sphinx.domains import Domain, ObjType from sphinx.directives import ObjectDescription from sphinx.util import ws_re, get_figtype from sphinx.util.nodes import clean_astext, make_refnode from sphinx.util.compat import Directive # RE for option descriptions option_desc_re = re.compile(r'((?:/|--|-|\+)?[-?@#_a-zA-Z0-9]+)(=?\s*.*)') # RE for grammar tokens token_re = re.compile('`(\w+)`', re.U) class GenericObject(ObjectDescription): """ A generic x-ref directive registered with Sphinx.add_object_type(). """ indextemplate = '' parse_node = None def handle_signature(self, sig, signode): if self.parse_node: name = self.parse_node(self.env, sig, signode) else: signode.clear() signode += addnodes.desc_name(sig, sig) # normalize whitespace like XRefRole does name = ws_re.sub('', sig) return name def add_target_and_index(self, name, sig, signode): targetname = '%s-%s' % (self.objtype, name) signode['ids'].append(targetname) self.state.document.note_explicit_target(signode) if self.indextemplate: colon = self.indextemplate.find(':') if colon != -1: indextype = self.indextemplate[:colon].strip() indexentry = self.indextemplate[colon+1:].strip() % (name,) else: indextype = 'single' indexentry = self.indextemplate % (name,) self.indexnode['entries'].append((indextype, indexentry, targetname, '')) self.env.domaindata['std']['objects'][self.objtype, name] = \ self.env.docname, targetname class EnvVar(GenericObject): indextemplate = l_('environment variable; %s') class EnvVarXRefRole(XRefRole): """ Cross-referencing role for environment variables (adds an index entry). """ def result_nodes(self, document, env, node, is_ref): if not is_ref: return [node], [] varname = node['reftarget'] tgtid = 'index-%s' % env.new_serialno('index') indexnode = addnodes.index() indexnode['entries'] = [ ('single', varname, tgtid, ''), ('single', _('environment variable; %s') % varname, tgtid, '') ] targetnode = nodes.target('', '', ids=[tgtid]) document.note_explicit_target(targetnode) return [indexnode, targetnode, node], [] class Target(Directive): """ Generic target for user-defined cross-reference types. """ indextemplate = '' has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): env = self.state.document.settings.env # normalize whitespace in fullname like XRefRole does fullname = ws_re.sub(' ', self.arguments[0].strip()) targetname = '%s-%s' % (self.name, fullname) node = nodes.target('', '', ids=[targetname]) self.state.document.note_explicit_target(node) ret = [node] if self.indextemplate: indexentry = self.indextemplate % (fullname,) indextype = 'single' colon = indexentry.find(':') if colon != -1: indextype = indexentry[:colon].strip() indexentry = indexentry[colon+1:].strip() inode = addnodes.index(entries=[(indextype, indexentry, targetname, '')]) ret.insert(0, inode) name = self.name if ':' in self.name: _, name = self.name.split(':', 1) env.domaindata['std']['objects'][name, fullname] = \ env.docname, targetname return ret class Cmdoption(ObjectDescription): """ Description of a command-line option (.. option). 
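A signature may document several spellings of the same option, separated by ", "; each part is matched against `option_desc_re`. As a worked example (illustrative only), the signature `-o FILE, --output FILE` is parsed into::

    optname, args = '-o', ' FILE'          # first part
    optname, args = '--output', ' FILE'    # second part
    # firstname == '-o'; signode['allnames'] == ['-o', '--output']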
""" def handle_signature(self, sig, signode): """Transform an option description into RST nodes.""" count = 0 firstname = '' for potential_option in sig.split(', '): potential_option = potential_option.strip() m = option_desc_re.match(potential_option) if not m: self.env.warn( self.env.docname, 'Malformed option description %r, should ' 'look like "opt", "-opt args", "--opt args", ' '"/opt args" or "+opt args"' % potential_option, self.lineno) continue optname, args = m.groups() if count: signode += addnodes.desc_addname(', ', ', ') signode += addnodes.desc_name(optname, optname) signode += addnodes.desc_addname(args, args) if not count: firstname = optname signode['allnames'] = [optname] else: signode['allnames'].append(optname) count += 1 if not firstname: raise ValueError return firstname def add_target_and_index(self, firstname, sig, signode): currprogram = self.env.ref_context.get('std:program') for optname in signode.get('allnames', []): targetname = optname.replace('/', '-') if not targetname.startswith('-'): targetname = '-arg-' + targetname if currprogram: targetname = '-' + currprogram + targetname targetname = 'cmdoption' + targetname signode['ids'].append(targetname) self.state.document.note_explicit_target(signode) self.env.domaindata['std']['progoptions'][currprogram, optname] = \ self.env.docname, targetname # create only one index entry for the whole option if optname == firstname: self.indexnode['entries'].append( ('pair', _('%scommand line option; %s') % ((currprogram and currprogram + ' ' or ''), sig), targetname, '')) class Program(Directive): """ Directive to name the program for which options are documented. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): env = self.state.document.settings.env program = ws_re.sub('-', self.arguments[0].strip()) if program == 'None': env.ref_context.pop('std:program', None) else: env.ref_context['std:program'] = program return [] class OptionXRefRole(XRefRole): def process_link(self, env, refnode, has_explicit_title, title, target): refnode['std:program'] = env.ref_context.get('std:program') return title, target def make_termnodes_from_paragraph_node(env, node, new_id=None): gloss_entries = env.temp_data.setdefault('gloss_entries', set()) objects = env.domaindata['std']['objects'] termtext = node.astext() if new_id is None: new_id = nodes.make_id('term-' + termtext) if new_id in gloss_entries: new_id = 'term-' + str(len(gloss_entries)) gloss_entries.add(new_id) objects['term', termtext.lower()] = env.docname, new_id # add an index entry too indexnode = addnodes.index() indexnode['entries'] = [('single', termtext, new_id, 'main')] new_termnodes = [] new_termnodes.append(indexnode) new_termnodes.extend(node.children) new_termnodes.append(addnodes.termsep()) for termnode in new_termnodes: termnode.source, termnode.line = node.source, node.line return new_id, termtext, new_termnodes def make_term_from_paragraph_node(termnodes, ids): # make a single "term" node with all the terms, separated by termsep # nodes (remove the dangling trailing separator) term = nodes.term('', '', *termnodes[:-1]) term.source, term.line = termnodes[0].source, termnodes[0].line term.rawsource = term.astext() term['ids'].extend(ids) term['names'].extend(ids) return term class Glossary(Directive): """ Directive to create a glossary with cross-reference targets for :term: roles. 
""" has_content = True required_arguments = 0 optional_arguments = 0 final_argument_whitespace = False option_spec = { 'sorted': directives.flag, } def run(self): env = self.state.document.settings.env node = addnodes.glossary() node.document = self.state.document # This directive implements a custom format of the reST definition list # that allows multiple lines of terms before the definition. This is # easy to parse since we know that the contents of the glossary *must # be* a definition list. # first, collect single entries entries = [] in_definition = True was_empty = True messages = [] for line, (source, lineno) in zip(self.content, self.content.items): # empty line -> add to last definition if not line: if in_definition and entries: entries[-1][1].append('', source, lineno) was_empty = True continue # unindented line -> a term if line and not line[0].isspace(): # enable comments if line.startswith('.. '): continue # first term of definition if in_definition: if not was_empty: messages.append(self.state.reporter.system_message( 2, 'glossary term must be preceded by empty line', source=source, line=lineno)) entries.append(([(line, source, lineno)], ViewList())) in_definition = False # second term and following else: if was_empty: messages.append(self.state.reporter.system_message( 2, 'glossary terms must not be separated by empty ' 'lines', source=source, line=lineno)) if entries: entries[-1][0].append((line, source, lineno)) else: messages.append(self.state.reporter.system_message( 2, 'glossary seems to be misformatted, check ' 'indentation', source=source, line=lineno)) else: if not in_definition: # first line of definition, determines indentation in_definition = True indent_len = len(line) - len(line.lstrip()) if entries: entries[-1][1].append(line[indent_len:], source, lineno) else: messages.append(self.state.reporter.system_message( 2, 'glossary seems to be misformatted, check ' 'indentation', source=source, line=lineno)) was_empty = False # now, parse all the entries into a big definition list items = [] for terms, definition in entries: termtexts = [] termnodes = [] system_messages = [] ids = [] for line, source, lineno in terms: # parse the term with inline markup res = self.state.inline_text(line, lineno) system_messages.extend(res[1]) # get a text-only representation of the term and register it # as a cross-reference target tmp = nodes.paragraph('', '', *res[0]) tmp.source = source tmp.line = lineno new_id, termtext, new_termnodes = \ make_termnodes_from_paragraph_node(env, tmp) ids.append(new_id) termtexts.append(termtext) termnodes.extend(new_termnodes) term = make_term_from_paragraph_node(termnodes, ids) term += system_messages defnode = nodes.definition() if definition: self.state.nested_parse(definition, definition.items[0][1], defnode) items.append((termtexts, nodes.definition_list_item('', term, defnode))) if 'sorted' in self.options: items.sort(key=lambda x: unicodedata.normalize('NFD', x[0][0].lower())) dlist = nodes.definition_list() dlist['classes'].append('glossary') dlist.extend(item[1] for item in items) node += dlist return messages + [node] def token_xrefs(text): retnodes = [] pos = 0 for m in token_re.finditer(text): if m.start() > pos: txt = text[pos:m.start()] retnodes.append(nodes.Text(txt, txt)) refnode = addnodes.pending_xref( m.group(1), reftype='token', refdomain='std', reftarget=m.group(1)) refnode += nodes.literal(m.group(1), m.group(1), classes=['xref']) retnodes.append(refnode) pos = m.end() if pos < len(text): 
retnodes.append(nodes.Text(text[pos:], text[pos:])) return retnodes class ProductionList(Directive): """ Directive to list grammar productions. """ has_content = False required_arguments = 1 optional_arguments = 0 final_argument_whitespace = True option_spec = {} def run(self): env = self.state.document.settings.env objects = env.domaindata['std']['objects'] node = addnodes.productionlist() messages = [] i = 0 for rule in self.arguments[0].split('\n'): if i == 0 and ':' not in rule: # production group continue i += 1 try: name, tokens = rule.split(':', 1) except ValueError: break subnode = addnodes.production() subnode['tokenname'] = name.strip() if subnode['tokenname']: idname = 'grammar-token-%s' % subnode['tokenname'] if idname not in self.state.document.ids: subnode['ids'].append(idname) self.state.document.note_implicit_target(subnode, subnode) objects['token', subnode['tokenname']] = env.docname, idname subnode.extend(token_xrefs(tokens)) node.append(subnode) return [node] + messages class StandardDomain(Domain): """ Domain for all objects that don't fit into another domain or are added via the application interface. """ name = 'std' label = 'Default' object_types = { 'term': ObjType(l_('glossary term'), 'term', searchprio=-1), 'token': ObjType(l_('grammar token'), 'token', searchprio=-1), 'label': ObjType(l_('reference label'), 'ref', 'keyword', searchprio=-1), 'envvar': ObjType(l_('environment variable'), 'envvar'), 'cmdoption': ObjType(l_('program option'), 'option'), } directives = { 'program': Program, 'cmdoption': Cmdoption, # old name for backwards compatibility 'option': Cmdoption, 'envvar': EnvVar, 'glossary': Glossary, 'productionlist': ProductionList, } roles = { 'option': OptionXRefRole(), 'envvar': EnvVarXRefRole(), # links to tokens in grammar productions 'token': XRefRole(), # links to terms in glossary 'term': XRefRole(lowercase=True, innernodeclass=nodes.inline, warn_dangling=True), # links to headings or arbitrary labels 'ref': XRefRole(lowercase=True, innernodeclass=nodes.inline, warn_dangling=True), # links to labels of numbered figures, tables and code-blocks 'numref': XRefRole(lowercase=True, warn_dangling=True), # links to labels, without a different title 'keyword': XRefRole(warn_dangling=True), } initial_data = { 'progoptions': {}, # (program, name) -> docname, labelid 'objects': {}, # (type, name) -> docname, labelid 'labels': { # labelname -> docname, labelid, sectionname 'genindex': ('genindex', '', l_('Index')), 'modindex': ('py-modindex', '', l_('Module Index')), 'search': ('search', '', l_('Search Page')), }, 'anonlabels': { # labelname -> docname, labelid 'genindex': ('genindex', ''), 'modindex': ('py-modindex', ''), 'search': ('search', ''), }, } dangling_warnings = { 'term': 'term not in glossary: %(target)s', 'ref': 'undefined label: %(target)s (if the link has no caption ' 'the label must precede a section header)', 'numref': 'undefined label: %(target)s', 'keyword': 'unknown keyword: %(target)s', } def clear_doc(self, docname): for key, (fn, _l) in list(self.data['progoptions'].items()): if fn == docname: del self.data['progoptions'][key] for key, (fn, _l) in list(self.data['objects'].items()): if fn == docname: del self.data['objects'][key] for key, (fn, _l, _l) in list(self.data['labels'].items()): if fn == docname: del self.data['labels'][key] for key, (fn, _l) in list(self.data['anonlabels'].items()): if fn == docname: del self.data['anonlabels'][key] def merge_domaindata(self, docnames, otherdata): # XXX duplicates? 
for key, data in otherdata['progoptions'].items(): if data[0] in docnames: self.data['progoptions'][key] = data for key, data in otherdata['objects'].items(): if data[0] in docnames: self.data['objects'][key] = data for key, data in otherdata['labels'].items(): if data[0] in docnames: self.data['labels'][key] = data for key, data in otherdata['anonlabels'].items(): if data[0] in docnames: self.data['anonlabels'][key] = data def process_doc(self, env, docname, document): labels, anonlabels = self.data['labels'], self.data['anonlabels'] for name, explicit in iteritems(document.nametypes): if not explicit: continue labelid = document.nameids[name] if labelid is None: continue node = document.ids[labelid] if name.isdigit() or 'refuri' in node or \ node.tagname.startswith('desc_'): # ignore footnote labels, labels automatically generated from a # link and object descriptions continue if name in labels: env.warn_node('duplicate label %s, ' % name + 'other instance ' 'in ' + env.doc2path(labels[name][0]), node) anonlabels[name] = docname, labelid if node.tagname == 'section': sectname = clean_astext(node[0]) # node[0] == title node elif node.tagname == 'figure': for n in node: if n.tagname == 'caption': sectname = clean_astext(n) break else: continue elif node.tagname == 'image' and node.parent.tagname == 'figure': for n in node.parent: if n.tagname == 'caption': sectname = clean_astext(n) break else: continue elif node.tagname == 'table': for n in node: if n.tagname == 'title': sectname = clean_astext(n) break else: continue elif node.tagname == 'container' and node.get('literal_block'): for n in node: if n.tagname == 'caption': sectname = clean_astext(n) break else: continue elif node.traverse(addnodes.toctree): n = node.traverse(addnodes.toctree)[0] if n.get('caption'): sectname = n['caption'] else: continue else: # anonymous-only labels continue labels[name] = docname, labelid, sectname def build_reference_node(self, fromdocname, builder, docname, labelid, sectname, **options): nodeclass = options.pop('nodeclass', nodes.reference) newnode = nodeclass('', '', internal=True, **options) innernode = nodes.inline(sectname, sectname) if docname == fromdocname: newnode['refid'] = labelid else: # set more info in contnode; in case the # get_relative_uri call raises NoUri, # the builder will then have to resolve these contnode = addnodes.pending_xref('') contnode['refdocname'] = docname contnode['refsectname'] = sectname newnode['refuri'] = builder.get_relative_uri( fromdocname, docname) if labelid: newnode['refuri'] += '#' + labelid newnode.append(innernode) return newnode def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): if typ == 'ref': if node['refexplicit']: # reference to anonymous label; the reference uses # the supplied link caption docname, labelid = self.data['anonlabels'].get(target, ('', '')) sectname = node.astext() else: # reference to named label; the final node will # contain the section name after the label docname, labelid, sectname = self.data['labels'].get(target, ('', '', '')) if not docname: return None return self.build_reference_node(fromdocname, builder, docname, labelid, sectname) elif typ == 'numref': docname, labelid = self.data['anonlabels'].get(target, ('', '')) if not docname: return None if env.config.numfig is False: env.warn(fromdocname, 'numfig is disabled. 
:numref: is ignored.', lineno=node.line) return contnode try: target_node = env.get_doctree(docname).ids[labelid] figtype = get_figtype(target_node) except: return None try: figure_id = target_node['ids'][0] fignumber = env.toc_fignumbers[docname][figtype][figure_id] except (KeyError, IndexError): # target_node is found, but fignumber is not assigned. # Maybe it is defined in orphaned document. env.warn(fromdocname, "no number is assigned for %s: %s" % (figtype, labelid), lineno=node.line) return contnode title = contnode.astext() if target == fully_normalize_name(title): title = env.config.numfig_format.get(figtype, '') try: newtitle = title % '.'.join(map(str, fignumber)) except TypeError: env.warn(fromdocname, 'invalid numfig_format: %s' % title, lineno=node.line) return None return self.build_reference_node(fromdocname, builder, docname, labelid, newtitle, nodeclass=addnodes.number_reference, title=title) elif typ == 'keyword': # keywords are oddballs: they are referenced by named labels docname, labelid, _ = self.data['labels'].get(target, ('', '', '')) if not docname: return None return make_refnode(builder, fromdocname, docname, labelid, contnode) elif typ == 'option': progname = node.get('std:program') target = target.strip() docname, labelid = self.data['progoptions'].get((progname, target), ('', '')) if not docname: commands = [] while ws_re.search(target): subcommand, target = ws_re.split(target, 1) commands.append(subcommand) progname = "-".join(commands) docname, labelid = self.data['progoptions'].get((progname, target), ('', '')) if docname: break else: return None return make_refnode(builder, fromdocname, docname, labelid, contnode) else: objtypes = self.objtypes_for_role(typ) or [] for objtype in objtypes: if (objtype, target) in self.data['objects']: docname, labelid = self.data['objects'][objtype, target] break else: docname, labelid = '', '' if not docname: return None return make_refnode(builder, fromdocname, docname, labelid, contnode) def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): results = [] ltarget = target.lower() # :ref: lowercases its target automatically for role in ('ref', 'option'): # do not try "keyword" res = self.resolve_xref(env, fromdocname, builder, role, ltarget if role == 'ref' else target, node, contnode) if res: results.append(('std:' + role, res)) # all others for objtype in self.object_types: key = (objtype, target) if objtype == 'term': key = (objtype, ltarget) if key in self.data['objects']: docname, labelid = self.data['objects'][key] results.append(('std:' + self.role_for_objtype(objtype), make_refnode(builder, fromdocname, docname, labelid, contnode))) return results def get_objects(self): for (prog, option), info in iteritems(self.data['progoptions']): yield (option, option, 'option', info[0], info[1], 1) for (type, name), info in iteritems(self.data['objects']): yield (name, name, type, info[0], info[1], self.object_types[type].attrs['searchprio']) for name, info in iteritems(self.data['labels']): yield (name, info[2], 'label', info[0], info[1], -1) # add anonymous-only labels as well non_anon_labels = set(self.data['labels']) for name, info in iteritems(self.data['anonlabels']): if name not in non_anon_labels: yield (name, name, 'label', info[0], info[1], -1) def get_type_name(self, type, primary=False): # never prepend "Default" return type.lname Sphinx-1.3.6/sphinx/themes/0000775000175000017500000000000012665046155016732 5ustar 
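# ---------------------------------------------------------------------------
# Illustrative sketch, not part of the archived sources above: a common way
# for a third-party extension to feed objects into the 'std' domain is
# Sphinx.add_object_type().  Entries created this way are stored in
# env.domaindata['std']['objects'] and resolved by the generic branch of
# StandardDomain.resolve_xref().  The 'confval' directive/role name below is
# only an example, not something defined by the code above.
def setup(app):
    # Registers a ``.. confval::`` directive plus a ``:confval:`` role;
    # each documented value is recorded under the ('confval', <name>) key.
    app.add_object_type('confval', 'confval',
                        indextemplate='pair: %s; configuration value')
    return {'version': '0.1', 'parallel_read_safe': True}
# ---------------------------------------------------------------------------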
vagrantvagrant00000000000000
Sphinx-1.3.6/sphinx/themes/haiku/0000775000175000017500000000000012665046155020033 5ustar vagrantvagrant00000000000000
Sphinx-1.3.6/sphinx/themes/haiku/static/0000775000175000017500000000000012665046155021322 5ustar vagrantvagrant00000000000000
Sphinx-1.3.6/sphinx/themes/haiku/static/bg-page.png0000664000175000017500000000022312477663665023343 0ustar vagrantvagrant00000000000000
[binary PNG image data]
Sphinx-1.3.6/sphinx/themes/haiku/static/bullet_orange.png0000664000175000017500000000055512477663665024673 0ustar vagrantvagrant00000000000000
[binary PNG image data]
Sphinx-1.3.6/sphinx/themes/haiku/static/alert_info_32.png0000664000175000017500000000222012477663665024466 0ustar vagrantvagrant00000000000000
[binary PNG image data]
Sphinx-1.3.6/sphinx/themes/haiku/static/haiku.css_t0000664000175000017500000001557212665045077023474 0ustar vagrantvagrant00000000000000
/*
 * haiku.css_t
 * ~~~~~~~~~~~
 *
 * Sphinx stylesheet -- haiku theme.
 *
 * Adapted from http://haiku-os.org/docs/Haiku-doc.css.
 * Original copyright message:
 *
 *     Copyright 2008-2009, Haiku. All rights reserved.
 *     Distributed under the terms of the MIT License.
 *
 * Authors:
 *     Francois Revol
 *     Stephan Assmus
 *     Braden Ewing
 *     Humdinger
 *
 * :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
 * :license: BSD, see LICENSE for details.
* */ @import url("basic.css"); html { margin: 0px; padding: 0px; background: #FFF url(bg-page.png) top left repeat-x; } body { line-height: 1.5; margin: auto; padding: 0px; font-family: "DejaVu Sans", Arial, Helvetica, sans-serif; min-width: 59em; max-width: 70em; color: {{ theme_textcolor }}; } div.footer { padding: 8px; font-size: 11px; text-align: center; letter-spacing: 0.5px; } /* link colors and text decoration */ a:link { font-weight: bold; text-decoration: none; color: {{ theme_linkcolor }}; } a:visited { font-weight: bold; text-decoration: none; color: {{ theme_visitedlinkcolor }}; } a:hover, a:active { text-decoration: underline; color: {{ theme_hoverlinkcolor }}; } /* Some headers act as anchors, don't give them a hover effect */ h1 a:hover, a:active { text-decoration: none; color: {{ theme_headingcolor }}; } h2 a:hover, a:active { text-decoration: none; color: {{ theme_headingcolor }}; } h3 a:hover, a:active { text-decoration: none; color: {{ theme_headingcolor }}; } h4 a:hover, a:active { text-decoration: none; color: {{ theme_headingcolor }}; } a.headerlink { color: #a7ce38; padding-left: 5px; } a.headerlink:hover { color: #a7ce38; } /* basic text elements */ div.content { margin-top: 20px; margin-left: 40px; margin-right: 40px; margin-bottom: 50px; font-size: 0.9em; } /* heading and navigation */ div.header { position: relative; left: 0px; top: 0px; height: 85px; /* background: #eeeeee; */ padding: 0 40px; } div.header h1 { font-size: 1.6em; font-weight: normal; letter-spacing: 1px; color: {{ theme_headingcolor }}; border: 0; margin: 0; padding-top: 15px; } div.header h1 a { font-weight: normal; color: {{ theme_headingcolor }}; } div.header h2 { font-size: 1.3em; font-weight: normal; letter-spacing: 1px; text-transform: uppercase; color: #aaa; border: 0; margin-top: -3px; padding: 0; } div.header img.rightlogo { float: right; } div.title { font-size: 1.3em; font-weight: bold; color: {{ theme_headingcolor }}; border-bottom: dotted thin #e0e0e0; margin-bottom: 25px; } div.topnav { /* background: #e0e0e0; */ } div.topnav p { margin-top: 0; margin-left: 40px; margin-right: 40px; margin-bottom: 0px; text-align: right; font-size: 0.8em; } div.bottomnav { background: #eeeeee; } div.bottomnav p { margin-right: 40px; text-align: right; font-size: 0.8em; } a.uplink { font-weight: normal; } /* contents box */ table.index { margin: 0px 0px 30px 30px; padding: 1px; border-width: 1px; border-style: dotted; border-color: #e0e0e0; } table.index tr.heading { background-color: #e0e0e0; text-align: center; font-weight: bold; font-size: 1.1em; } table.index tr.index { background-color: #eeeeee; } table.index td { padding: 5px 20px; } table.index a:link, table.index a:visited { font-weight: normal; text-decoration: none; color: {{ theme_linkcolor }}; } table.index a:hover, table.index a:active { text-decoration: underline; color: {{ theme_hoverlinkcolor }}; } /* Haiku User Guide styles and layout */ /* Rounded corner boxes */ /* Common declarations */ div.admonition { -webkit-border-radius: 10px; -khtml-border-radius: 10px; -moz-border-radius: 10px; border-radius: 10px; border-style: dotted; border-width: thin; border-color: #dcdcdc; padding: 10px 15px 10px 15px; margin-bottom: 15px; margin-top: 15px; } div.note { padding: 10px 15px 10px 80px; background: #e4ffde url(alert_info_32.png) 15px 15px no-repeat; min-height: 42px; } div.warning { padding: 10px 15px 10px 80px; background: #fffbc6 url(alert_warning_32.png) 15px 15px no-repeat; min-height: 42px; } div.seealso { background: #e4ffde; } /* 
More layout and styles */ h1 { font-size: 1.3em; font-weight: bold; color: {{ theme_headingcolor }}; border-bottom: dotted thin #e0e0e0; margin-top: 30px; } h2 { font-size: 1.2em; font-weight: normal; color: {{ theme_headingcolor }}; border-bottom: dotted thin #e0e0e0; margin-top: 30px; } h3 { font-size: 1.1em; font-weight: normal; color: {{ theme_headingcolor }}; margin-top: 30px; } h4 { font-size: 1.0em; font-weight: normal; color: {{ theme_headingcolor }}; margin-top: 30px; } p { text-align: justify; } p.last { margin-bottom: 0; } ol { padding-left: 20px; } ul { padding-left: 5px; margin-top: 3px; } li { line-height: 1.3; } div.content ul > li { -moz-background-clip:border; -moz-background-inline-policy:continuous; -moz-background-origin:padding; background: transparent url(bullet_orange.png) no-repeat scroll left 0.45em; list-style-image: none; list-style-type: none; padding: 0 0 0 1.666em; margin-bottom: 3px; } td { vertical-align: top; } code { background-color: #e2e2e2; font-size: 1.0em; font-family: monospace; } pre { border-color: #0c3762; border-style: dotted; border-width: thin; margin: 0 0 12px 0; padding: 0.8em; background-color: #f0f0f0; } hr { border-top: 1px solid #ccc; border-bottom: 0; border-right: 0; border-left: 0; margin-bottom: 10px; margin-top: 20px; } /* printer only pretty stuff */ @media print { .noprint { display: none; } /* for acronyms we want their definitions inlined at print time */ acronym[title]:after { font-size: small; content: " (" attr(title) ")"; font-style: italic; } /* and not have mozilla dotted underline */ acronym { border: none; } div.topnav, div.bottomnav, div.header, table.index { display: none; } div.content { margin: 0px; padding: 0px; } html { background: #FFF; } } .viewcode-back { font-family: "DejaVu Sans", Arial, Helvetica, sans-serif; } div.viewcode-block:target { background-color: #f4debf; border-top: 1px solid #ac9; border-bottom: 1px solid #ac9; margin: -1px -10px; padding: 0 12px; } /* math display */ div.math p { text-align: center; } Sphinx-1.3.6/sphinx/themes/haiku/static/alert_warning_32.png0000664000175000017500000000204412477663665025204 0ustar vagrantvagrant00000000000000PNG  IHDR szzIDATXML\U o((e &`uUvlUcM7mhbDn(u4F4ta1PB(-TZt޻y3yL;'}ssszX)`os;\df7x᜜z] FUqDtNDDԖsO^+239*zKΜ:ϜH24=g )@N$љmWPmGwV e ,X`Hܭ9S_Zt ' (PggPK4@*Bh * ;P@5) A@9 eunjnb..R-Q[ًȗt30'fQQpʍO8 P1avʮ 37 /azM,ýv Ä뭼eCp` ^=_}mBt4 B*cl) `A]Trh?~_Ke:p#dښiFPVt& cn*p  trfrHi`[B {%- block haikurel1 %} {%- endblock %} {%- if prev %} «  {{ prev.title }}   ::   {%- endif %} {{ _('Contents') }} {%- if next %}   ::   {{ next.title }}  » {%- endif %} {%- block haikurel2 %} {%- endblock %}

    {% endmacro %} {% block content %}
    {#{%- if display_toc %}

    {{ _('Table Of Contents') }}

    {{ toc }}
    {%- endif %}#} {% block body %}{% endblock %}
    {% endblock %}
Sphinx-1.3.6/sphinx/themes/haiku/theme.conf0000664000175000017500000000037012477663665022010 0ustar vagrantvagrant00000000000000
[theme]
inherit = basic
stylesheet = haiku.css
pygments_style = autumn

[options]
full_logo = false
textcolor = #333333
headingcolor = #0c3762
linkcolor = #dc3c01
visitedlinkcolor = #892601
hoverlinkcolor = #ff4500
Sphinx-1.3.6/sphinx/themes/default/0000775000175000017500000000000012665046155020356 5ustar vagrantvagrant00000000000000
Sphinx-1.3.6/sphinx/themes/default/static/0000775000175000017500000000000012665046155021645 5ustar vagrantvagrant00000000000000
Sphinx-1.3.6/sphinx/themes/default/static/default.css0000664000175000017500000000003412660074372023775 0ustar vagrantvagrant00000000000000
@import url("classic.css");
Sphinx-1.3.6/sphinx/themes/default/theme.conf0000664000175000017500000000003212500756737022324 0ustar vagrantvagrant00000000000000
[theme]
inherit = classic
Sphinx-1.3.6/sphinx/themes/sphinxdoc/0000775000175000017500000000000012665046155020731 5ustar vagrantvagrant00000000000000
Sphinx-1.3.6/sphinx/themes/sphinxdoc/static/0000775000175000017500000000000012665046155022220 5ustar vagrantvagrant00000000000000
Sphinx-1.3.6/sphinx/themes/sphinxdoc/static/navigation.png0000664000175000017500000000033212477663665025077 0ustar vagrantvagrant00000000000000
[binary PNG image data]
Sphinx-1.3.6/sphinx/themes/sphinxdoc/static/contents.png0000664000175000017500000000031212477663665024573 0ustar vagrantvagrant00000000000000
[binary PNG image data]
Sphinx-1.3.6/sphinx/themes/sphinxdoc/static/sphinxdoc.css_t0000664000175000017500000001417212660074372025256 0ustar vagrantvagrant00000000000000
/*
 * sphinxdoc.css_t
 * ~~~~~~~~~~~~~~~
 *
 * Sphinx stylesheet -- sphinxdoc theme.  Originally created by
 * Armin Ronacher for Werkzeug.
 *
 * :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
 * :license: BSD, see LICENSE for details.
* */ @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; font-size: 14px; letter-spacing: -0.01em; line-height: 150%; text-align: center; background-color: #BFD1D4; color: black; padding: 0; border: 1px solid #aaa; margin: 0px 80px 0px 80px; min-width: 740px; } div.document { background-color: white; text-align: left; background-image: url(contents.png); background-repeat: repeat-x; } div.bodywrapper { margin: 0 {{ theme_sidebarwidth|toint + 10 }}px 0 0; border-right: 1px solid #ccc; } div.body { margin: 0; padding: 0.5em 20px 20px 20px; } div.related { font-size: 1em; } div.related ul { background-image: url(navigation.png); height: 2em; border-top: 1px solid #ddd; border-bottom: 1px solid #ddd; } div.related ul li { margin: 0; padding: 0; height: 2em; float: left; } div.related ul li.right { float: right; margin-right: 5px; } div.related ul li a { margin: 0; padding: 0 5px 0 5px; line-height: 1.75em; color: #EE9816; } div.related ul li a:hover { color: #3CA8E7; } div.sphinxsidebarwrapper { padding: 0; } div.sphinxsidebar { margin: 0; padding: 0.5em 15px 15px 0; width: {{ theme_sidebarwidth|toint - 20 }}px; float: right; font-size: 1em; text-align: left; } div.sphinxsidebar h3, div.sphinxsidebar h4 { margin: 1em 0 0.5em 0; font-size: 1em; padding: 0.1em 0 0.1em 0.5em; color: white; border: 1px solid #86989B; background-color: #AFC1C4; } div.sphinxsidebar h3 a { color: white; } div.sphinxsidebar ul { padding-left: 1.5em; margin-top: 7px; padding: 0; line-height: 130%; } div.sphinxsidebar ul ul { margin-left: 20px; } div.footer { background-color: #E3EFF1; color: #86989B; padding: 3px 8px 3px 0; clear: both; font-size: 0.8em; text-align: right; } div.footer a { color: #86989B; text-decoration: underline; } /* -- body styles ----------------------------------------------------------- */ p { margin: 0.8em 0 0.5em 0; } a { color: #CA7900; text-decoration: none; } a:hover { color: #2491CF; } div.body a { text-decoration: underline; } h1 { margin: 0; padding: 0.7em 0 0.3em 0; font-size: 1.5em; color: #11557C; } h2 { margin: 1.3em 0 0.2em 0; font-size: 1.35em; padding: 0; } h3 { margin: 1em 0 -0.3em 0; font-size: 1.2em; } div.body h1 a, div.body h2 a, div.body h3 a, div.body h4 a, div.body h5 a, div.body h6 a { color: black!important; } h1 a.anchor, h2 a.anchor, h3 a.anchor, h4 a.anchor, h5 a.anchor, h6 a.anchor { display: none; margin: 0 0 0 0.3em; padding: 0 0.2em 0 0.2em; color: #aaa!important; } h1:hover a.anchor, h2:hover a.anchor, h3:hover a.anchor, h4:hover a.anchor, h5:hover a.anchor, h6:hover a.anchor { display: inline; } h1 a.anchor:hover, h2 a.anchor:hover, h3 a.anchor:hover, h4 a.anchor:hover, h5 a.anchor:hover, h6 a.anchor:hover { color: #777; background-color: #eee; } a.headerlink { color: #c60f0f!important; font-size: 1em; margin-left: 6px; padding: 0 4px 0 4px; text-decoration: none!important; } a.headerlink:hover { background-color: #ccc; color: white!important; } cite, code, code { font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.95em; letter-spacing: 0.01em; } code { background-color: #f2f2f2; border-bottom: 1px solid #ddd; color: #333; } code.descname, code.descclassname, code.xref { border: 0; } hr { border: 1px solid #abc; margin: 2em; } a code { border: 0; color: #CA7900; } a code:hover { color: #2491CF; } pre { font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans 
Mono', monospace; font-size: 0.95em; letter-spacing: 0.015em; line-height: 120%; padding: 0.5em; border: 1px solid #ccc; background-color: #f8f8f8; } pre a { color: inherit; text-decoration: underline; } td.linenos pre { padding: 0.5em 0; } div.quotebar { background-color: #f8f8f8; max-width: 250px; float: right; padding: 2px 7px; border: 1px solid #ccc; } div.topic { background-color: #f8f8f8; } table { border-collapse: collapse; margin: 0 -0.5em 0 -0.5em; } table td, table th { padding: 0.2em 0.5em 0.2em 0.5em; } div.admonition, div.warning { font-size: 0.9em; margin: 1em 0 1em 0; border: 1px solid #86989B; background-color: #f7f7f7; padding: 0; } div.admonition p, div.warning p { margin: 0.5em 1em 0.5em 1em; padding: 0; } div.admonition pre, div.warning pre { margin: 0.4em 1em 0.4em 1em; } div.admonition p.admonition-title, div.warning p.admonition-title { margin: 0; padding: 0.1em 0 0.1em 0.5em; color: white; border-bottom: 1px solid #86989B; font-weight: bold; background-color: #AFC1C4; } div.warning { border: 1px solid #940000; } div.warning p.admonition-title { background-color: #CF0000; border-bottom-color: #940000; } div.admonition ul, div.admonition ol, div.warning ul, div.warning ol { margin: 0.1em 0.5em 0.5em 3em; padding: 0; } div.versioninfo { margin: 1em 0 0 0; border: 1px solid #ccc; background-color: #DDEAF0; padding: 8px; line-height: 1.3em; font-size: 0.9em; } .viewcode-back { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; } div.viewcode-block:target { background-color: #f4debf; border-top: 1px solid #ac9; border-bottom: 1px solid #ac9; } div.code-block-caption { background-color: #ddd; color: #222; border: 1px solid #ccc; } Sphinx-1.3.6/sphinx/themes/sphinxdoc/layout.html0000664000175000017500000000060012660074372023125 0ustar vagrantvagrant00000000000000{# sphinxdoc/layout.html ~~~~~~~~~~~~~~~~~~~~~ Sphinx layout template for the sphinxdoc theme. :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. #} {%- extends "basic/layout.html" %} {# put the sidebar before the body #} {% block sidebar1 %}{{ sidebar() }}{% endblock %} {% block sidebar2 %}{% endblock %} Sphinx-1.3.6/sphinx/themes/sphinxdoc/theme.conf0000664000175000017500000000011512477663665022713 0ustar vagrantvagrant00000000000000[theme] inherit = basic stylesheet = sphinxdoc.css pygments_style = friendly Sphinx-1.3.6/sphinx/themes/classic/0000775000175000017500000000000012665046155020353 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/themes/classic/static/0000775000175000017500000000000012665046155021642 5ustar vagrantvagrant00000000000000Sphinx-1.3.6/sphinx/themes/classic/static/sidebar.js_t0000664000175000017500000001220012660074372024124 0ustar vagrantvagrant00000000000000/* * sidebar.js * ~~~~~~~~~~ * * This script makes the Sphinx sidebar collapsible. * * .sphinxsidebar contains .sphinxsidebarwrapper. This script adds * in .sphixsidebar, after .sphinxsidebarwrapper, the #sidebarbutton * used to collapse and expand the sidebar. * * When the sidebar is collapsed the .sphinxsidebarwrapper is hidden * and the width of the sidebar and the margin-left of the document * are decreased. When the sidebar is expanded the opposite happens. * This script saves a per-browser/per-session cookie used to * remember the position of the sidebar among the pages. * Once the browser is closed the cookie is deleted and the position * reset to the default (expanded). 
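 *
 * The saved value is simply "sidebar=collapsed" or "sidebar=expanded";
 * set_position_from_cookie() parses document.cookie when the page loads
 * and collapses the sidebar again if the collapsed state was stored.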
* * :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ $(function() { {% if theme_rightsidebar|tobool %} {% set side = 'right' %} {% set opposite = 'left' %} {% set initial_label = '»' %} {% set expand_label = '«' %} {% set collapse_label = '»' %} {% else %} {% set side = 'left' %} {% set opposite = 'right' %} {% set initial_label = '«' %} {% set expand_label = '»' %} {% set collapse_label = '«' %} {% endif %} // global elements used by the functions. // the 'sidebarbutton' element is defined as global after its // creation, in the add_sidebar_button function var bodywrapper = $('.bodywrapper'); var sidebar = $('.sphinxsidebar'); var sidebarwrapper = $('.sphinxsidebarwrapper'); // for some reason, the document has no sidebar; do not run into errors if (!sidebar.length) return; // original margin-left of the bodywrapper and width of the sidebar // with the sidebar expanded var bw_margin_expanded = bodywrapper.css('margin-{{side}}'); var ssb_width_expanded = sidebar.width(); // margin-left of the bodywrapper and width of the sidebar // with the sidebar collapsed var bw_margin_collapsed = '.8em'; var ssb_width_collapsed = '.8em'; // colors used by the current theme var dark_color = $('.related').css('background-color'); var light_color = $('.document').css('background-color'); function sidebar_is_collapsed() { return sidebarwrapper.is(':not(:visible)'); } function toggle_sidebar() { if (sidebar_is_collapsed()) expand_sidebar(); else collapse_sidebar(); } function collapse_sidebar() { sidebarwrapper.hide(); sidebar.css('width', ssb_width_collapsed); bodywrapper.css('margin-{{side}}', bw_margin_collapsed); sidebarbutton.css({ 'margin-{{side}}': '0', 'height': bodywrapper.height() }); sidebarbutton.find('span').text('{{expand_label}}'); sidebarbutton.attr('title', _('Expand sidebar')); document.cookie = 'sidebar=collapsed'; } function expand_sidebar() { bodywrapper.css('margin-{{side}}', bw_margin_expanded); sidebar.css('width', ssb_width_expanded); sidebarwrapper.show(); sidebarbutton.css({ 'margin-{{side}}': ssb_width_expanded-12, 'height': bodywrapper.height() }); sidebarbutton.find('span').text('{{collapse_label}}'); sidebarbutton.attr('title', _('Collapse sidebar')); document.cookie = 'sidebar=expanded'; } function add_sidebar_button() { sidebarwrapper.css({ 'float': '{{side}}', 'margin-{{opposite}}': '0', 'width': ssb_width_expanded - 28 }); // create the button sidebar.append( '
    {{initial_label}}
    ' ); var sidebarbutton = $('#sidebarbutton'); light_color = sidebarbutton.css('background-color'); // find the height of the viewport to center the '<<' in the page var viewport_height; if (window.innerHeight) viewport_height = window.innerHeight; else viewport_height = $(window).height(); sidebarbutton.find('span').css({ 'display': 'block', 'margin-top': (viewport_height - sidebar.position().top - 20) / 2 }); sidebarbutton.click(toggle_sidebar); sidebarbutton.attr('title', _('Collapse sidebar')); sidebarbutton.css({ 'color': '#FFFFFF', 'border-{{side}}': '1px solid ' + dark_color, 'font-size': '1.2em', 'cursor': 'pointer', 'height': bodywrapper.height(), 'padding-top': '1px', 'margin-{{side}}': ssb_width_expanded - 12 }); sidebarbutton.hover( function () { $(this).css('background-color', dark_color); }, function () { $(this).css('background-color', light_color); } ); } function set_position_from_cookie() { if (!document.cookie) return; var items = document.cookie.split(';'); for(var k=0; k