gccintro-1.0/README

This is the source code for "An Introduction to GCC", published by
Network Theory Ltd.

Brian Gough

Network Theory Ltd -- Publishing Free Software Manuals
15 Royal Park
Bristol BS8 3AL
United Kingdom

Tel: +44 (0)117 3179309
Fax: +44 (0)117 9048108
Web: http://www.network-theory.co.uk/

gccintro-1.0/fdl.texi

@cindex FDL, GNU Free Documentation License
@center Version 1.2, November 2002

@smallerfonts @rm

@display
Copyright @copyright{} 2000,2001,2002 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307, USA

Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
@end display

@enumerate 0

@item
PREAMBLE

The purpose of this License is to make a manual, textbook, or other
functional and useful document @dfn{free} in the sense of freedom: to
assure everyone the effective freedom to copy and redistribute it,
with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way
to get credit for their work, while not being considered responsible
for modifications made by others.

This License is a kind of ``copyleft'', which means that derivative
works of the document must themselves be free in the same sense.  It
complements the GNU General Public License, which is a copyleft
license designed for free software.

We have designed this License in order to use it for manuals for free
software, because free software needs free documentation: a free
program should come with manuals providing the same freedoms that the
software does.  But this License is not limited to software manuals;
it can be used for any textual work, regardless of subject matter or
whether it is published as a printed book.
We recommend this License principally for works whose purpose is instruction or reference. @item APPLICABILITY AND DEFINITIONS This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The ``Document'', below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as ``you''. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law. A ``Modified Version'' of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language. A ``Secondary Section'' is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them. The ``Invariant Sections'' are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none. 
The ``Cover Texts'' are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words. A ``Transparent'' copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not ``Transparent'' is called ``Opaque''. Examples of suitable formats for Transparent copies include plain @sc{ascii} without markup, Texinfo input format, La@TeX{} input format, @acronym{SGML} or @acronym{XML} using a publicly available @acronym{DTD}, and standard-conforming simple @acronym{HTML}, PostScript or @acronym{PDF} designed for human modification. Examples of transparent image formats include @acronym{PNG}, @acronym{XCF} and @acronym{JPG}. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, @acronym{SGML} or @acronym{XML} for which the @acronym{DTD} and/or processing tools are not generally available, and the machine-generated @acronym{HTML}, PostScript or @acronym{PDF} produced by some word processors for output purposes only. 
The ``Title Page'' means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, ``Title Page'' means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text. A section ``Entitled XYZ'' means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as ``Acknowledgements'', ``Dedications'', ``Endorsements'', or ``History''.) To ``Preserve the Title'' of such a section when you modify the Document means that it remains a section ``Entitled XYZ'' according to this definition. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. @item VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. You may also lend copies, under the same conditions stated above, and you may publicly display copies. 
@item COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects. If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages. If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. 
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document. @item MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version: @enumerate A @item Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission. @item List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement. @item State on the Title page the name of the publisher of the Modified Version, as the publisher. @item Preserve all the copyright notices of the Document. @item Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. @item Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below. @item Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. @item Include an unaltered copy of this License. 
@item Preserve the section Entitled ``History'', Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled ``History'' in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence. @item Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the ``History'' section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission. @item For any section Entitled ``Acknowledgements'' or ``Dedications'', Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. @item Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles. @item Delete any section Entitled ``Endorsements''. Such a section may not be included in the Modified Version. @item Do not retitle any existing section to be Entitled ``Endorsements'' or to conflict in title with any Invariant Section. @item Preserve any Warranty Disclaimers. @end enumerate If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. 
These titles must be distinct from any other section titles. You may add a section Entitled ``Endorsements'', provided it contains nothing but endorsements of your Modified Version by various parties---for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. @item COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers. The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. 
Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. In the combination, you must combine any sections Entitled ``History'' in the various original documents, forming one section Entitled ``History''; likewise combine any sections Entitled ``Acknowledgements'', and any sections Entitled ``Dedications''. You must delete all sections Entitled ``Endorsements.'' @item COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. @item AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an ``aggregate'' if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. 
Otherwise they must appear on printed covers that bracket the whole aggregate. @item TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled ``Acknowledgements'', ``Dedications'', or ``History'', the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. @item TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. @item FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See @uref{http://www.gnu.org/copyleft/}. Each version of the License is given a distinguishing version number. 
If the Document specifies that a particular numbered version of this License ``or any later version'' applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. @end enumerate @unnumberedsubsec ADDENDUM: How to use this License for your documents To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page: @smallexample @group Copyright (C) @var{year} @var{your name}. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled ``GNU Free Documentation License''. @end group @end smallexample If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the ``with...Texts.'' line with this: @smallexample @group with the Invariant Sections being @var{list their titles}, with the Front-Cover Texts being @var{list}, and with the Back-Cover Texts being @var{list}. @end group @end smallexample If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation. If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software. 
@c Local Variables:
@c ispell-local-pdict: "ispell-dict"
@c End:

gccintro-1.0/books.texi

Network Theory publishes books about free software under free
documentation licenses.  Our current catalogue includes the
following titles:

@itemize @bullet
@item
@uref{http://www.network-theory.co.uk/diff/manual/,@cite{Comparing and Merging Files with GNU diff and patch},@cite{Comparing and Merging Files with GNU diff and patch}}
by David MacKenzie, Paul Eggert, and Richard Stallman
(ISBN 0-9541617-5-0) $19.95 (@pounds{}12.95)

@item
@uref{http://www.network-theory.co.uk/cvs/manual/,@cite{Version Management with CVS},@cite{Version Management with CVS}}
by Per Cederqvist et al.
(ISBN 0-9541617-1-8) $29.95 (@pounds{}19.95)

@item
@uref{http://www.network-theory.co.uk/bash/manual/,@cite{GNU Bash Reference Manual},@cite{GNU Bash Reference Manual}}
by Chet Ramey and Brian Fox
(ISBN 0-9541617-7-7) $29.95 (@pounds{}19.95)

@item
@uref{http://www.network-theory.co.uk/R/manual/,@cite{An Introduction to R},@cite{An Introduction to R}}
by W.N. Venables, D.M. Smith and the R Development Core Team
(ISBN 0-9541617-4-2) $19.95 (@pounds{}12.95)

@item
@uref{http://www.network-theory.co.uk/octave/manual/,@cite{GNU Octave Manual},@cite{GNU Octave Manual}}
by John W. Eaton
(ISBN 0-9541617-2-6) $29.99 (@pounds{}19.99)

@item
@uref{http://www.network-theory.co.uk/gsl/manual/,@cite{GNU Scientific Library Reference Manual---Second Edition},@cite{GNU Scientific Library Reference Manual---Second Edition}}
by M. Galassi, J. Davies, J. Theiler, B. Gough, G. Jungman,
M. Booth, F. Rossi
(ISBN 0-9541617-3-4) $39.99 (@pounds{}24.99)

@item
@uref{http://www.network-theory.co.uk/python/manual/,@cite{An Introduction to Python},@cite{An Introduction to Python}}
by Guido van Rossum and Fred L. Drake, Jr.
(ISBN 0-9541617-6-9) $19.95 (@pounds{}12.95)

@item
@uref{http://www.network-theory.co.uk/python/language/,@cite{Python Language Reference Manual},@cite{Python Language Reference Manual}}
by Guido van Rossum and Fred L. Drake, Jr.
(ISBN 0-9541617-8-5) $19.95 (@pounds{}12.95)

@item
@uref{http://www.network-theory.co.uk/R/base/,@cite{The R Reference Manual---Base Package (Volume 1)},@cite{The R Reference Manual---Base Package (Volume 1)}}
by the R Development Core Team
(ISBN 0-9546120-0-0) $69.95 (@pounds{}39.95)

@item
@uref{http://www.network-theory.co.uk/R/base/,@cite{The R Reference Manual---Base Package (Volume 2)},@cite{The R Reference Manual---Base Package (Volume 2)}}
by the R Development Core Team
(ISBN 0-9546120-1-9) $69.95 (@pounds{}39.95)
@end itemize

@noindent
All titles are available for order from bookstores worldwide.

@noindent
Sales of the manuals fund the development of more free software
and documentation.

@noindent
For details, visit the website @uref{http://www.network-theory.co.uk/}

gccintro-1.0/associations.texi

@cindex Free software organizations
The GNU Compiler Collection is part of the GNU Project, launched in
1984 to develop a complete Unix-like operating system which is free
software: the GNU system.

The Free Software Foundation (FSF) is a tax-exempt charity that
raises funds for continuing work on the GNU Project.  It is dedicated
to promoting your right to use, study, copy, modify, and redistribute
computer programs.
One of the best ways to help the development of free software is to become an associate member of the Free Software Foundation, and pay regular dues to support their efforts---for more information visit the FSF website: @table @cite @item Free Software Foundation (FSF) United States --- @uref{http://www.fsf.org/} @end table @noindent Around the world there are many other local free software membership organizations which support the aims of the Free Software Foundation, including: @table @cite @item Free Software Foundation Europe (FSF Europe) Europe --- @uref{http://www.fsfeurope.org/} @item Association for Free Software (AFFS) United Kingdom --- @uref{http://www.affs.org.uk/} @item Irish Free Software Organisation (IFSO) Ireland --- @uref{http://www.ifso.info/} @item Association for Promotion and Research in Libre Computing (APRIL) France --- @uref{http://www.april.org/} @item Associazione Software Libero Italy --- @uref{http://www.softwarelibero.it/} @item Verein zur F@"orderung Freier Software (FFIS) Germany --- @uref{http://www.ffis.de/} @item Verein zur F@"orderung Freier Software Austria --- @uref{http://www.ffs.or.at/} @item Association Electronique Libre (AEL) Belgium --- @uref{http://www.ael.be/} @item National Association for Free Software (ANSOL) Portugal --- @uref{http://www.ansol.org/} @item Free Software Initiative of Japan (FSIJ) Japan --- @uref{http://www.fsij.org/} @item Free Software Foundation of India (FSF India) India --- @uref{http://www.fsf.org.in/} @end table The @cite{Foundation for a Free Information Infrastructure (FFII)} is an important organization in Europe. FFII is not specific to free software, but works to defend the rights of all programmers and computer users against monopolies in the field of computing, such as patents on software. For more information about FFII, or to support their work with a donation, visit their website at @uref{http://www.ffii.org/}. 
gccintro-1.0/rms.texi

@node Foreword
@unnumbered Foreword

@i{This foreword has been kindly contributed by Richard M.@: Stallman,
the principal author of GCC and founder of the GNU Project.}
@vskip 1ex

This book is a guide to getting started with GCC, the GNU Compiler
Collection.  It will tell you how to use GCC as a programming tool.
GCC is a programming tool, that's true---but it is also something
more.  It is part of a 20-year campaign for freedom for computer
users.

We all want good software, but what does it mean for software to be
``good''?  Convenient features and reliability are what it means to
be @emph{technically} good, but that is not enough.  Good software
must also be @emph{ethically} good: it has to respect the users'
freedom.

As a user of software, you should have the right to run it as you see
fit, the right to study the source code and then change it as you see
fit, the right to redistribute copies of it to others, and the right
to publish a modified version so that you can contribute to building
the community.  When a program respects your freedom in this way, we
call it @dfn{free software}.

Before GCC, there were other compilers for C, Fortran, Ada, etc.  But
they were not free software; you could not use them in freedom.  I
wrote GCC so we could use a compiler without giving up our freedom.

A compiler alone is not enough---to use a computer system, you need a
whole operating system.  In 1983, all operating systems for modern
computers were non-free.  To remedy this, in 1984 I began developing
the GNU operating system, a Unix-like system that would be free
software.  Developing GCC was one part of developing GNU.

By the early 90s, the nearly-finished GNU operating system was
completed by the addition of a kernel, Linux, that became free
software in 1992.  The combined GNU/Linux operating system has
achieved the goal of making it possible to use a computer in freedom.
But freedom is never automatically secure, and we need to work to
defend it.  The Free Software Movement needs your support.

@vskip 1ex
@flushright
Richard M.@: Stallman
February 2004
@end flushright

gccintro-1.0/AUTHORS

gccintro-1.0/ChangeLog

gccintro-1.0/INSTALL

Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002 Free Software
Foundation, Inc.

   This file is free documentation; the Free Software Foundation gives
unlimited permission to copy, distribute and modify it.

Basic Installation
==================

   These are generic installation instructions.

   The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation.  It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions.  Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and
a file `config.log' containing compiler output (useful mainly for
debugging `configure').

   It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that
saves the results of its tests to speed up reconfiguring.  (Caching is
disabled by default to prevent problems with accidental use of stale
cache files.)

   If you need to do unusual things to compile the package, please try
to figure out how `configure' could check whether to do them, and mail
diffs or instructions to the address given in the `README' so they can
be considered for the next release.  If you are using the cache, and
at some point `config.cache' contains results you don't want to keep,
you may remove or edit it.

   The file `configure.ac' (or `configure.in') is used to create
`configure' by a program called `autoconf'.
You only need `configure.ac' if you want to change it or regenerate
`configure' using a newer version of `autoconf'.

The simplest way to compile this package is:

  1. `cd' to the directory containing the package's source code and
     type `./configure' to configure the package for your system.  If
     you're using `csh' on an old version of System V, you might need
     to type `sh ./configure' instead to prevent `csh' from trying to
     execute `configure' itself.

     Running `configure' takes awhile.  While running, it prints some
     messages telling which features it is checking for.

  2. Type `make' to compile the package.

  3. Optionally, type `make check' to run any self-tests that come
     with the package.

  4. Type `make install' to install the programs and any data files
     and documentation.

  5. You can remove the program binaries and object files from the
     source code directory by typing `make clean'.  To also remove the
     files that `configure' created (so you can compile the package
     for a different kind of computer), type `make distclean'.  There
     is also a `make maintainer-clean' target, but that is intended
     mainly for the package's developers.  If you use it, you may have
     to get all sorts of other programs in order to regenerate files
     that came with the distribution.

Compilers and Options
=====================

   Some systems require unusual options for compilation or linking
that the `configure' script does not know about.  Run `./configure
--help' for details on some of the pertinent environment variables.

   You can give `configure' initial values for configuration
parameters by setting variables in the command line or in the
environment.  Here is an example:

     ./configure CC=c89 CFLAGS=-O2 LIBS=-lposix

*Note Defining Variables::, for more details.

Compiling For Multiple Architectures
====================================

   You can compile the package for more than one kind of computer at
the same time, by placing the object files for each architecture in
their own directory.
To do this, you must use a version of `make' that supports the `VPATH' variable, such as GNU `make'.  `cd' to the directory where you want the object files and executables to go and run the `configure' script.  `configure' automatically checks for the source code in the directory that `configure' is in and in `..'.

If you have to use a `make' that does not support the `VPATH' variable, you have to compile the package for one architecture at a time in the source code directory.  After you have installed the package for one architecture, use `make distclean' before reconfiguring for another architecture.

Installation Names
==================

By default, `make install' will install the package's files in `/usr/local/bin', `/usr/local/man', etc.  You can specify an installation prefix other than `/usr/local' by giving `configure' the option `--prefix=PATH'.

You can specify separate installation prefixes for architecture-specific files and architecture-independent files.  If you give `configure' the option `--exec-prefix=PATH', the package will use PATH as the prefix for installing programs and libraries.  Documentation and other data files will still use the regular prefix.

In addition, if you use an unusual directory layout you can give options like `--bindir=PATH' to specify different values for particular kinds of files.  Run `configure --help' for a list of the directories you can set and what kinds of files go in them.

If the package supports it, you can cause programs to be installed with an extra prefix or suffix on their names by giving `configure' the option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.

Optional Features
=================

Some packages pay attention to `--enable-FEATURE' options to `configure', where FEATURE indicates an optional part of the package.  They may also pay attention to `--with-PACKAGE' options, where PACKAGE is something like `gnu-as' or `x' (for the X Window System).
The `README' should mention any `--enable-' and `--with-' options that the package recognizes.

For packages that use the X Window System, `configure' can usually find the X include and library files automatically, but if it doesn't, you can use the `configure' options `--x-includes=DIR' and `--x-libraries=DIR' to specify their locations.

Specifying the System Type
==========================

There may be some features `configure' cannot figure out automatically, but needs to determine by the type of machine the package will run on.  Usually, assuming the package is built to be run on the _same_ architectures, `configure' can figure that out, but if it prints a message saying it cannot guess the machine type, give it the `--build=TYPE' option.  TYPE can either be a short name for the system type, such as `sun4', or a canonical name which has the form:

     CPU-COMPANY-SYSTEM

where SYSTEM can have one of these forms:

     OS KERNEL-OS

See the file `config.sub' for the possible values of each field.  If `config.sub' isn't included in this package, then this package doesn't need to know the machine type.

If you are _building_ compiler tools for cross-compiling, you should use the `--target=TYPE' option to select the type of system they will produce code for.

If you want to _use_ a cross compiler, that generates code for a platform different from the build platform, you should specify the "host" platform (i.e., that on which the generated programs will eventually be run) with `--host=TYPE'.

Sharing Defaults
================

If you want to set default values for `configure' scripts to share, you can create a site shell script called `config.site' that gives default values for variables like `CC', `cache_file', and `prefix'.  `configure' looks for `PREFIX/share/config.site' if it exists, then `PREFIX/etc/config.site' if it exists.  Or, you can set the `CONFIG_SITE' environment variable to the location of the site script.  A warning: not all `configure' scripts look for a site script.
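As a concrete sketch of the `--build', `--host', and `--target' options described above (the system triplets here are illustrative examples, not a recommendation for any particular platform):

```shell
# Cross-building compiler tools: --target selects the system the tools
# will generate code for.
./configure --build=x86_64-pc-linux-gnu --target=arm-none-eabi

# Using an existing cross compiler: --host names the platform on which
# the generated programs will eventually run.
./configure --build=x86_64-pc-linux-gnu --host=arm-linux-gnueabi
```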
Defining Variables
==================

Variables not defined in a site shell script can be set in the environment passed to `configure'.  However, some packages may run configure again during the build, and the customized values of these variables may be lost.  In order to avoid this problem, you should set them in the `configure' command line, using `VAR=value'.  For example:

     ./configure CC=/usr/local2/bin/gcc

will cause the specified gcc to be used as the C compiler (unless it is overridden in the site shell script).

`configure' Invocation
======================

`configure' recognizes the following options to control how it operates.

`--help'
`-h'
     Print a summary of the options to `configure', and exit.

`--version'
`-V'
     Print the version of Autoconf used to generate the `configure'
     script, and exit.

`--cache-file=FILE'
     Enable the cache: use and save the results of the tests in FILE,
     traditionally `config.cache'.  FILE defaults to `/dev/null' to
     disable caching.

`--config-cache'
`-C'
     Alias for `--cache-file=config.cache'.

`--quiet'
`--silent'
`-q'
     Do not print messages saying which checks are being made.  To
     suppress all normal output, redirect it to `/dev/null' (any error
     messages will still be shown).

`--srcdir=DIR'
     Look for the package's source code in directory DIR.  Usually
     `configure' can determine that directory automatically.

`configure' also accepts some other, not widely useful, options.  Run `configure --help' for more details.
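Variable assignments and invocation options can be combined on a single `configure' command line. The compiler path, flags, and prefix below are purely illustrative:

```shell
# Set variables on the configure command line so they survive any re-run
# of configure during the build, and enable the test-result cache.
./configure CC=/usr/local2/bin/gcc CFLAGS='-g -O2' \
            --prefix=/opt/mypkg --config-cache --quiet
```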
gccintro-1.0/Makefile.am0000664000175000017500000000122010046440671010676 info_TEXINFOS = gccintro.texi gccintro_TEXINFOS = fdl.texi books.texi associations.texi rms.texi EXTRA_DIST = alpha.c ansi.c bad.c badpow.c bye_fn.c calc.c calc2.c castqual.c check.c collatz.c cov.c dbmain.c dtest.c dtestval.c dtestval2.c dtestval3.c gnuarray.c heap.c hello.c hello2.c hello_fn.c hellobad.c hellocc.c inline.c inline2.c main.c main2.c main3.c main4.c nested.c null.c optim.c pi.c prof.c readdir.c shadow.c shadow2.c stl2.c stl3.c sus.c test.c uninit.c uninit2.c w.c whetstone.c buffer.h grid.h hello.h hello1.h tprint.h hello.cc hellostr.cc string.cc templates.cc templates2.cc tmain.cc tmp.cc tprint.cc tprog.cc cov_c_gcov COPYING.FDL gccintro-1.0/Makefile.in0000664000175000017500000003461510046440735010726 # Makefile.in generated by automake 1.7.5 from Makefile.am. # @configure_input@ # Copyright 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003 # Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ srcdir = @srcdir@ top_srcdir = @top_srcdir@ VPATH = @srcdir@ pkgdatadir = $(datadir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ top_builddir = . 
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd INSTALL = @INSTALL@ install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LTLIBOBJS = @LTLIBOBJS@ MAKEINFO = @MAKEINFO@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ ac_ct_STRIP = @ac_ct_STRIP@ am__leading_dot = @am__leading_dot@ bindir = @bindir@ build_alias = @build_alias@ datadir = @datadir@ exec_prefix = @exec_prefix@ host_alias = @host_alias@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localstatedir = @localstatedir@ mandir = @mandir@ oldincludedir = @oldincludedir@ prefix = @prefix@ program_transform_name = @program_transform_name@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ info_TEXINFOS = gccintro.texi gccintro_TEXINFOS = fdl.texi books.texi associations.texi rms.texi EXTRA_DIST = alpha.c ansi.c bad.c badpow.c bye_fn.c calc.c calc2.c castqual.c check.c collatz.c cov.c dbmain.c dtest.c dtestval.c dtestval2.c dtestval3.c gnuarray.c heap.c hello.c hello2.c hello_fn.c hellobad.c hellocc.c 
inline.c inline2.c main.c main2.c main3.c main4.c nested.c null.c optim.c pi.c prof.c readdir.c shadow.c shadow2.c stl2.c stl3.c sus.c test.c uninit.c uninit2.c w.c whetstone.c buffer.h grid.h hello.h hello1.h tprint.h hello.cc hellostr.cc string.cc templates.cc templates2.cc tmain.cc tmp.cc tprint.cc tprog.cc cov_c_gcov COPYING.FDL subdir = . ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 mkinstalldirs = $(SHELL) $(top_srcdir)/mkinstalldirs CONFIG_CLEAN_FILES = depcomp = am__depfiles_maybe = DIST_SOURCES = am__TEXINFO_TEX_DIR = $(srcdir) INFO_DEPS = gccintro.info DVIS = gccintro.dvi PDFS = gccintro.pdf PSS = gccintro.ps TEXINFOS = gccintro.texi DIST_COMMON = README $(gccintro_TEXINFOS) AUTHORS ChangeLog INSTALL \ Makefile.am Makefile.in NEWS aclocal.m4 configure configure.ac \ install-sh missing mkinstalldirs texinfo.tex all: all-am .SUFFIXES: .SUFFIXES: .dvi .info .pdf .ps .texi am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \ configure.lineno $(srcdir)/Makefile.in: Makefile.am $(top_srcdir)/configure.ac $(ACLOCAL_M4) cd $(top_srcdir) && \ $(AUTOMAKE) --gnu Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe) $(top_builddir)/config.status: $(srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) $(SHELL) ./config.status --recheck $(srcdir)/configure: $(srcdir)/configure.ac $(ACLOCAL_M4) $(CONFIGURE_DEPENDENCIES) cd $(srcdir) && $(AUTOCONF) $(ACLOCAL_M4): configure.ac cd $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS) .texi.info: @rm -f $@ $@-[0-9] $@-[0-9][0-9] $(@:.info=).i[0-9] $(@:.info=).i[0-9][0-9] $(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \ -o $@ `test -f '$<' || echo '$(srcdir)/'`$< .texi.dvi: TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \ $(TEXI2DVI) `test -f '$<' || echo '$(srcdir)/'`$< .texi.pdf: 
TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \ $(TEXI2PDF) `test -f '$<' || echo '$(srcdir)/'`$< gccintro.info: gccintro.texi $(gccintro_TEXINFOS) gccintro.dvi: gccintro.texi $(gccintro_TEXINFOS) gccintro.pdf: gccintro.texi $(gccintro_TEXINFOS) TEXI2DVI = texi2dvi TEXI2PDF = $(TEXI2DVI) --pdf --batch DVIPS = dvips .dvi.ps: $(DVIPS) -o $@ $< uninstall-info-am: $(PRE_UNINSTALL) @if (install-info --version && \ install-info --version | grep -i -v debian) >/dev/null 2>&1; then \ list='$(INFO_DEPS)'; \ for file in $$list; do \ relfile=`echo "$$file" | sed 's|^.*/||'`; \ echo " install-info --info-dir=$(DESTDIR)$(infodir) --remove $(DESTDIR)$(infodir)/$$relfile"; \ install-info --info-dir=$(DESTDIR)$(infodir) --remove $(DESTDIR)$(infodir)/$$relfile; \ done; \ else :; fi @$(NORMAL_UNINSTALL) @list='$(INFO_DEPS)'; \ for file in $$list; do \ relfile=`echo "$$file" | sed 's|^.*/||'`; \ relfile_i=`echo "$$relfile" | sed 's|\.info$$||;s|$$|.i|'`; \ (if cd $(DESTDIR)$(infodir); then \ echo " rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9])"; \ rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]; \ else :; fi); \ done dist-info: $(INFO_DEPS) list='$(INFO_DEPS)'; \ for base in $$list; do \ if test -f $$base; then d=.; else d=$(srcdir); fi; \ for file in $$d/$$base*; do \ relfile=`expr "$$file" : "$$d/\(.*\)"`; \ test -f $(distdir)/$$relfile || \ cp -p $$file $(distdir)/$$relfile; \ done; \ done mostlyclean-aminfo: -rm -f gccintro.aux gccintro.cp gccintro.cps gccintro.fn gccintro.fns \ gccintro.ky gccintro.kys gccintro.log gccintro.pg \ gccintro.pgs gccintro.tmp gccintro.toc gccintro.tp \ gccintro.tps gccintro.vr gccintro.vrs gccintro.dvi \ gccintro.pdf gccintro.ps maintainer-clean-aminfo: @list='$(INFO_DEPS)'; for i in $$list; do \ i_i=`echo "$$i" | sed 's|\.info$$||;s|$$|.i|'`; \ echo " rm -f $$i 
$$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]"; \ rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]; \ done tags: TAGS TAGS: ctags: CTAGS CTAGS: DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) top_distdir = . distdir = $(PACKAGE)-$(VERSION) am__remove_distdir = \ { test ! -d $(distdir) \ || { find $(distdir) -type d ! -perm -200 -exec chmod u+w {} ';' \ && rm -fr $(distdir); }; } GZIP_ENV = --best distuninstallcheck_listfiles = find . -type f -print distcleancheck_listfiles = find . -type f -print distdir: $(DISTFILES) $(am__remove_distdir) mkdir $(distdir) @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \ list='$(DISTFILES)'; for file in $$list; do \ case $$file in \ $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \ $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \ esac; \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \ if test "$$dir" != "$$file" && test "$$dir" != "."; then \ dir="/$$dir"; \ $(mkinstalldirs) "$(distdir)$$dir"; \ else \ dir=''; \ fi; \ if test -d $$d/$$file; then \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \ fi; \ cp -pR $$d/$$file $(distdir)$$dir || exit 1; \ else \ test -f $(distdir)/$$file \ || cp -p $$d/$$file $(distdir)/$$file \ || exit 1; \ fi; \ done $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$(top_distdir)" distdir="$(distdir)" \ dist-info -find $(distdir) -type d ! -perm -777 -exec chmod a+rwx {} \; -o \ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \ ! -type d ! 
-perm -444 -exec $(SHELL) $(install_sh) -c -m a+r {} {} \; \ || chmod -R a+r $(distdir) dist-gzip: distdir $(AMTAR) chof - $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz $(am__remove_distdir) dist dist-all: distdir $(AMTAR) chof - $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz $(am__remove_distdir) # This target untars the dist file and tries a VPATH configuration. Then # it guarantees that the distribution is self-contained by making another # tarfile. distcheck: dist $(am__remove_distdir) GZIP=$(GZIP_ENV) gunzip -c $(distdir).tar.gz | $(AMTAR) xf - chmod -R a-w $(distdir); chmod a+w $(distdir) mkdir $(distdir)/_build mkdir $(distdir)/_inst chmod a-w $(distdir) dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \ && cd $(distdir)/_build \ && ../configure --srcdir=.. --prefix="$$dc_install_base" \ $(DISTCHECK_CONFIGURE_FLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) dvi \ && $(MAKE) $(AM_MAKEFLAGS) check \ && $(MAKE) $(AM_MAKEFLAGS) install \ && $(MAKE) $(AM_MAKEFLAGS) installcheck \ && $(MAKE) $(AM_MAKEFLAGS) uninstall \ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \ distuninstallcheck \ && chmod -R a-w "$$dc_install_base" \ && ({ \ (cd ../.. 
&& $(mkinstalldirs) "$$dc_destdir") \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \ } || { rm -rf "$$dc_destdir"; exit 1; }) \ && rm -rf "$$dc_destdir" \ && $(MAKE) $(AM_MAKEFLAGS) dist-gzip \ && rm -f $(distdir).tar.gz \ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck $(am__remove_distdir) @echo "$(distdir).tar.gz is ready for distribution" | \ sed 'h;s/./=/g;p;x;p;x' distuninstallcheck: @cd $(distuninstallcheck_dir) \ && test `$(distuninstallcheck_listfiles) | wc -l` -le 1 \ || { echo "ERROR: files left after uninstall:" ; \ if test -n "$(DESTDIR)"; then \ echo " (check DESTDIR support)"; \ fi ; \ $(distuninstallcheck_listfiles) ; \ exit 1; } >&2 distcleancheck: distclean @if test '$(srcdir)' = . ; then \ echo "ERROR: distcleancheck can only run from a VPATH build" ; \ exit 1 ; \ fi @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left in build directory after distclean:" ; \ $(distcleancheck_listfiles) ; \ exit 1; } >&2 check-am: all-am check: check-am all-am: Makefile $(INFO_DEPS) installdirs: $(mkinstalldirs) $(DESTDIR)$(infodir) install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ INSTALL_STRIP_FLAG=-s \ `test -z '$(STRIP)' || \ echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install mostlyclean-generic: clean-generic: distclean-generic: -rm -f Makefile $(CONFIG_CLEAN_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
clean: clean-am clean-am: clean-generic mostlyclean-am distclean: distclean-am -rm -f $(am__CONFIG_DISTCLEAN_FILES) distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: $(DVIS) info: info-am info-am: $(INFO_DEPS) install-data-am: install-info-am install-exec-am: install-info: install-info-am install-info-am: $(INFO_DEPS) @$(NORMAL_INSTALL) $(mkinstalldirs) $(DESTDIR)$(infodir) @list='$(INFO_DEPS)'; \ for file in $$list; do \ if test -f $$file; then d=.; else d=$(srcdir); fi; \ file_i=`echo "$$file" | sed 's|\.info$$||;s|$$|.i|'`; \ for ifile in $$d/$$file $$d/$$file-[0-9] $$d/$$file-[0-9][0-9] \ $$d/$$file_i[0-9] $$d/$$file_i[0-9][0-9] ; do \ if test -f $$ifile; then \ relfile=`echo "$$ifile" | sed 's|^.*/||'`; \ echo " $(INSTALL_DATA) $$ifile $(DESTDIR)$(infodir)/$$relfile"; \ $(INSTALL_DATA) $$ifile $(DESTDIR)$(infodir)/$$relfile; \ else : ; fi; \ done; \ done @$(POST_INSTALL) @if (install-info --version && \ install-info --version | grep -i -v debian) >/dev/null 2>&1; then \ list='$(INFO_DEPS)'; \ for file in $$list; do \ relfile=`echo "$$file" | sed 's|^.*/||'`; \ echo " install-info --info-dir=$(DESTDIR)$(infodir) $(DESTDIR)$(infodir)/$$relfile";\ install-info --info-dir=$(DESTDIR)$(infodir) $(DESTDIR)$(infodir)/$$relfile || :;\ done; \ else : ; fi install-man: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf autom4te.cache maintainer-clean-am: distclean-am maintainer-clean-aminfo \ maintainer-clean-generic mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-aminfo mostlyclean-generic pdf: pdf-am pdf-am: $(PDFS) ps: ps-am ps-am: $(PSS) uninstall-am: uninstall-info-am .PHONY: all all-am check check-am clean clean-generic dist dist-all \ dist-gzip dist-info distcheck distclean distclean-generic \ distcleancheck distdir distuninstallcheck dvi dvi-am info \ info-am install install-am install-data install-data-am \ install-exec install-exec-am install-info install-info-am \ install-man install-strip 
installcheck installcheck-am \ installdirs maintainer-clean maintainer-clean-aminfo \ maintainer-clean-generic mostlyclean mostlyclean-aminfo \ mostlyclean-generic pdf pdf-am ps ps-am uninstall uninstall-am \ uninstall-info-am # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: gccintro-1.0/NEWS0000664000175000017500000000000010014170623007324 gccintro-1.0/aclocal.m40000664000175000017500000007276010046436603010523 # generated automatically by aclocal 1.7.5 -*- Autoconf -*- # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002 # Free Software Foundation, Inc. # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. # Do all the work for Automake. -*- Autoconf -*- # This macro actually does too much some checks are only needed if # your package does certain things. But this isn't really a big deal. # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003 # Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # 02111-1307, USA. # serial 10 AC_PREREQ([2.54]) # Autoconf 2.50 wants to disallow AM_ names. We explicitly allow # the ones we care about. m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl # AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE]) # AM_INIT_AUTOMAKE([OPTIONS]) # ----------------------------------------------- # The call with PACKAGE and VERSION arguments is the old style # call (pre autoconf-2.50), which is being phased out. PACKAGE # and VERSION should now be passed to AC_INIT and removed from # the call to AM_INIT_AUTOMAKE. # We support both call styles for the transition. After # the next Automake release, Autoconf can make the AC_INIT # arguments mandatory, and then we can depend on a new Autoconf # release and drop the old call support. AC_DEFUN([AM_INIT_AUTOMAKE], [AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl AC_REQUIRE([AC_PROG_INSTALL])dnl # test to see if srcdir already configured if test "`cd $srcdir && pwd`" != "`pwd`" && test -f $srcdir/config.status; then AC_MSG_ERROR([source directory already configured; run "make distclean" there first]) fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi AC_SUBST([CYGPATH_W]) # Define the identity of the package. dnl Distinguish between old-style and new-style calls. m4_ifval([$2], [m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl AC_SUBST([PACKAGE], [$1])dnl AC_SUBST([VERSION], [$2])], [_AM_SET_OPTIONS([$1])dnl AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl _AM_IF_OPTION([no-define],, [AC_DEFINE_UNQUOTED(PACKAGE, "$PACKAGE", [Name of package]) AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Version number of package])])dnl # Some tools Automake needs. 
AC_REQUIRE([AM_SANITY_CHECK])dnl AC_REQUIRE([AC_ARG_PROGRAM])dnl AM_MISSING_PROG(ACLOCAL, aclocal-${am__api_version}) AM_MISSING_PROG(AUTOCONF, autoconf) AM_MISSING_PROG(AUTOMAKE, automake-${am__api_version}) AM_MISSING_PROG(AUTOHEADER, autoheader) AM_MISSING_PROG(MAKEINFO, makeinfo) AM_MISSING_PROG(AMTAR, tar) AM_PROG_INSTALL_SH AM_PROG_INSTALL_STRIP # We need awk for the "check" target. The system "awk" is bad on # some platforms. AC_REQUIRE([AC_PROG_AWK])dnl AC_REQUIRE([AC_PROG_MAKE_SET])dnl AC_REQUIRE([AM_SET_LEADING_DOT])dnl _AM_IF_OPTION([no-dependencies],, [AC_PROVIDE_IFELSE([AC_PROG_CC], [_AM_DEPENDENCIES(CC)], [define([AC_PROG_CC], defn([AC_PROG_CC])[_AM_DEPENDENCIES(CC)])])dnl AC_PROVIDE_IFELSE([AC_PROG_CXX], [_AM_DEPENDENCIES(CXX)], [define([AC_PROG_CXX], defn([AC_PROG_CXX])[_AM_DEPENDENCIES(CXX)])])dnl ]) ]) # When config.status generates a header, we must update the stamp-h file. # This file resides in the same directory as the config header # that is generated. The stamp files are numbered to have different names. # Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the # loop where config.status creates the headers, so we can generate # our stamp files there. AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK], [# Compute $1's index in $config_headers. _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $1 | $1:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $1" >`AS_DIRNAME([$1])`/stamp-h[]$_am_stamp_count]) # Copyright 2002 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # AM_AUTOMAKE_VERSION(VERSION) # ---------------------------- # Automake X.Y traces this macro to ensure aclocal.m4 has been # generated from the m4 files accompanying Automake X.Y. AC_DEFUN([AM_AUTOMAKE_VERSION],[am__api_version="1.7"]) # AM_SET_CURRENT_AUTOMAKE_VERSION # ------------------------------- # Call AM_AUTOMAKE_VERSION so it can be traced. # This function is AC_REQUIREd by AC_INIT_AUTOMAKE. AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION], [AM_AUTOMAKE_VERSION([1.7.5])]) # Helper functions for option handling. -*- Autoconf -*- # Copyright 2001, 2002 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # 02111-1307, USA. # serial 2 # _AM_MANGLE_OPTION(NAME) # ----------------------- AC_DEFUN([_AM_MANGLE_OPTION], [[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])]) # _AM_SET_OPTION(NAME) # ------------------------------ # Set option NAME. Presently that only means defining a flag for this option. AC_DEFUN([_AM_SET_OPTION], [m4_define(_AM_MANGLE_OPTION([$1]), 1)]) # _AM_SET_OPTIONS(OPTIONS) # ---------------------------------- # OPTIONS is a space-separated list of Automake options. 
AC_DEFUN([_AM_SET_OPTIONS], [AC_FOREACH([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])]) # _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET]) # ------------------------------------------- # Execute IF-SET if OPTION is set, IF-NOT-SET otherwise. AC_DEFUN([_AM_IF_OPTION], [m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])]) # # Check to make sure that the build environment is sane. # # Copyright 1996, 1997, 2000, 2001 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # 02111-1307, USA. # serial 3 # AM_SANITY_CHECK # --------------- AC_DEFUN([AM_SANITY_CHECK], [AC_MSG_CHECKING([whether build environment is sane]) # Just in case sleep 1 echo timestamp > conftest.file # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` if test "$[*]" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftest.file` fi rm -f conftest.file if test "$[*]" != "X $srcdir/configure conftest.file" \ && test "$[*]" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. 
This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken alias in your environment]) fi test "$[2]" = conftest.file ) then # Ok. : else AC_MSG_ERROR([newly created file is older than distributed files! Check your system clock]) fi AC_MSG_RESULT(yes)]) # -*- Autoconf -*- # Copyright 1997, 1999, 2000, 2001 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # 02111-1307, USA. # serial 3 # AM_MISSING_PROG(NAME, PROGRAM) # ------------------------------ AC_DEFUN([AM_MISSING_PROG], [AC_REQUIRE([AM_MISSING_HAS_RUN]) $1=${$1-"${am_missing_run}$2"} AC_SUBST($1)]) # AM_MISSING_HAS_RUN # ------------------ # Define MISSING if not defined so far and test if it supports --run. # If it does, set am_missing_run to use it, otherwise, to nothing. AC_DEFUN([AM_MISSING_HAS_RUN], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl test x"${MISSING+set}" = xset || MISSING="\${SHELL} $am_aux_dir/missing" # Use eval to expand $SHELL if eval "$MISSING --run true"; then am_missing_run="$MISSING --run " else am_missing_run= AC_MSG_WARN([`missing' script is too old or missing]) fi ]) # AM_AUX_DIR_EXPAND # Copyright 2001 Free Software Foundation, Inc. 
# This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # 02111-1307, USA. # For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets # $ac_aux_dir to `$srcdir/foo'. In other projects, it is set to # `$srcdir', `$srcdir/..', or `$srcdir/../..'. # # Of course, Automake must honor this variable whenever it calls a # tool from the auxiliary directory. The problem is that $srcdir (and # therefore $ac_aux_dir as well) can be either absolute or relative, # depending on how configure is run. This is pretty annoying, since # it makes $ac_aux_dir quite unusable in subdirectories: in the top # source directory, any form will work fine, but in subdirectories a # relative path needs to be adjusted first. # # $ac_aux_dir/missing # fails when called from a subdirectory if $ac_aux_dir is relative # $top_srcdir/$ac_aux_dir/missing # fails if $ac_aux_dir is absolute, # fails when called from a subdirectory in a VPATH build with # a relative $ac_aux_dir # # The reason of the latter failure is that $top_srcdir and $ac_aux_dir # are both prefixed by $srcdir. In an in-source build this is usually # harmless because $srcdir is `.', but things will broke when you # start a VPATH build or use an absolute $srcdir. # # So we could use something similar to $top_srcdir/$ac_aux_dir/missing, # iff we strip the leading $srcdir from $ac_aux_dir. 
# That would be:
#   am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"`
# and then we would define $MISSING as
#   MISSING="\${SHELL} $am_aux_dir/missing"
# This will work as long as MISSING is not called from configure, because
# unfortunately $(top_srcdir) has no meaning in configure.
# However there are other variables, like CC, which are often used in
# configure, and could therefore not use this "fixed" $ac_aux_dir.
#
# Another solution, used here, is to always expand $ac_aux_dir to an
# absolute PATH.  The drawback is that using absolute paths prevents a
# configured tree from being moved without reconfiguration.

# Rely on autoconf to set up CDPATH properly.
AC_PREREQ([2.50])

AC_DEFUN([AM_AUX_DIR_EXPAND], [
# expand $ac_aux_dir to an absolute path
am_aux_dir=`cd $ac_aux_dir && pwd`
])

# AM_PROG_INSTALL_SH
# ------------------
# Define $install_sh.

# Copyright 2001 Free Software Foundation, Inc.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.

AC_DEFUN([AM_PROG_INSTALL_SH],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
install_sh=${install_sh-"$am_aux_dir/install-sh"}
AC_SUBST(install_sh)])

# AM_PROG_INSTALL_STRIP

# Copyright 2001 Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.

# One issue with vendor `install' (even GNU) is that you can't
# specify the program used to strip binaries.  This is especially
# annoying in cross-compiling environments, where the build's strip
# is unlikely to handle the host's binaries.
# Fortunately install-sh will honor a STRIPPROG variable, so we
# always use install-sh in `make install-strip', and initialize
# STRIPPROG with the value of the STRIP variable (set by the user).
AC_DEFUN([AM_PROG_INSTALL_STRIP],
[AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
# Installed binaries are usually stripped using `strip' when the user
# runs `make install-strip'.  However `strip' might not be the right
# tool to use in cross-compilation environments, therefore Automake
# will honor the `STRIP' environment variable to overrule this program.
dnl Don't test for $cross_compiling = yes, because it might be `maybe'.
if test "$cross_compiling" != no; then
  AC_CHECK_TOOL([STRIP], [strip], :)
fi
INSTALL_STRIP_PROGRAM="\${SHELL} \$(install_sh) -c -s"
AC_SUBST([INSTALL_STRIP_PROGRAM])])

# -*- Autoconf -*-
# Copyright (C) 2003  Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.

# serial 1

# Check whether the underlying file-system supports filenames
# with a leading dot.  For instance MS-DOS doesn't.
AC_DEFUN([AM_SET_LEADING_DOT],
[rm -rf .tst 2>/dev/null
mkdir .tst 2>/dev/null
if test -d .tst; then
  am__leading_dot=.
else
  am__leading_dot=_
fi
rmdir .tst 2>/dev/null
AC_SUBST([am__leading_dot])])

# serial 5                                      -*- Autoconf -*-

# Copyright (C) 1999, 2000, 2001, 2002, 2003 Free Software Foundation, Inc.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
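# (Editorial illustration, kept in comment form so the macro file stays
# valid m4; it is not part of the upstream automake sources.  The
# am__leading_dot value computed by AM_SET_LEADING_DOT above is consumed
# by AM_SET_DEPDIR further below, so on a file system that allows leading
# dots the dependency fragments land in `.deps/', while on one that does
# not (e.g. MS-DOS) they land in `_deps/'.  A hypothetical check from a
# configured tree on a dot-capable file system might look like:
#
#   $ grep '^DEPDIR' Makefile
#   DEPDIR = .deps
#
# The exact output shown is an assumption, not taken from this package.)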
# There are a few dirty hacks below to avoid letting `AC_PROG_CC' be
# written in clear, in which case automake, when reading aclocal.m4,
# will think it sees a *use*, and therefore will trigger all its
# C support machinery.  Also note that it means that autoscan, seeing
# CC etc. in the Makefile, will ask for an AC_PROG_CC use...


# _AM_DEPENDENCIES(NAME)
# ----------------------
# See how the compiler implements dependency checking.
# NAME is "CC", "CXX", "GCJ", or "OBJC".
# We try a few techniques and use that to set a single cache variable.
#
# We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was
# modified to invoke _AM_DEPENDENCIES(CC); we would have a circular
# dependency, and given that the user is not expected to run this macro,
# just rely on AC_PROG_CC.
AC_DEFUN([_AM_DEPENDENCIES],
[AC_REQUIRE([AM_SET_DEPDIR])dnl
AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl
AC_REQUIRE([AM_MAKE_INCLUDE])dnl
AC_REQUIRE([AM_DEP_TRACK])dnl

ifelse([$1], CC,   [depcc="$CC"   am_compiler_list=],
       [$1], CXX,  [depcc="$CXX"  am_compiler_list=],
       [$1], OBJC, [depcc="$OBJC" am_compiler_list='gcc3 gcc'],
       [$1], GCJ,  [depcc="$GCJ"  am_compiler_list='gcc3 gcc'],
                   [depcc="$$1"   am_compiler_list=])

AC_CACHE_CHECK([dependency style of $depcc],
               [am_cv_$1_dependencies_compiler_type],
[if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then
  # We make a subdir and do the tests there.  Otherwise we can end up
  # making bogus files that we don't know about and never remove.  For
  # instance it was reported that on HP-UX the gcc test will end up
  # making a dummy file named `D' -- because `-MD' means `put the output
  # in D'.
  mkdir conftest.dir
  # Copy depcomp to subdir because otherwise we won't find it if we're
  # using a relative directory.
cp "$am_depcomp" conftest.dir cd conftest.dir am_cv_$1_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp` fi for depmode in $am_compiler_list; do # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. echo '#include "conftest.h"' > conftest.c echo 'int i;' > conftest.h echo "${am__include} ${am__quote}conftest.Po${am__quote}" > confmf case $depmode in nosideeffect) # after this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; none) break ;; esac # We check with `-c' and `-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle `-M -o', and we need to detect this. if depmode=$depmode \ source=conftest.c object=conftest.o \ depfile=conftest.Po tmpdepfile=conftest.TPo \ $SHELL ./depcomp $depcc -c -o conftest.o conftest.c \ >/dev/null 2>conftest.err && grep conftest.h conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # (even with -Werror). So we grep stderr for any message # that says an option was ignored. if grep 'ignoring option' conftest.err >/dev/null 2>&1; then :; else am_cv_$1_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_$1_dependencies_compiler_type=none fi ]) AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type]) AM_CONDITIONAL([am__fastdep$1], [ test "x$enable_dependency_tracking" != xno \ && test "$am_cv_$1_dependencies_compiler_type" = gcc3]) ]) # AM_SET_DEPDIR # ------------- # Choose a directory name for dependency files. 
# This macro is AC_REQUIREd in _AM_DEPENDENCIES
AC_DEFUN([AM_SET_DEPDIR],
[AC_REQUIRE([AM_SET_LEADING_DOT])dnl
AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl
])


# AM_DEP_TRACK
# ------------
AC_DEFUN([AM_DEP_TRACK],
[AC_ARG_ENABLE(dependency-tracking,
[  --disable-dependency-tracking  Speeds up one-time builds
  --enable-dependency-tracking   Do not reject slow dependency extractors])
if test "x$enable_dependency_tracking" != xno; then
  am_depcomp="$ac_aux_dir/depcomp"
  AMDEPBACKSLASH='\'
fi
AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno])
AC_SUBST([AMDEPBACKSLASH])
])

# Generate code to set up dependency tracking.   -*- Autoconf -*-

# Copyright 1999, 2000, 2001, 2002 Free Software Foundation, Inc.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.

#serial 2

# _AM_OUTPUT_DEPENDENCY_COMMANDS
# ------------------------------
AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
[for mf in $CONFIG_FILES; do
  # Strip MF so we end up with the name of the file.
  mf=`echo "$mf" | sed -e 's/:.*$//'`
  # Check whether this is an Automake generated Makefile or not.
  # We used to match only the files named `Makefile.in', but
  # some people rename them; so instead we look at the file content.
  # Grep'ing the first line is not enough: some people post-process
  # each Makefile.in and add a new line on top of each file to say so.
  # So let's grep the whole file.
if grep '^#.*generated by automake' $mf > /dev/null 2>&1; then dirpart=`AS_DIRNAME("$mf")` else continue fi grep '^DEP_FILES *= *[[^ @%:@]]' < "$mf" > /dev/null || continue # Extract the definition of DEP_FILES from the Makefile without # running `make'. DEPDIR=`sed -n -e '/^DEPDIR = / s///p' < "$mf"` test -z "$DEPDIR" && continue # When using ansi2knr, U may be empty or an underscore; expand it U=`sed -n -e '/^U = / s///p' < "$mf"` test -d "$dirpart/$DEPDIR" || mkdir "$dirpart/$DEPDIR" # We invoke sed twice because it is the simplest approach to # changing $(DEPDIR) to its actual value in the expansion. for file in `sed -n -e ' /^DEP_FILES = .*\\\\$/ { s/^DEP_FILES = // :loop s/\\\\$// p n /\\\\$/ b loop p } /^DEP_FILES = / s/^DEP_FILES = //p' < "$mf" | \ sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`AS_DIRNAME(["$file"])` AS_MKDIR_P([$dirpart/$fdir]) # echo "creating $dirpart/$file" echo '# dummy' > "$dirpart/$file" done done ])# _AM_OUTPUT_DEPENDENCY_COMMANDS # AM_OUTPUT_DEPENDENCY_COMMANDS # ----------------------------- # This macro should only be invoked once -- use via AC_REQUIRE. # # This code is only required when automatic dependency tracking # is enabled. FIXME. This creates each `.P' file that we will # need in order to bootstrap the dependency handling code. AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS], [AC_CONFIG_COMMANDS([depfiles], [test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS], [AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"]) ]) # Check to see how 'make' treats includes. -*- Autoconf -*- # Copyright (C) 2001, 2002, 2003 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. 
# This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # 02111-1307, USA. # serial 2 # AM_MAKE_INCLUDE() # ----------------- # Check to see how make treats includes. AC_DEFUN([AM_MAKE_INCLUDE], [am_make=${MAKE-make} cat > confinc << 'END' am__doit: @echo done .PHONY: am__doit END # If we don't find an include directive, just comment out the code. AC_MSG_CHECKING([for style of include used by $am_make]) am__include="#" am__quote= _am_result=none # First try GNU make style include. echo "include confinc" > confmf # We grep out `Entering directory' and `Leaving directory' # messages which can occur if `w' ends up in MAKEFLAGS. # In particular we don't look at `^make:' because GNU make might # be invoked under some other name (usually "gmake"), in which # case it prints its new name instead of `make'. if test "`$am_make -s -f confmf 2> /dev/null | grep -v 'ing directory'`" = "done"; then am__include=include am__quote= _am_result=GNU fi # Now try BSD make style include. if test "$am__include" = "#"; then echo '.include "confinc"' > confmf if test "`$am_make -s -f confmf 2> /dev/null`" = "done"; then am__include=.include am__quote="\"" _am_result=BSD fi fi AC_SUBST([am__include]) AC_SUBST([am__quote]) AC_MSG_RESULT([$_am_result]) rm -f confinc confmf ]) # AM_CONDITIONAL -*- Autoconf -*- # Copyright 1997, 2000, 2001 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. 
# This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # 02111-1307, USA. # serial 5 AC_PREREQ(2.52) # AM_CONDITIONAL(NAME, SHELL-CONDITION) # ------------------------------------- # Define a conditional. AC_DEFUN([AM_CONDITIONAL], [ifelse([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])], [$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl AC_SUBST([$1_TRUE]) AC_SUBST([$1_FALSE]) if $2; then $1_TRUE= $1_FALSE='#' else $1_TRUE='#' $1_FALSE= fi AC_CONFIG_COMMANDS_PRE( [if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then AC_MSG_ERROR([conditional "$1" was never defined. Usually this means the macro was only invoked conditionally.]) fi])]) gccintro-1.0/configure0000775000175000017500000022730510046436620010566 #! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.57 for gccintro 1.0. # # Copyright 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001, 2002 # Free Software Foundation, Inc. # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. ## --------------------- ## ## M4sh Initialization. ## ## --------------------- ## # Be Bourne compatible if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' elif test -n "${BASH_VERSION+set}" && (set -o posix) >/dev/null 2>&1; then set -o posix fi # Support unset when possible. 
if (FOO=FOO; unset FOO) >/dev/null 2>&1; then as_unset=unset else as_unset=false fi # Work around bugs in pre-3.0 UWIN ksh. $as_unset ENV MAIL MAILPATH PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. for as_var in \ LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \ LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \ LC_TELEPHONE LC_TIME do if (set +x; test -n "`(eval $as_var=C; export $as_var) 2>&1`"); then eval $as_var=C; export $as_var else $as_unset $as_var fi done # Required to use basename. if expr a : '\(a\)' >/dev/null 2>&1; then as_expr=expr else as_expr=false fi if (basename /) >/dev/null 2>&1 && test "X`basename / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi # Name of the executable. as_me=`$as_basename "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)$' \| \ . : '\(.\)' 2>/dev/null || echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/; q; } /^X\/\(\/\/\)$/{ s//\1/; q; } /^X\/\(\/\).*/{ s//\1/; q; } s/.*/./; q'` # PATH needs CR, and LINENO needs CR and PATH. # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then echo "#! /bin/sh" >conf$$.sh echo "exit 0" >>conf$$.sh chmod +x conf$$.sh if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then PATH_SEPARATOR=';' else PATH_SEPARATOR=: fi rm -f conf$$.sh fi as_lineno_1=$LINENO as_lineno_2=$LINENO as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" || { # Find who we are. Look in the path if we contain no path at all # relative or not. case $0 in *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then { echo "$as_me: error: cannot find myself; rerun with an absolute path" >&2 { (exit 1); exit 1; }; } fi case $CONFIG_SHELL in '') as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for as_base in sh bash ksh sh5; do case $as_dir in /*) if ("$as_dir/$as_base" -c ' as_lineno_1=$LINENO as_lineno_2=$LINENO as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" ') 2>/dev/null; then $as_unset BASH_ENV || test "${BASH_ENV+set}" != set || { BASH_ENV=; export BASH_ENV; } $as_unset ENV || test "${ENV+set}" != set || { ENV=; export ENV; } CONFIG_SHELL=$as_dir/$as_base export CONFIG_SHELL exec "$CONFIG_SHELL" "$0" ${1+"$@"} fi;; esac done done ;; esac # Create $as_me.lineno as a copy of $as_myself, but with $LINENO # uniformly replaced by the line number. The first 'sed' inserts a # line-number line before each line; the second 'sed' does the real # work. The second script uses 'N' to pair each line-number line # with the numbered line, and appends trailing '-' during # substitution so that $LINENO is not a special case at line end. # (Raja R Harinath suggested sed '=', and Paul Eggert wrote the # second 'sed' script. Blame Lee E. McMahon for sed's syntax. 
:-) sed '=' <$as_myself | sed ' N s,$,-, : loop s,^\(['$as_cr_digits']*\)\(.*\)[$]LINENO\([^'$as_cr_alnum'_]\),\1\2\1\3, t loop s,-$,, s,^['$as_cr_digits']*\n,, ' >$as_me.lineno && chmod +x $as_me.lineno || { echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2 { (exit 1); exit 1; }; } # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensible to this). . ./$as_me.lineno # Exit status is that of the last command. exit } case `echo "testing\c"; echo 1,2,3`,`echo -n testing; echo 1,2,3` in *c*,-n*) ECHO_N= ECHO_C=' ' ECHO_T=' ' ;; *c*,* ) ECHO_N=-n ECHO_C= ECHO_T= ;; *) ECHO_N= ECHO_C='\c' ECHO_T= ;; esac if expr a : '\(a\)' >/dev/null 2>&1; then as_expr=expr else as_expr=false fi rm -f conf$$ conf$$.exe conf$$.file echo >conf$$.file if ln -s conf$$.file conf$$ 2>/dev/null; then # We could just check for DJGPP; but this test a) works b) is more generic # and c) will remain valid once DJGPP supports symlinks (DJGPP 2.04). if test -f conf$$.exe; then # Don't use ln at all; we don't have any links as_ln_s='cp -p' else as_ln_s='ln -s' fi elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -p' fi rm -f conf$$ conf$$.exe conf$$.file if mkdir -p . 2>/dev/null; then as_mkdir_p=: else as_mkdir_p=false fi as_executable_p="test -f" # Sed expression to map a string onto a valid CPP name. as_tr_cpp="sed y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g" # Sed expression to map a string onto a valid variable name. as_tr_sh="sed y%*+%pp%;s%[^_$as_cr_alnum]%_%g" # IFS # We need space, tab and new line, in precisely that order. as_nl=' ' IFS=" $as_nl" # CDPATH. $as_unset CDPATH # Name of the host. # hostname on some systems (SVR3.2, Linux) returns a bogus exit status, # so uname gets run too. ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q` exec 6>&1 # # Initializations. 
# ac_default_prefix=/usr/local ac_config_libobj_dir=. cross_compiling=no subdirs= MFLAGS= MAKEFLAGS= SHELL=${CONFIG_SHELL-/bin/sh} # Maximum number of lines to put in a shell here document. # This variable seems obsolete. It should probably be removed, and # only ac_max_sed_lines should be used. : ${ac_max_here_lines=38} # Identity of this package. PACKAGE_NAME='gccintro' PACKAGE_TARNAME='gccintro' PACKAGE_VERSION='1.0' PACKAGE_STRING='gccintro 1.0' PACKAGE_BUGREPORT='' ac_unique_file="gccintro.texi" ac_subst_vars='SHELL PATH_SEPARATOR PACKAGE_NAME PACKAGE_TARNAME PACKAGE_VERSION PACKAGE_STRING PACKAGE_BUGREPORT exec_prefix prefix program_transform_name bindir sbindir libexecdir datadir sysconfdir sharedstatedir localstatedir libdir includedir oldincludedir infodir mandir build_alias host_alias target_alias DEFS ECHO_C ECHO_N ECHO_T LIBS INSTALL_PROGRAM INSTALL_SCRIPT INSTALL_DATA CYGPATH_W PACKAGE VERSION ACLOCAL AUTOCONF AUTOMAKE AUTOHEADER MAKEINFO AMTAR install_sh STRIP ac_ct_STRIP INSTALL_STRIP_PROGRAM AWK SET_MAKE am__leading_dot LIBOBJS LTLIBOBJS' ac_subst_files='' # Initialize some variables set by options. ac_init_help= ac_init_version=false # The variables have the same names as the options, with # dashes changed to underlines. cache_file=/dev/null exec_prefix=NONE no_create= no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= verbose= x_includes=NONE x_libraries=NONE # Installation directory options. # These are left unexpanded so users can "make install exec_prefix=/foo" # and all the variables that are supposed to be based on exec_prefix # by default will actually change. # Use braces instead of parens because sh, perl, etc. also accept them. 
bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datadir='${prefix}/share' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' libdir='${exec_prefix}/lib' includedir='${prefix}/include' oldincludedir='/usr/include' infodir='${prefix}/info' mandir='${prefix}/man' ac_prev= for ac_option do # If the previous option needs an argument, assign it. if test -n "$ac_prev"; then eval "$ac_prev=\$ac_option" ac_prev= continue fi ac_optarg=`expr "x$ac_option" : 'x[^=]*=\(.*\)'` # Accept the important Cygnus configure options, so we can diagnose typos. case $ac_option in -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir=$ac_optarg ;; -build | --build | --buil | --bui | --bu) ac_prev=build_alias ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build_alias=$ac_optarg ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file=$ac_optarg ;; --config-cache | -C) cache_file=config.cache ;; -datadir | --datadir | --datadi | --datad | --data | --dat | --da) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=* | --data=* | --dat=* \ | --da=*) datadir=$ac_optarg ;; -disable-* | --disable-*) ac_feature=`expr "x$ac_option" : 'x-*disable-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_feature" : ".*[^-_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid feature name: $ac_feature" >&2 { (exit 1); exit 1; }; } ac_feature=`echo $ac_feature | sed 's/-/_/g'` eval "enable_$ac_feature=no" ;; -enable-* | --enable-*) ac_feature=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'` # Reject names that are not valid shell variable names. 
expr "x$ac_feature" : ".*[^-_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid feature name: $ac_feature" >&2 { (exit 1); exit 1; }; } ac_feature=`echo $ac_feature | sed 's/-/_/g'` case $ac_option in *=*) ac_optarg=`echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"`;; *) ac_optarg=yes ;; esac eval "enable_$ac_feature='$ac_optarg'" ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix=$ac_optarg ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. with_gas=yes ;; -help | --help | --hel | --he | -h) ac_init_help=long ;; -help=r* | --help=r* | --hel=r* | --he=r* | -hr*) ac_init_help=recursive ;; -help=s* | --help=s* | --hel=s* | --he=s* | -hs*) ac_init_help=short ;; -host | --host | --hos | --ho) ac_prev=host_alias ;; -host=* | --host=* | --hos=* | --ho=*) host_alias=$ac_optarg ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir=$ac_optarg ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir=$ac_optarg ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir=$ac_optarg ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir=$ac_optarg ;; -localstatedir | --localstatedir | --localstatedi | 
--localstated \ | --localstate | --localstat | --localsta | --localst \ | --locals | --local | --loca | --loc | --lo) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* \ | --locals=* | --local=* | --loca=* | --loc=* | --lo=*) localstatedir=$ac_optarg ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*) mandir=$ac_optarg ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c | -n) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir=$ac_optarg ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix=$ac_optarg ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix=$ac_optarg ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* 
| --program-suf=* | --program-su=* | --program-s=*) program_suffix=$ac_optarg ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | --program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name=$ac_optarg ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir=$ac_optarg ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir=$ac_optarg ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site=$ac_optarg ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir=$ac_optarg ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* 
| --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir=$ac_optarg ;; -target | --target | --targe | --targ | --tar | --ta | --t) ac_prev=target_alias ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target_alias=$ac_optarg ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers | -V) ac_init_version=: ;; -with-* | --with-*) ac_package=`expr "x$ac_option" : 'x-*with-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_package" : ".*[^-_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid package name: $ac_package" >&2 { (exit 1); exit 1; }; } ac_package=`echo $ac_package| sed 's/-/_/g'` case $ac_option in *=*) ac_optarg=`echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"`;; *) ac_optarg=yes ;; esac eval "with_$ac_package='$ac_optarg'" ;; -without-* | --without-*) ac_package=`expr "x$ac_option" : 'x-*without-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_package" : ".*[^-_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid package name: $ac_package" >&2 { (exit 1); exit 1; }; } ac_package=`echo $ac_package | sed 's/-/_/g'` eval "with_$ac_package=no" ;; --x) # Obsolete; use --with-x. 
with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes=$ac_optarg ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries=$ac_optarg ;; -*) { echo "$as_me: error: unrecognized option: $ac_option Try \`$0 --help' for more information." >&2 { (exit 1); exit 1; }; } ;; *=*) ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='` # Reject names that are not valid shell variable names. expr "x$ac_envvar" : ".*[^_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid variable name: $ac_envvar" >&2 { (exit 1); exit 1; }; } ac_optarg=`echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` eval "$ac_envvar='$ac_optarg'" export $ac_envvar ;; *) # FIXME: should be removed in autoconf 3.0. echo "$as_me: WARNING: you should use --build, --host, --target" >&2 expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null && echo "$as_me: WARNING: invalid host type: $ac_option" >&2 : ${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option} ;; esac done if test -n "$ac_prev"; then ac_option=--`echo $ac_prev | sed 's/_/-/g'` { echo "$as_me: error: missing argument to $ac_option" >&2 { (exit 1); exit 1; }; } fi # Be sure to have absolute paths. for ac_var in exec_prefix prefix do eval ac_val=$`echo $ac_var` case $ac_val in [\\/$]* | ?:[\\/]* | NONE | '' ) ;; *) { echo "$as_me: error: expected an absolute directory name for --$ac_var: $ac_val" >&2 { (exit 1); exit 1; }; };; esac done # Be sure to have absolute paths. 
for ac_var in bindir sbindir libexecdir datadir sysconfdir sharedstatedir \ localstatedir libdir includedir oldincludedir infodir mandir do eval ac_val=$`echo $ac_var` case $ac_val in [\\/$]* | ?:[\\/]* ) ;; *) { echo "$as_me: error: expected an absolute directory name for --$ac_var: $ac_val" >&2 { (exit 1); exit 1; }; };; esac done # There might be people who depend on the old broken behavior: `$host' # used to hold the argument of --host etc. # FIXME: To remove some day. build=$build_alias host=$host_alias target=$target_alias # FIXME: To remove some day. if test "x$host_alias" != x; then if test "x$build_alias" = x; then cross_compiling=maybe echo "$as_me: WARNING: If you wanted to set the --build type, don't use --host. If a cross compiler is detected then cross compile mode will be used." >&2 elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi fi ac_tool_prefix= test -n "$host_alias" && ac_tool_prefix=$host_alias- test "$silent" = yes && exec 6>/dev/null # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then its parent. ac_confdir=`(dirname "$0") 2>/dev/null || $as_expr X"$0" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$0" : 'X\(//\)[^/]' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| \ . : '\(.\)' 2>/dev/null || echo X"$0" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; } /^X\(\/\/\)[^/].*/{ s//\1/; q; } /^X\(\/\/\)$/{ s//\1/; q; } /^X\(\/\).*/{ s//\1/; q; } s/.*/./; q'` srcdir=$ac_confdir if test ! -r $srcdir/$ac_unique_file; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r $srcdir/$ac_unique_file; then if test "$ac_srcdir_defaulted" = yes; then { echo "$as_me: error: cannot find sources ($ac_unique_file) in $ac_confdir or .." 
>&2 { (exit 1); exit 1; }; } else { echo "$as_me: error: cannot find sources ($ac_unique_file) in $srcdir" >&2 { (exit 1); exit 1; }; } fi fi (cd $srcdir && test -r ./$ac_unique_file) 2>/dev/null || { echo "$as_me: error: sources are in $srcdir, but \`cd $srcdir' does not work" >&2 { (exit 1); exit 1; }; } srcdir=`echo "$srcdir" | sed 's%\([^\\/]\)[\\/]*$%\1%'` ac_env_build_alias_set=${build_alias+set} ac_env_build_alias_value=$build_alias ac_cv_env_build_alias_set=${build_alias+set} ac_cv_env_build_alias_value=$build_alias ac_env_host_alias_set=${host_alias+set} ac_env_host_alias_value=$host_alias ac_cv_env_host_alias_set=${host_alias+set} ac_cv_env_host_alias_value=$host_alias ac_env_target_alias_set=${target_alias+set} ac_env_target_alias_value=$target_alias ac_cv_env_target_alias_set=${target_alias+set} ac_cv_env_target_alias_value=$target_alias # # Report the --help message. # if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF \`configure' configures gccintro 1.0 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. See below for descriptions of some of the useful variables. Defaults for the options are specified in brackets. Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print \`checking...' 
messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for \`--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or \`..'] _ACEOF cat <<_ACEOF Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, \`make install' will install all the files in \`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify an installation prefix other than \`$ac_default_prefix' using \`--prefix', for instance \`--prefix=\$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] --datadir=DIR read-only architecture-independent data [PREFIX/share] --sysconfdir=DIR read-only single-machine data [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --libdir=DIR object code libraries [EPREFIX/lib] --includedir=DIR C header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --infodir=DIR info documentation [PREFIX/info] --mandir=DIR man documentation [PREFIX/man] _ACEOF cat <<\_ACEOF Program names: --program-prefix=PREFIX prepend PREFIX to installed program names --program-suffix=SUFFIX append SUFFIX to installed program names --program-transform-name=PROGRAM run sed PROGRAM on installed program names _ACEOF fi if test -n "$ac_init_help"; then case $ac_init_help in short | recursive ) echo "Configuration of gccintro 1.0:";; esac cat <<\_ACEOF _ACEOF fi if test "$ac_init_help" = "recursive"; then # If there are subdirs, report their specific --help. 
ac_popdir=`pwd` for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue test -d $ac_dir || continue ac_builddir=. if test "$ac_dir" != .; then ac_dir_suffix=/`echo "$ac_dir" | sed 's,^\.[\\/],,'` # A "../" for each directory in $ac_dir_suffix. ac_top_builddir=`echo "$ac_dir_suffix" | sed 's,/[^\\/]*,../,g'` else ac_dir_suffix= ac_top_builddir= fi case $srcdir in .) # No --srcdir option. We are building in place. ac_srcdir=. if test -z "$ac_top_builddir"; then ac_top_srcdir=. else ac_top_srcdir=`echo $ac_top_builddir | sed 's,/$,,'` fi ;; [\\/]* | ?:[\\/]* ) # Absolute path. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ;; *) # Relative path. ac_srcdir=$ac_top_builddir$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_builddir$srcdir ;; esac # Don't blindly perform a `cd "$ac_dir"/$ac_foo && pwd` since $ac_foo can be # absolute. ac_abs_builddir=`cd "$ac_dir" && cd $ac_builddir && pwd` ac_abs_top_builddir=`cd "$ac_dir" && cd ${ac_top_builddir}. && pwd` ac_abs_srcdir=`cd "$ac_dir" && cd $ac_srcdir && pwd` ac_abs_top_srcdir=`cd "$ac_dir" && cd $ac_top_srcdir && pwd` cd $ac_dir # Check for guested configure; otherwise get Cygnus style configure. if test -f $ac_srcdir/configure.gnu; then echo $SHELL $ac_srcdir/configure.gnu --help=recursive elif test -f $ac_srcdir/configure; then echo $SHELL $ac_srcdir/configure --help=recursive elif test -f $ac_srcdir/configure.ac || test -f $ac_srcdir/configure.in; then echo $ac_configure --help else echo "$as_me: WARNING: no configuration information is in $ac_dir" >&2 fi cd $ac_popdir done fi test -n "$ac_init_help" && exit 0 if $ac_init_version; then cat <<\_ACEOF gccintro configure 1.0 generated by GNU Autoconf 2.57 Copyright 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001, 2002 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. 
_ACEOF exit 0 fi exec 5>config.log cat >&5 <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by gccintro $as_me 1.0, which was generated by GNU Autoconf 2.57. Invocation command line was $ $0 $@ _ACEOF { cat <<_ASUNAME ## --------- ## ## Platform. ## ## --------- ## hostname = `(hostname || uname -n) 2>/dev/null | sed 1q` uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown` /bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown` /bin/arch = `(/bin/arch) 2>/dev/null || echo unknown` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown` hostinfo = `(hostinfo) 2>/dev/null || echo unknown` /bin/machine = `(/bin/machine) 2>/dev/null || echo unknown` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown` /bin/universe = `(/bin/universe) 2>/dev/null || echo unknown` _ASUNAME as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. echo "PATH: $as_dir" done } >&5 cat >&5 <<_ACEOF ## ----------- ## ## Core tests. ## ## ----------- ## _ACEOF # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. # Strip out --silent because we don't want to record it for future runs. # Also quote any args containing shell meta-characters. # Make two passes to allow for proper duplicate-argument suppression. 
ac_configure_args= ac_configure_args0= ac_configure_args1= ac_sep= ac_must_keep_next=false for ac_pass in 1 2 do for ac_arg do case $ac_arg in -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) continue ;; *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?\"\']*) ac_arg=`echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac case $ac_pass in 1) ac_configure_args0="$ac_configure_args0 '$ac_arg'" ;; 2) ac_configure_args1="$ac_configure_args1 '$ac_arg'" if test $ac_must_keep_next = true; then ac_must_keep_next=false # Got value, back to normal. else case $ac_arg in *=* | --config-cache | -C | -disable-* | --disable-* \ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ | -with-* | --with-* | -without-* | --without-* | --x) case "$ac_configure_args0 " in "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; esac ;; -* ) ac_must_keep_next=true ;; esac fi ac_configure_args="$ac_configure_args$ac_sep'$ac_arg'" # Get rid of the leading space. ac_sep=" " ;; esac done done $as_unset ac_configure_args0 || test "${ac_configure_args0+set}" != set || { ac_configure_args0=; export ac_configure_args0; } $as_unset ac_configure_args1 || test "${ac_configure_args1+set}" != set || { ac_configure_args1=; export ac_configure_args1; } # When interrupted or exit'd, cleanup temporary files, and complete # config.log. We remove comments because anyway the quotes in there # would cause problems or look ugly. # WARNING: Be sure not to use single quotes in there, as some shells, # such as our DU 5.0 friend, will then `close' the trap. trap 'exit_status=$? # Save into config.log some information that might help in debugging. { echo cat <<\_ASBOX ## ---------------- ## ## Cache variables. 
## ## ---------------- ## _ASBOX echo # The following way of writing the cache mishandles newlines in values, { (set) 2>&1 | case `(ac_space='"'"' '"'"'; set | grep ac_space) 2>&1` in *ac_space=\ *) sed -n \ "s/'"'"'/'"'"'\\\\'"'"''"'"'/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='"'"'\\2'"'"'/p" ;; *) sed -n \ "s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1=\\2/p" ;; esac; } echo cat <<\_ASBOX ## ----------------- ## ## Output variables. ## ## ----------------- ## _ASBOX echo for ac_var in $ac_subst_vars do eval ac_val=$`echo $ac_var` echo "$ac_var='"'"'$ac_val'"'"'" done | sort echo if test -n "$ac_subst_files"; then cat <<\_ASBOX ## ------------- ## ## Output files. ## ## ------------- ## _ASBOX echo for ac_var in $ac_subst_files do eval ac_val=$`echo $ac_var` echo "$ac_var='"'"'$ac_val'"'"'" done | sort echo fi if test -s confdefs.h; then cat <<\_ASBOX ## ----------- ## ## confdefs.h. ## ## ----------- ## _ASBOX echo sed "/^$/d" confdefs.h | sort echo fi test "$ac_signal" != 0 && echo "$as_me: caught signal $ac_signal" echo "$as_me: exit $exit_status" } >&5 rm -f core core.* *.core && rm -rf conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 for ac_signal in 1 2 13 15; do trap 'ac_signal='$ac_signal'; { (exit 1); exit 1; }' $ac_signal done ac_signal=0 # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -rf conftest* confdefs.h # AIX cpp loses on an empty file, so make sure it contains at least a newline. echo >confdefs.h # Predefined preprocessor variables. cat >>confdefs.h <<_ACEOF #define PACKAGE_NAME "$PACKAGE_NAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_TARNAME "$PACKAGE_TARNAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_VERSION "$PACKAGE_VERSION" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_STRING "$PACKAGE_STRING" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_BUGREPORT "$PACKAGE_BUGREPORT" _ACEOF # Let the site file select an alternate cache file if it wants to. 
# Prefer explicitly selected file to automatically selected ones. if test -z "$CONFIG_SITE"; then if test "x$prefix" != xNONE; then CONFIG_SITE="$prefix/share/config.site $prefix/etc/config.site" else CONFIG_SITE="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site" fi fi for ac_site_file in $CONFIG_SITE; do if test -r "$ac_site_file"; then { echo "$as_me:$LINENO: loading site script $ac_site_file" >&5 echo "$as_me: loading site script $ac_site_file" >&6;} sed 's/^/| /' "$ac_site_file" >&5 . "$ac_site_file" fi done if test -r "$cache_file"; then # Some versions of bash will fail to source /dev/null (special # files actually), so we avoid doing that. if test -f "$cache_file"; then { echo "$as_me:$LINENO: loading cache $cache_file" >&5 echo "$as_me: loading cache $cache_file" >&6;} case $cache_file in [\\/]* | ?:[\\/]* ) . $cache_file;; *) . ./$cache_file;; esac fi else { echo "$as_me:$LINENO: creating cache $cache_file" >&5 echo "$as_me: creating cache $cache_file" >&6;} >$cache_file fi # Check that the precious variables saved in the cache have kept the same # value. 
ac_cache_corrupted=false for ac_var in `(set) 2>&1 | sed -n 's/^ac_env_\([a-zA-Z_0-9]*\)_set=.*/\1/p'`; do eval ac_old_set=\$ac_cv_env_${ac_var}_set eval ac_new_set=\$ac_env_${ac_var}_set eval ac_old_val="\$ac_cv_env_${ac_var}_value" eval ac_new_val="\$ac_env_${ac_var}_value" case $ac_old_set,$ac_new_set in set,) { echo "$as_me:$LINENO: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5 echo "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;} ac_cache_corrupted=: ;; ,set) { echo "$as_me:$LINENO: error: \`$ac_var' was not set in the previous run" >&5 echo "$as_me: error: \`$ac_var' was not set in the previous run" >&2;} ac_cache_corrupted=: ;; ,);; *) if test "x$ac_old_val" != "x$ac_new_val"; then { echo "$as_me:$LINENO: error: \`$ac_var' has changed since the previous run:" >&5 echo "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;} { echo "$as_me:$LINENO: former value: $ac_old_val" >&5 echo "$as_me: former value: $ac_old_val" >&2;} { echo "$as_me:$LINENO: current value: $ac_new_val" >&5 echo "$as_me: current value: $ac_new_val" >&2;} ac_cache_corrupted=: fi;; esac # Pass precious variables to config.status. if test "$ac_new_set" = set; then case $ac_new_val in *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?\"\']*) ac_arg=$ac_var=`echo "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;; *) ac_arg=$ac_var=$ac_new_val ;; esac case " $ac_configure_args " in *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. 
*) ac_configure_args="$ac_configure_args '$ac_arg'" ;; esac fi done if $ac_cache_corrupted; then { echo "$as_me:$LINENO: error: changes in the environment can compromise the build" >&5 echo "$as_me: error: changes in the environment can compromise the build" >&2;} { { echo "$as_me:$LINENO: error: run \`make distclean' and/or \`rm $cache_file' and start over" >&5 echo "$as_me: error: run \`make distclean' and/or \`rm $cache_file' and start over" >&2;} { (exit 1); exit 1; }; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu am__api_version="1.7" ac_aux_dir= for ac_dir in $srcdir $srcdir/.. $srcdir/../..; do if test -f $ac_dir/install-sh; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install-sh -c" break elif test -f $ac_dir/install.sh; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install.sh -c" break elif test -f $ac_dir/shtool; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/shtool install -c" break fi done if test -z "$ac_aux_dir"; then { { echo "$as_me:$LINENO: error: cannot find install-sh or install.sh in $srcdir $srcdir/.. $srcdir/../.." >&5 echo "$as_me: error: cannot find install-sh or install.sh in $srcdir $srcdir/.. $srcdir/../.." >&2;} { (exit 1); exit 1; }; } fi ac_config_guess="$SHELL $ac_aux_dir/config.guess" ac_config_sub="$SHELL $ac_aux_dir/config.sub" ac_configure="$SHELL $ac_aux_dir/configure" # This should be Cygnus configure. # Find a good install program. We prefer a C program (faster), # so one script is as good as another. 
But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AmigaOS /C/install, which installs bootblocks on floppy discs # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # ./install, which can be erroneously created by make from ./install.sh. echo "$as_me:$LINENO: checking for a BSD-compatible install" >&5 echo $ECHO_N "checking for a BSD-compatible install... $ECHO_C" >&6 if test -z "$INSTALL"; then if test "${ac_cv_path_install+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. # Account for people who put trailing slashes in PATH elements. case $as_dir/ in ./ | .// | /cC/* | \ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ /usr/ucb/* ) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$ac_prog$ac_exec_ext"; then if test $ac_prog = install && grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. : elif test $ac_prog = install && grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # program-specific install script used by HP pwplus--don't use. : else ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c" break 3 fi fi done done ;; esac done fi if test "${ac_cv_path_install+set}" = set; then INSTALL=$ac_cv_path_install else # As a last resort, use the slow shell script. 
We don't cache a # path for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the path is relative. INSTALL=$ac_install_sh fi fi echo "$as_me:$LINENO: result: $INSTALL" >&5 echo "${ECHO_T}$INSTALL" >&6 # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' echo "$as_me:$LINENO: checking whether build environment is sane" >&5 echo $ECHO_N "checking whether build environment is sane... $ECHO_C" >&6 # Just in case sleep 1 echo timestamp > conftest.file # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5 echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;} { (exit 1); exit 1; }; } fi test "$2" = conftest.file ) then # Ok. : else { { echo "$as_me:$LINENO: error: newly created file is older than distributed files! 
Check your system clock" >&5 echo "$as_me: error: newly created file is older than distributed files! Check your system clock" >&2;} { (exit 1); exit 1; }; } fi echo "$as_me:$LINENO: result: yes" >&5 echo "${ECHO_T}yes" >&6 test "$program_prefix" != NONE && program_transform_name="s,^,$program_prefix,;$program_transform_name" # Use a double $ so make ignores it. test "$program_suffix" != NONE && program_transform_name="s,\$,$program_suffix,;$program_transform_name" # Double any \ or $. echo might interpret backslashes. # By default was `s,x,x', remove it if useless. cat <<\_ACEOF >conftest.sed s/[\\$]/&&/g;s/;s,x,x,$// _ACEOF program_transform_name=`echo $program_transform_name | sed -f conftest.sed` rm conftest.sed # expand $ac_aux_dir to an absolute path am_aux_dir=`cd $ac_aux_dir && pwd` test x"${MISSING+set}" = xset || MISSING="\${SHELL} $am_aux_dir/missing" # Use eval to expand $SHELL if eval "$MISSING --run true"; then am_missing_run="$MISSING --run " else am_missing_run= { echo "$as_me:$LINENO: WARNING: \`missing' script is too old or missing" >&5 echo "$as_me: WARNING: \`missing' script is too old or missing" >&2;} fi for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 echo "$as_me:$LINENO: checking for $ac_word" >&5 echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6 if test "${ac_cv_prog_AWK+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do
  if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
    ac_cv_prog_AWK="$ac_prog"
    echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
    break 2
  fi
done
done

fi
fi
AWK=$ac_cv_prog_AWK
if test -n "$AWK"; then
  echo "$as_me:$LINENO: result: $AWK" >&5
echo "${ECHO_T}$AWK" >&6
else
  echo "$as_me:$LINENO: result: no" >&5
echo "${ECHO_T}no" >&6
fi

  test -n "$AWK" && break
done

echo "$as_me:$LINENO: checking whether ${MAKE-make} sets \$(MAKE)" >&5
echo $ECHO_N "checking whether ${MAKE-make} sets \$(MAKE)... $ECHO_C" >&6
set dummy ${MAKE-make}; ac_make=`echo "$2" | sed 'y,./+-,__p_,'`
if eval "test \"\${ac_cv_prog_make_${ac_make}_set+set}\" = set"; then
  echo $ECHO_N "(cached) $ECHO_C" >&6
else
  cat >conftest.make <<\_ACEOF
all:
	@echo 'ac_maketemp="$(MAKE)"'
_ACEOF
# GNU make sometimes prints "make[1]: Entering...", which would confuse us.
eval `${MAKE-make} -f conftest.make 2>/dev/null | grep temp=`
if test -n "$ac_maketemp"; then
  eval ac_cv_prog_make_${ac_make}_set=yes
else
  eval ac_cv_prog_make_${ac_make}_set=no
fi
rm -f conftest.make
fi
if eval "test \"`echo '$ac_cv_prog_make_'${ac_make}_set`\" = yes"; then
  echo "$as_me:$LINENO: result: yes" >&5
echo "${ECHO_T}yes" >&6
  SET_MAKE=
else
  echo "$as_me:$LINENO: result: no" >&5
echo "${ECHO_T}no" >&6
  SET_MAKE="MAKE=${MAKE-make}"
fi

rm -rf .tst 2>/dev/null
mkdir .tst 2>/dev/null
if test -d .tst; then
  am__leading_dot=.
else am__leading_dot=_ fi rmdir .tst 2>/dev/null # test to see if srcdir already configured if test "`cd $srcdir && pwd`" != "`pwd`" && test -f $srcdir/config.status; then { { echo "$as_me:$LINENO: error: source directory already configured; run \"make distclean\" there first" >&5 echo "$as_me: error: source directory already configured; run \"make distclean\" there first" >&2;} { (exit 1); exit 1; }; } fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi # Define the identity of the package. PACKAGE='gccintro' VERSION='1.0' cat >>confdefs.h <<_ACEOF #define PACKAGE "$PACKAGE" _ACEOF cat >>confdefs.h <<_ACEOF #define VERSION "$VERSION" _ACEOF # Some tools Automake needs. ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"} AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"} AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"} AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"} MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} AMTAR=${AMTAR-"${am_missing_run}tar"} install_sh=${install_sh-"$am_aux_dir/install-sh"} # Installed binaries are usually stripped using `strip' when the user # run `make install-strip'. However `strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the `STRIP' environment variable to overrule this program. if test "$cross_compiling" != no; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 echo "$as_me:$LINENO: checking for $ac_word" >&5 echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6 if test "${ac_cv_prog_STRIP+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. 
else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then echo "$as_me:$LINENO: result: $STRIP" >&5 echo "${ECHO_T}$STRIP" >&6 else echo "$as_me:$LINENO: result: no" >&5 echo "${ECHO_T}no" >&6 fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 echo "$as_me:$LINENO: checking for $ac_word" >&5 echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6 if test "${ac_cv_prog_ac_ct_STRIP+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done test -z "$ac_cv_prog_ac_ct_STRIP" && ac_cv_prog_ac_ct_STRIP=":" fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then echo "$as_me:$LINENO: result: $ac_ct_STRIP" >&5 echo "${ECHO_T}$ac_ct_STRIP" >&6 else echo "$as_me:$LINENO: result: no" >&5 echo "${ECHO_T}no" >&6 fi STRIP=$ac_ct_STRIP else STRIP="$ac_cv_prog_STRIP" fi fi INSTALL_STRIP_PROGRAM="\${SHELL} \$(install_sh) -c -s" # We need awk for the "check" target. The system "awk" is bad on # some platforms. 
ac_config_files="$ac_config_files Makefile" cat >confcache <<\_ACEOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs, see configure's option --config-cache. # It is not useful on other systems. If it contains results you don't # want to keep, you may remove or edit it. # # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. # # `ac_cv_env_foo' variables (set or unset) will be overridden when # loading this file, other *unset* `ac_cv_foo' will be assigned the # following values. _ACEOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, don't put newlines in cache variables' values. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. { (set) 2>&1 | case `(ac_space=' '; set | grep ac_space) 2>&1` in *ac_space=\ *) # `set' does not quote correctly, so add quotes (double-quote # substitution turns \\\\ into \\, and sed turns \\ into \). sed -n \ "s/'/'\\\\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p" ;; *) # `set' quotes correctly as required by POSIX, so do not add quotes. sed -n \ "s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1=\\2/p" ;; esac; } | sed ' t clear : clear s/^\([^=]*\)=\(.*[{}].*\)$/test "${\1+set}" = set || &/ t end /^ac_cv_env/!s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ : end' >>confcache if diff $cache_file confcache >/dev/null 2>&1; then :; else if test -w $cache_file; then test "x$cache_file" != "x/dev/null" && echo "updating cache $cache_file" cat confcache >$cache_file else echo "not updating unwritable cache $cache_file" fi fi rm -f confcache test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. 
test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' # VPATH may cause trouble with some makes, so we remove $(srcdir), # ${srcdir} and @srcdir@ from VPATH if srcdir is ".", strip leading and # trailing colons and then remove the whole line if VPATH becomes empty # (actually we leave an empty line to preserve line numbers). if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=/{ s/:*\$(srcdir):*/:/; s/:*\${srcdir}:*/:/; s/:*@srcdir@:*/:/; s/^\([^=]*=[ ]*\):*/\1/; s/:*$//; s/^[^=]*=[ ]*$//; }' fi # Transform confdefs.h into DEFS. # Protect against shell expansion while executing Makefile rules. # Protect against Makefile macro expansion. # # If the first sed substitution is executed (which looks for macros that # take arguments), then we branch to the quote section. Otherwise, # look for a macro that doesn't take arguments. cat >confdef2opt.sed <<\_ACEOF t clear : clear s,^[ ]*#[ ]*define[ ][ ]*\([^ (][^ (]*([^)]*)\)[ ]*\(.*\),-D\1=\2,g t quote s,^[ ]*#[ ]*define[ ][ ]*\([^ ][^ ]*\)[ ]*\(.*\),-D\1=\2,g t quote d : quote s,[ `~#$^&*(){}\\|;'"<>?],\\&,g s,\[,\\&,g s,\],\\&,g s,\$,$$,g p _ACEOF # We use echo to avoid assuming a particular line-breaking character. # The extra dot is to prevent the shell from consuming trailing # line-breaks from the sub-command output. A line-break within # single-quotes doesn't work because, if this script is created in a # platform that uses two characters for line-breaks (e.g., DOS), tr # would break. ac_LF_and_DOT=`echo; echo .` DEFS=`sed -n -f confdef2opt.sed confdefs.h | tr "$ac_LF_and_DOT" ' .'` rm -f confdef2opt.sed ac_libobjs= ac_ltlibobjs= for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue # 1. Remove the extension, and $U if already installed. ac_i=`echo "$ac_i" | sed 's/\$U\././;s/\.o$//;s/\.obj$//'` # 2. Add them. 
ac_libobjs="$ac_libobjs $ac_i\$U.$ac_objext" ac_ltlibobjs="$ac_ltlibobjs $ac_i"'$U.lo' done LIBOBJS=$ac_libobjs LTLIBOBJS=$ac_ltlibobjs : ${CONFIG_STATUS=./config.status} ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files $CONFIG_STATUS" { echo "$as_me:$LINENO: creating $CONFIG_STATUS" >&5 echo "$as_me: creating $CONFIG_STATUS" >&6;} cat >$CONFIG_STATUS <<_ACEOF #! $SHELL # Generated by $as_me. # Run this file to recreate the current configuration. # Compiler output produced by configure, useful for debugging # configure, is in config.log if it exists. debug=false ac_cs_recheck=false ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF ## --------------------- ## ## M4sh Initialization. ## ## --------------------- ## # Be Bourne compatible if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' elif test -n "${BASH_VERSION+set}" && (set -o posix) >/dev/null 2>&1; then set -o posix fi # Support unset when possible. if (FOO=FOO; unset FOO) >/dev/null 2>&1; then as_unset=unset else as_unset=false fi # Work around bugs in pre-3.0 UWIN ksh. $as_unset ENV MAIL MAILPATH PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. for as_var in \ LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \ LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \ LC_TELEPHONE LC_TIME do if (set +x; test -n "`(eval $as_var=C; export $as_var) 2>&1`"); then eval $as_var=C; export $as_var else $as_unset $as_var fi done # Required to use basename. if expr a : '\(a\)' >/dev/null 2>&1; then as_expr=expr else as_expr=false fi if (basename /) >/dev/null 2>&1 && test "X`basename / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi # Name of the executable. 
as_me=`$as_basename "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)$' \| \ . : '\(.\)' 2>/dev/null || echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/; q; } /^X\/\(\/\/\)$/{ s//\1/; q; } /^X\/\(\/\).*/{ s//\1/; q; } s/.*/./; q'` # PATH needs CR, and LINENO needs CR and PATH. # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then echo "#! /bin/sh" >conf$$.sh echo "exit 0" >>conf$$.sh chmod +x conf$$.sh if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then PATH_SEPARATOR=';' else PATH_SEPARATOR=: fi rm -f conf$$.sh fi as_lineno_1=$LINENO as_lineno_2=$LINENO as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" || { # Find who we are. Look in the path if we contain no path at all # relative or not. case $0 in *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then { { echo "$as_me:$LINENO: error: cannot find myself; rerun with an absolute path" >&5 echo "$as_me: error: cannot find myself; rerun with an absolute path" >&2;} { (exit 1); exit 1; }; } fi case $CONFIG_SHELL in '') as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for as_base in sh bash ksh sh5; do case $as_dir in /*) if ("$as_dir/$as_base" -c ' as_lineno_1=$LINENO as_lineno_2=$LINENO as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" ') 2>/dev/null; then $as_unset BASH_ENV || test "${BASH_ENV+set}" != set || { BASH_ENV=; export BASH_ENV; } $as_unset ENV || test "${ENV+set}" != set || { ENV=; export ENV; } CONFIG_SHELL=$as_dir/$as_base export CONFIG_SHELL exec "$CONFIG_SHELL" "$0" ${1+"$@"} fi;; esac done done ;; esac # Create $as_me.lineno as a copy of $as_myself, but with $LINENO # uniformly replaced by the line number. The first 'sed' inserts a # line-number line before each line; the second 'sed' does the real # work. The second script uses 'N' to pair each line-number line # with the numbered line, and appends trailing '-' during # substitution so that $LINENO is not a special case at line end. # (Raja R Harinath suggested sed '=', and Paul Eggert wrote the # second 'sed' script. Blame Lee E. McMahon for sed's syntax. :-) sed '=' <$as_myself | sed ' N s,$,-, : loop s,^\(['$as_cr_digits']*\)\(.*\)[$]LINENO\([^'$as_cr_alnum'_]\),\1\2\1\3, t loop s,-$,, s,^['$as_cr_digits']*\n,, ' >$as_me.lineno && chmod +x $as_me.lineno || { { echo "$as_me:$LINENO: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&5 echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2;} { (exit 1); exit 1; }; } # Don't try to exec as it changes $[0], causing all sorts of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensitive to this). . ./$as_me.lineno # Exit status is that of the last command. 
exit } case `echo "testing\c"; echo 1,2,3`,`echo -n testing; echo 1,2,3` in *c*,-n*) ECHO_N= ECHO_C=' ' ECHO_T=' ' ;; *c*,* ) ECHO_N=-n ECHO_C= ECHO_T= ;; *) ECHO_N= ECHO_C='\c' ECHO_T= ;; esac if expr a : '\(a\)' >/dev/null 2>&1; then as_expr=expr else as_expr=false fi rm -f conf$$ conf$$.exe conf$$.file echo >conf$$.file if ln -s conf$$.file conf$$ 2>/dev/null; then # We could just check for DJGPP; but this test a) works b) is more generic # and c) will remain valid once DJGPP supports symlinks (DJGPP 2.04). if test -f conf$$.exe; then # Don't use ln at all; we don't have any links as_ln_s='cp -p' else as_ln_s='ln -s' fi elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -p' fi rm -f conf$$ conf$$.exe conf$$.file if mkdir -p . 2>/dev/null; then as_mkdir_p=: else as_mkdir_p=false fi as_executable_p="test -f" # Sed expression to map a string onto a valid CPP name. as_tr_cpp="sed y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g" # Sed expression to map a string onto a valid variable name. as_tr_sh="sed y%*+%pp%;s%[^_$as_cr_alnum]%_%g" # IFS # We need space, tab and new line, in precisely that order. as_nl=' ' IFS=" $as_nl" # CDPATH. $as_unset CDPATH exec 6>&1 # Open the log real soon, to keep \$[0] and so on meaningful, and to # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. Logging --version etc. is OK. exec 5>>config.log { echo sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX ## Running $as_me. ## _ASBOX } >&5 cat >&5 <<_CSEOF This file was extended by gccintro $as_me 1.0, which was generated by GNU Autoconf 2.57. Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS CONFIG_LINKS = $CONFIG_LINKS CONFIG_COMMANDS = $CONFIG_COMMANDS $ $0 $@ _CSEOF echo "on `(hostname || uname -n) 2>/dev/null | sed 1q`" >&5 echo >&5 _ACEOF # Files that config.status was made for. 
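The `$LINENO`-substitution fallback above (commented in the script itself) can be demonstrated in isolation. This is an editorial sketch, not part of the generated script: number every input line with `sed '='`, pair each number with its line using `N`, append a `-` sentinel so `$LINENO` at end-of-line still matches, and loop the substitution until no `$LINENO` remains.

```shell
#!/bin/sh
# Standalone demo of the LINENO trick used by the generated script.
# Input: two lines that each mention $LINENO literally.
printf 'echo $LINENO\necho $LINENO\n' |
sed '=' |
sed -e 'N' -e 's,$,-,' -e ':loop' \
    -e 's,^\([0-9]*\)\(.*\)[$]LINENO\([^a-zA-Z0-9_]\),\1\2\1\3,' \
    -e 'tloop' -e 's,-$,,' -e 's,^[0-9]*\n,,'
# prints:
# echo 1
# echo 2
```

Note that `.*` in the substitution spans the embedded newline of the two-line pattern space, which is what lets the leading line number reach a `$LINENO` on the following line; the real script uses `$as_cr_digits`/`$as_cr_alnum` instead of literal ranges to avoid depending on character-range collation.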
if test -n "$ac_config_files"; then echo "config_files=\"$ac_config_files\"" >>$CONFIG_STATUS fi if test -n "$ac_config_headers"; then echo "config_headers=\"$ac_config_headers\"" >>$CONFIG_STATUS fi if test -n "$ac_config_links"; then echo "config_links=\"$ac_config_links\"" >>$CONFIG_STATUS fi if test -n "$ac_config_commands"; then echo "config_commands=\"$ac_config_commands\"" >>$CONFIG_STATUS fi cat >>$CONFIG_STATUS <<\_ACEOF ac_cs_usage="\ \`$as_me' instantiates files from templates according to the current configuration. Usage: $0 [OPTIONS] [FILE]... -h, --help print this help, then exit -V, --version print version number, then exit -q, --quiet do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] instantiate the configuration file FILE Configuration files: $config_files Report bugs to ." _ACEOF cat >>$CONFIG_STATUS <<_ACEOF ac_cs_version="\\ gccintro config.status 1.0 configured by $0, generated by GNU Autoconf 2.57, with options \\"`echo "$ac_configure_args" | sed 's/[\\""\`\$]/\\\\&/g'`\\" Copyright 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001 Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." srcdir=$srcdir INSTALL="$INSTALL" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF # If no files are specified by the user, then we need to provide a default # value. But we need to know if files were specified by the user. ac_need_defaults=: while test $# != 0 do case $1 in --*=*) ac_option=`expr "x$1" : 'x\([^=]*\)='` ac_optarg=`expr "x$1" : 'x[^=]*=\(.*\)'` ac_shift=: ;; -*) ac_option=$1 ac_optarg=$2 ac_shift=shift ;; *) # This is not an option, so the user has probably given explicit # arguments. ac_option=$1 ac_need_defaults=false;; esac case $ac_option in # Handling of the options. 
_ACEOF cat >>$CONFIG_STATUS <<\_ACEOF -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) ac_cs_recheck=: ;; --version | --vers* | -V ) echo "$ac_cs_version"; exit 0 ;; --he | --h) # Conflict between --help and --header { { echo "$as_me:$LINENO: error: ambiguous option: $1 Try \`$0 --help' for more information." >&5 echo "$as_me: error: ambiguous option: $1 Try \`$0 --help' for more information." >&2;} { (exit 1); exit 1; }; };; --help | --hel | -h ) echo "$ac_cs_usage"; exit 0 ;; --debug | --d* | -d ) debug=: ;; --file | --fil | --fi | --f ) $ac_shift CONFIG_FILES="$CONFIG_FILES $ac_optarg" ac_need_defaults=false;; --header | --heade | --head | --hea ) $ac_shift CONFIG_HEADERS="$CONFIG_HEADERS $ac_optarg" ac_need_defaults=false;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil | --si | --s) ac_cs_silent=: ;; # This is an error. -*) { { echo "$as_me:$LINENO: error: unrecognized option: $1 Try \`$0 --help' for more information." >&5 echo "$as_me: error: unrecognized option: $1 Try \`$0 --help' for more information." >&2;} { (exit 1); exit 1; }; } ;; *) ac_config_targets="$ac_config_targets $1" ;; esac shift done ac_configure_extra_args= if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF if \$ac_cs_recheck; then echo "running $SHELL $0 " $ac_configure_args \$ac_configure_extra_args " --no-create --no-recursion" >&6 exec $SHELL $0 $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion fi _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF for ac_config_target in $ac_config_targets do case "$ac_config_target" in # Handling of arguments. 
"Makefile" ) CONFIG_FILES="$CONFIG_FILES Makefile" ;; *) { { echo "$as_me:$LINENO: error: invalid argument: $ac_config_target" >&5 echo "$as_me: error: invalid argument: $ac_config_target" >&2;} { (exit 1); exit 1; }; };; esac done # If the user did not use the arguments to specify the items to instantiate, # then the envvar interface is used. Set only those that are not. # We use the long form for the default assignment because of an extremely # bizarre bug on SunOS 4.1.3. if $ac_need_defaults; then test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files fi # Have a temporary directory for convenience. Make it in the build tree # simply because there is no reason to put it here, and in addition, # creating and moving files from /tmp can sometimes cause problems. # Create a temporary directory, and hook for its removal unless debugging. $debug || { trap 'exit_status=$?; rm -rf $tmp && exit $exit_status' 0 trap '{ (exit 1); exit 1; }' 1 2 13 15 } # Create a (secure) tmp directory for tmp files. { tmp=`(umask 077 && mktemp -d -q "./confstatXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" } || { tmp=./confstat$$-$RANDOM (umask 077 && mkdir $tmp) } || { echo "$me: cannot create a temporary directory in ." >&2 { (exit 1); exit 1; } } _ACEOF cat >>$CONFIG_STATUS <<_ACEOF # # CONFIG_FILES section. # # No need to generate the scripts if there are no CONFIG_FILES. # This happens for instance when ./config.status config.h if test -n "\$CONFIG_FILES"; then # Protect against being on the right side of a sed subst in config.status. 
sed 's/,@/@@/; s/@,/@@/; s/,;t t\$/@;t t/; /@;t t\$/s/[\\\\&,]/\\\\&/g; s/@@/,@/; s/@@/@,/; s/@;t t\$/,;t t/' >\$tmp/subs.sed <<\\CEOF s,@SHELL@,$SHELL,;t t s,@PATH_SEPARATOR@,$PATH_SEPARATOR,;t t s,@PACKAGE_NAME@,$PACKAGE_NAME,;t t s,@PACKAGE_TARNAME@,$PACKAGE_TARNAME,;t t s,@PACKAGE_VERSION@,$PACKAGE_VERSION,;t t s,@PACKAGE_STRING@,$PACKAGE_STRING,;t t s,@PACKAGE_BUGREPORT@,$PACKAGE_BUGREPORT,;t t s,@exec_prefix@,$exec_prefix,;t t s,@prefix@,$prefix,;t t s,@program_transform_name@,$program_transform_name,;t t s,@bindir@,$bindir,;t t s,@sbindir@,$sbindir,;t t s,@libexecdir@,$libexecdir,;t t s,@datadir@,$datadir,;t t s,@sysconfdir@,$sysconfdir,;t t s,@sharedstatedir@,$sharedstatedir,;t t s,@localstatedir@,$localstatedir,;t t s,@libdir@,$libdir,;t t s,@includedir@,$includedir,;t t s,@oldincludedir@,$oldincludedir,;t t s,@infodir@,$infodir,;t t s,@mandir@,$mandir,;t t s,@build_alias@,$build_alias,;t t s,@host_alias@,$host_alias,;t t s,@target_alias@,$target_alias,;t t s,@DEFS@,$DEFS,;t t s,@ECHO_C@,$ECHO_C,;t t s,@ECHO_N@,$ECHO_N,;t t s,@ECHO_T@,$ECHO_T,;t t s,@LIBS@,$LIBS,;t t s,@INSTALL_PROGRAM@,$INSTALL_PROGRAM,;t t s,@INSTALL_SCRIPT@,$INSTALL_SCRIPT,;t t s,@INSTALL_DATA@,$INSTALL_DATA,;t t s,@CYGPATH_W@,$CYGPATH_W,;t t s,@PACKAGE@,$PACKAGE,;t t s,@VERSION@,$VERSION,;t t s,@ACLOCAL@,$ACLOCAL,;t t s,@AUTOCONF@,$AUTOCONF,;t t s,@AUTOMAKE@,$AUTOMAKE,;t t s,@AUTOHEADER@,$AUTOHEADER,;t t s,@MAKEINFO@,$MAKEINFO,;t t s,@AMTAR@,$AMTAR,;t t s,@install_sh@,$install_sh,;t t s,@STRIP@,$STRIP,;t t s,@ac_ct_STRIP@,$ac_ct_STRIP,;t t s,@INSTALL_STRIP_PROGRAM@,$INSTALL_STRIP_PROGRAM,;t t s,@AWK@,$AWK,;t t s,@SET_MAKE@,$SET_MAKE,;t t s,@am__leading_dot@,$am__leading_dot,;t t s,@LIBOBJS@,$LIBOBJS,;t t s,@LTLIBOBJS@,$LTLIBOBJS,;t t CEOF _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF # Split the substitutions into bite-sized pieces for seds with # small command number limits, like on Digital OSF/1 and HP-UX. ac_max_sed_lines=48 ac_sed_frag=1 # Number of current file. 
ac_beg=1 # First line for current file. ac_end=$ac_max_sed_lines # Line after last line for current file. ac_more_lines=: ac_sed_cmds= while $ac_more_lines; do if test $ac_beg -gt 1; then sed "1,${ac_beg}d; ${ac_end}q" $tmp/subs.sed >$tmp/subs.frag else sed "${ac_end}q" $tmp/subs.sed >$tmp/subs.frag fi if test ! -s $tmp/subs.frag; then ac_more_lines=false else # The purpose of the label and of the branching condition is to # speed up the sed processing (if there are no `@' at all, there # is no need to browse any of the substitutions). # These are the two extra sed commands mentioned above. (echo ':t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b' && cat $tmp/subs.frag) >$tmp/subs-$ac_sed_frag.sed if test -z "$ac_sed_cmds"; then ac_sed_cmds="sed -f $tmp/subs-$ac_sed_frag.sed" else ac_sed_cmds="$ac_sed_cmds | sed -f $tmp/subs-$ac_sed_frag.sed" fi ac_sed_frag=`expr $ac_sed_frag + 1` ac_beg=$ac_end ac_end=`expr $ac_end + $ac_max_sed_lines` fi done if test -z "$ac_sed_cmds"; then ac_sed_cmds=cat fi fi # test -n "$CONFIG_FILES" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF for ac_file in : $CONFIG_FILES; do test "x$ac_file" = x: && continue # Support "outfile[:infile[:infile...]]", defaulting infile="outfile.in". case $ac_file in - | *:- | *:-:* ) # input from stdin cat >$tmp/stdin ac_file_in=`echo "$ac_file" | sed 's,[^:]*:,,'` ac_file=`echo "$ac_file" | sed 's,:.*,,'` ;; *:* ) ac_file_in=`echo "$ac_file" | sed 's,[^:]*:,,'` ac_file=`echo "$ac_file" | sed 's,:.*,,'` ;; * ) ac_file_in=$ac_file.in ;; esac # Compute @srcdir@, @top_srcdir@, and @INSTALL@ for subdirectories. ac_dir=`(dirname "$ac_file") 2>/dev/null || $as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$ac_file" : 'X\(//\)[^/]' \| \ X"$ac_file" : 'X\(//\)$' \| \ X"$ac_file" : 'X\(/\)' \| \ . 
: '\(.\)' 2>/dev/null || echo X"$ac_file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; } /^X\(\/\/\)[^/].*/{ s//\1/; q; } /^X\(\/\/\)$/{ s//\1/; q; } /^X\(\/\).*/{ s//\1/; q; } s/.*/./; q'` { if $as_mkdir_p; then mkdir -p "$ac_dir" else as_dir="$ac_dir" as_dirs= while test ! -d "$as_dir"; do as_dirs="$as_dir $as_dirs" as_dir=`(dirname "$as_dir") 2>/dev/null || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| \ . : '\(.\)' 2>/dev/null || echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; } /^X\(\/\/\)[^/].*/{ s//\1/; q; } /^X\(\/\/\)$/{ s//\1/; q; } /^X\(\/\).*/{ s//\1/; q; } s/.*/./; q'` done test ! -n "$as_dirs" || mkdir $as_dirs fi || { { echo "$as_me:$LINENO: error: cannot create directory \"$ac_dir\"" >&5 echo "$as_me: error: cannot create directory \"$ac_dir\"" >&2;} { (exit 1); exit 1; }; }; } ac_builddir=. if test "$ac_dir" != .; then ac_dir_suffix=/`echo "$ac_dir" | sed 's,^\.[\\/],,'` # A "../" for each directory in $ac_dir_suffix. ac_top_builddir=`echo "$ac_dir_suffix" | sed 's,/[^\\/]*,../,g'` else ac_dir_suffix= ac_top_builddir= fi case $srcdir in .) # No --srcdir option. We are building in place. ac_srcdir=. if test -z "$ac_top_builddir"; then ac_top_srcdir=. else ac_top_srcdir=`echo $ac_top_builddir | sed 's,/$,,'` fi ;; [\\/]* | ?:[\\/]* ) # Absolute path. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ;; *) # Relative path. ac_srcdir=$ac_top_builddir$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_builddir$srcdir ;; esac # Don't blindly perform a `cd "$ac_dir"/$ac_foo && pwd` since $ac_foo can be # absolute. ac_abs_builddir=`cd "$ac_dir" && cd $ac_builddir && pwd` ac_abs_top_builddir=`cd "$ac_dir" && cd ${ac_top_builddir}. 
&& pwd` ac_abs_srcdir=`cd "$ac_dir" && cd $ac_srcdir && pwd` ac_abs_top_srcdir=`cd "$ac_dir" && cd $ac_top_srcdir && pwd` case $INSTALL in [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;; *) ac_INSTALL=$ac_top_builddir$INSTALL ;; esac if test x"$ac_file" != x-; then { echo "$as_me:$LINENO: creating $ac_file" >&5 echo "$as_me: creating $ac_file" >&6;} rm -f "$ac_file" fi # Let's still pretend it is `configure' which instantiates (i.e., don't # use $as_me), people would be surprised to read: # /* config.h. Generated by config.status. */ if test x"$ac_file" = x-; then configure_input= else configure_input="$ac_file. " fi configure_input=$configure_input"Generated from `echo $ac_file_in | sed 's,.*/,,'` by configure." # First look for the input files in the build tree, otherwise in the # src tree. ac_file_inputs=`IFS=: for f in $ac_file_in; do case $f in -) echo $tmp/stdin ;; [\\/$]*) # Absolute (can't be DOS-style, as IFS=:) test -f "$f" || { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5 echo "$as_me: error: cannot find input file: $f" >&2;} { (exit 1); exit 1; }; } echo $f;; *) # Relative if test -f "$f"; then # Build tree echo $f elif test -f "$srcdir/$f"; then # Source tree echo $srcdir/$f else # /dev/null tree { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5 echo "$as_me: error: cannot find input file: $f" >&2;} { (exit 1); exit 1; }; } fi;; esac done` || { (exit 1); exit 1; } _ACEOF cat >>$CONFIG_STATUS <<_ACEOF sed "$ac_vpsub $extrasub _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF :t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b s,@configure_input@,$configure_input,;t t s,@srcdir@,$ac_srcdir,;t t s,@abs_srcdir@,$ac_abs_srcdir,;t t s,@top_srcdir@,$ac_top_srcdir,;t t s,@abs_top_srcdir@,$ac_abs_top_srcdir,;t t s,@builddir@,$ac_builddir,;t t s,@abs_builddir@,$ac_abs_builddir,;t t s,@top_builddir@,$ac_top_builddir,;t t s,@abs_top_builddir@,$ac_abs_top_builddir,;t t s,@INSTALL@,$ac_INSTALL,;t t " $ac_file_inputs | (eval "$ac_sed_cmds") >$tmp/out rm -f $tmp/stdin if 
test x"$ac_file" != x-; then mv $tmp/out $ac_file else cat $tmp/out rm -f $tmp/out fi done _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF { (exit 0); exit 0; } _ACEOF chmod +x $CONFIG_STATUS ac_clean_files=$ac_clean_files_save # configure is writing to config.log, and then calls config.status. # config.status does its own redirection, appending to config.log. # Unfortunately, on DOS this fails, as config.log is still kept open # by configure, so config.status won't be able to write to it; its # output is simply discarded. So we exec the FD to /dev/null, # effectively closing config.log, so it can be properly (re)opened and # appended to by config.status. When coming back to configure, we # need to make the FD available again. if test "$no_create" != yes; then ac_cs_success=: ac_config_status_args= test "$silent" = yes && ac_config_status_args="$ac_config_status_args --quiet" exec 5>/dev/null $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false exec 5>>config.log # Use ||, not &&, to avoid exiting from the if with $? = 1, which # would make configure fail if this is the last instruction. $ac_cs_success || { (exit 1); exit 1; } fi gccintro-1.0/configure.ac0000664000175000017500000000015610046436572011144 AC_INIT(gccintro,1.0) AC_CONFIG_SRCDIR(gccintro.texi) AM_INIT_AUTOMAKE([no-dependencies]) AC_OUTPUT(Makefile) gccintro-1.0/install-sh0000775000175000017500000001124510014170231010641 #! /bin/sh # # install - install a program, script, or datafile # This comes from X11R5. # # Calling this script install-sh is preferred over install.sh, to prevent # `make' implicit rules from creating a file called install from it # when there is no Makefile. # # This script is compatible with the BSD install script, but was written # from scratch. # # set DOITPROG to echo to test this script # Don't use :- since 4.3BSD and earlier shells don't like it. doit="${DOITPROG-}" # put in absolute paths if you don't have them in your path; or use env. vars. 
mvprog="${MVPROG-mv}" cpprog="${CPPROG-cp}" chmodprog="${CHMODPROG-chmod}" chownprog="${CHOWNPROG-chown}" chgrpprog="${CHGRPPROG-chgrp}" stripprog="${STRIPPROG-strip}" rmprog="${RMPROG-rm}" mkdirprog="${MKDIRPROG-mkdir}" transformbasename="" transform_arg="" instcmd="$mvprog" chmodcmd="$chmodprog 0755" chowncmd="" chgrpcmd="" stripcmd="" rmcmd="$rmprog -f" mvcmd="$mvprog" src="" dst="" dir_arg="" while [ x"$1" != x ]; do case $1 in -c) instcmd="$cpprog" shift continue;; -d) dir_arg=true shift continue;; -m) chmodcmd="$chmodprog $2" shift shift continue;; -o) chowncmd="$chownprog $2" shift shift continue;; -g) chgrpcmd="$chgrpprog $2" shift shift continue;; -s) stripcmd="$stripprog" shift continue;; -t=*) transformarg=`echo $1 | sed 's/-t=//'` shift continue;; -b=*) transformbasename=`echo $1 | sed 's/-b=//'` shift continue;; *) if [ x"$src" = x ] then src=$1 else # this colon is to work around a 386BSD /bin/sh bug : dst=$1 fi shift continue;; esac done if [ x"$src" = x ] then echo "install: no input file specified" exit 1 else true fi if [ x"$dir_arg" != x ]; then dst=$src src="" if [ -d $dst ]; then instcmd=: else instcmd=mkdir fi else # Waiting for this to be detected by the "$instcmd $src $dsttmp" command # might cause directories to be created, which would be especially bad # if $src (and thus $dsttmp) contains '*'. if [ -f $src -o -d $src ] then true else echo "install: $src does not exist" exit 1 fi if [ x"$dst" = x ] then echo "install: no destination specified" exit 1 else true fi # If destination is a directory, append the input filename; if your system # does not like double slashes in filenames, you may need to add some logic if [ -d $dst ] then dst="$dst"/`basename $src` else true fi fi ## this sed command emulates the dirname command dstdir=`echo $dst | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'` # Make sure that the destination directory exists. # this part is taken from Noah Friedman's mkinstalldirs script # Skip lots of stat calls in the usual case. if [ ! 
-d "$dstdir" ]; then defaultIFS=' ' IFS="${IFS-${defaultIFS}}" oIFS="${IFS}" # Some sh's can't handle IFS=/ for some reason. IFS='%' set - `echo ${dstdir} | sed -e 's@/@%@g' -e 's@^%@/@'` IFS="${oIFS}" pathcomp='' while [ $# -ne 0 ] ; do pathcomp="${pathcomp}${1}" shift if [ ! -d "${pathcomp}" ] ; then $mkdirprog "${pathcomp}" else true fi pathcomp="${pathcomp}/" done fi if [ x"$dir_arg" != x ] then $doit $instcmd $dst && if [ x"$chowncmd" != x ]; then $doit $chowncmd $dst; else true ; fi && if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dst; else true ; fi && if [ x"$stripcmd" != x ]; then $doit $stripcmd $dst; else true ; fi && if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dst; else true ; fi else # If we're going to rename the final executable, determine the name now. if [ x"$transformarg" = x ] then dstfile=`basename $dst` else dstfile=`basename $dst $transformbasename | sed $transformarg`$transformbasename fi # don't allow the sed command to completely eliminate the filename if [ x"$dstfile" = x ] then dstfile=`basename $dst` else true fi # Make a temp file name in the proper directory. dsttmp=$dstdir/#inst.$$# # Move or copy the file name to the temp name $doit $instcmd $src $dsttmp && trap "rm -f ${dsttmp}" 0 && # and set any options; do chmod last to preserve setuid bits # If any of these fail, we abort the whole thing. If we want to # ignore errors from any of these, just make sure not to ignore # errors from the above "$doit $instcmd $src $dsttmp" command. if [ x"$chowncmd" != x ]; then $doit $chowncmd $dsttmp; else true;fi && if [ x"$chgrpcmd" != x ]; then $doit $chgrpcmd $dsttmp; else true;fi && if [ x"$stripcmd" != x ]; then $doit $stripcmd $dsttmp; else true;fi && if [ x"$chmodcmd" != x ]; then $doit $chmodcmd $dsttmp; else true;fi && # Now rename the file to the real destination. $doit $rmcmd -f $dstdir/$dstfile && $doit $mvcmd $dsttmp $dstdir/$dstfile fi && exit 0 gccintro-1.0/missing0000755000175000017500000002403607673274427010271 #! 
/bin/sh # Common stub for a few missing GNU programs while installing. # Copyright (C) 1996, 1997, 1999, 2000, 2002 Free Software Foundation, Inc. # Originally by Fran,cois Pinard , 1996. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA # 02111-1307, USA. # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. if test $# -eq 0; then echo 1>&2 "Try \`$0 --help' for more information" exit 1 fi run=: # In the cases where this matters, `missing' is being run in the # srcdir already. if test -f configure.ac; then configure_ac=configure.ac else configure_ac=configure.in fi case "$1" in --run) # Try to run requested program, and just exit if it succeeds. run= shift "$@" && exit 0 ;; esac # If it does not exist, or fails to run (possibly an outdated version), # try to emulate it. case "$1" in -h|--h|--he|--hel|--help) echo "\ $0 [OPTION]... PROGRAM [ARGUMENT]... Handle \`PROGRAM [ARGUMENT]...' for when PROGRAM is missing, or return an error status if there is no known handling for PROGRAM. 
Options: -h, --help display this help and exit -v, --version output version information and exit --run try to run the given command, and emulate it if it fails Supported PROGRAM values: aclocal touch file \`aclocal.m4' autoconf touch file \`configure' autoheader touch file \`config.h.in' automake touch all \`Makefile.in' files bison create \`y.tab.[ch]', if possible, from existing .[ch] flex create \`lex.yy.c', if possible, from existing .c help2man touch the output file lex create \`lex.yy.c', if possible, from existing .c makeinfo touch the output file tar try tar, gnutar, gtar, then tar without non-portable flags yacc create \`y.tab.[ch]', if possible, from existing .[ch]" ;; -v|--v|--ve|--ver|--vers|--versi|--versio|--version) echo "missing 0.4 - GNU automake" ;; -*) echo 1>&2 "$0: Unknown \`$1' option" echo 1>&2 "Try \`$0 --help' for more information" exit 1 ;; aclocal*) if test -z "$run" && ($1 --version) > /dev/null 2>&1; then # We have it, but it failed. exit 1 fi echo 1>&2 "\ WARNING: \`$1' is missing on your system. You should only need it if you modified \`acinclude.m4' or \`${configure_ac}'. You might want to install the \`Automake' and \`Perl' packages. Grab them from any GNU archive site." touch aclocal.m4 ;; autoconf) if test -z "$run" && ($1 --version) > /dev/null 2>&1; then # We have it, but it failed. exit 1 fi echo 1>&2 "\ WARNING: \`$1' is missing on your system. You should only need it if you modified \`${configure_ac}'. You might want to install the \`Autoconf' and \`GNU m4' packages. Grab them from any GNU archive site." touch configure ;; autoheader) if test -z "$run" && ($1 --version) > /dev/null 2>&1; then # We have it, but it failed. exit 1 fi echo 1>&2 "\ WARNING: \`$1' is missing on your system. You should only need it if you modified \`acconfig.h' or \`${configure_ac}'. You might want to install the \`Autoconf' and \`GNU m4' packages. Grab them from any GNU archive site." 
files=`sed -n 's/^[ ]*A[CM]_CONFIG_HEADER(\([^)]*\)).*/\1/p' ${configure_ac}` test -z "$files" && files="config.h" touch_files= for f in $files; do case "$f" in *:*) touch_files="$touch_files "`echo "$f" | sed -e 's/^[^:]*://' -e 's/:.*//'`;; *) touch_files="$touch_files $f.in";; esac done touch $touch_files ;; automake*) if test -z "$run" && ($1 --version) > /dev/null 2>&1; then # We have it, but it failed. exit 1 fi echo 1>&2 "\ WARNING: \`$1' is missing on your system. You should only need it if you modified \`Makefile.am', \`acinclude.m4' or \`${configure_ac}'. You might want to install the \`Automake' and \`Perl' packages. Grab them from any GNU archive site." find . -type f -name Makefile.am -print | sed 's/\.am$/.in/' | while read f; do touch "$f"; done ;; autom4te) if test -z "$run" && ($1 --version) > /dev/null 2>&1; then # We have it, but it failed. exit 1 fi echo 1>&2 "\ WARNING: \`$1' is needed, and you do not seem to have it handy on your system. You might have modified some files without having the proper tools for further handling them. You can get \`$1Help2man' as part of \`Autoconf' from any GNU archive site." file=`echo "$*" | sed -n 's/.*--output[ =]*\([^ ]*\).*/\1/p'` test -z "$file" && file=`echo "$*" | sed -n 's/.*-o[ ]*\([^ ]*\).*/\1/p'` if test -f "$file"; then touch $file else test -z "$file" || exec >$file echo "#! /bin/sh" echo "# Created by GNU Automake missing as a replacement of" echo "# $ $@" echo "exit 0" chmod +x $file exit 1 fi ;; bison|yacc) echo 1>&2 "\ WARNING: \`$1' is missing on your system. You should only need it if you modified a \`.y' file. You may need the \`Bison' package in order for those modifications to take effect. You can get \`Bison' from any GNU archive site." 
    rm -f y.tab.c y.tab.h
    if [ $# -ne 1 ]; then
        eval LASTARG="\${$#}"
        case "$LASTARG" in
        *.y)
            SRCFILE=`echo "$LASTARG" | sed 's/y$/c/'`
            if [ -f "$SRCFILE" ]; then
                 cp "$SRCFILE" y.tab.c
            fi
            SRCFILE=`echo "$LASTARG" | sed 's/y$/h/'`
            if [ -f "$SRCFILE" ]; then
                 cp "$SRCFILE" y.tab.h
            fi
          ;;
        esac
    fi
    if [ ! -f y.tab.h ]; then
        echo >y.tab.h
    fi
    if [ ! -f y.tab.c ]; then
        echo 'main() { return 0; }' >y.tab.c
    fi
    ;;

  lex|flex)
    echo 1>&2 "\
WARNING: \`$1' is missing on your system.  You should only need it if
         you modified a \`.l' file.  You may need the \`Flex' package
         in order for those modifications to take effect.  You can get
         \`Flex' from any GNU archive site."
    rm -f lex.yy.c
    if [ $# -ne 1 ]; then
        eval LASTARG="\${$#}"
        case "$LASTARG" in
        *.l)
            SRCFILE=`echo "$LASTARG" | sed 's/l$/c/'`
            if [ -f "$SRCFILE" ]; then
                 cp "$SRCFILE" lex.yy.c
            fi
          ;;
        esac
    fi
    if [ ! -f lex.yy.c ]; then
        echo 'main() { return 0; }' >lex.yy.c
    fi
    ;;

  help2man)
    if test -z "$run" && ($1 --version) > /dev/null 2>&1; then
       # We have it, but it failed.
       exit 1
    fi

    echo 1>&2 "\
WARNING: \`$1' is missing on your system.  You should only need it if
         you modified a dependency of a manual page.  You may need the
         \`Help2man' package in order for those modifications to take
         effect.  You can get \`Help2man' from any GNU archive site."

    file=`echo "$*" | sed -n 's/.*-o \([^ ]*\).*/\1/p'`
    if test -z "$file"; then
        file=`echo "$*" | sed -n 's/.*--output=\([^ ]*\).*/\1/p'`
    fi
    if [ -f "$file" ]; then
        touch $file
    else
        test -z "$file" || exec >$file
        echo ".ab help2man is required to generate this page"
        exit 1
    fi
    ;;

  makeinfo)
    if test -z "$run" && (makeinfo --version) > /dev/null 2>&1; then
       # We have makeinfo, but it failed.
       exit 1
    fi

    echo 1>&2 "\
WARNING: \`$1' is missing on your system.  You should only need it if
         you modified a \`.texi' or \`.texinfo' file, or any other file
         indirectly affecting the aspect of the manual.  The spurious
         call might also be the consequence of using a buggy \`make' (AIX,
         DU, IRIX).
         You might want to install the \`Texinfo' package or the \`GNU
         make' package.  Grab either from any GNU archive site."
    file=`echo "$*" | sed -n 's/.*-o \([^ ]*\).*/\1/p'`
    if test -z "$file"; then
      file=`echo "$*" | sed 's/.* \([^ ]*\) *$/\1/'`
      file=`sed -n '/^@setfilename/ { s/.* \([^ ]*\) *$/\1/; p; q; }' $file`
    fi
    touch $file
    ;;

  tar)
    shift
    if test -n "$run"; then
      echo 1>&2 "ERROR: \`tar' requires --run"
      exit 1
    fi

    # We have already tried tar in the generic part.
    # Look for gnutar/gtar before invocation to avoid ugly error
    # messages.
    if (gnutar --version > /dev/null 2>&1); then
       gnutar "$@" && exit 0
    fi
    if (gtar --version > /dev/null 2>&1); then
       gtar "$@" && exit 0
    fi
    firstarg="$1"
    if shift; then
        case "$firstarg" in
        *o*)
            firstarg=`echo "$firstarg" | sed s/o//`
            tar "$firstarg" "$@" && exit 0
            ;;
        esac
        case "$firstarg" in
        *h*)
            firstarg=`echo "$firstarg" | sed s/h//`
            tar "$firstarg" "$@" && exit 0
            ;;
        esac
    fi

    echo 1>&2 "\
WARNING: I can't seem to be able to run \`tar' with the given arguments.
         You may want to install GNU tar or Free paxutils, or check the
         command line arguments."
    exit 1
    ;;

  *)
    echo 1>&2 "\
WARNING: \`$1' is needed, and you do not seem to have it handy on your
         system.  You might have modified some files without having the
         proper tools for further handling them.  Check the \`README' file;
         it often tells you about the needed prerequisites for installing
         this package.  You may also peek at any GNU archive site, in case
         some other package would contain this missing \`$1' program."
    exit 1
    ;;
esac

exit 0
gccintro-1.0/mkinstalldirs0000755000175000017500000000370407673274427011467
#! /bin/sh
# mkinstalldirs --- make directory hierarchy
# Author: Noah Friedman
# Created: 1993-05-16
# Public domain

errstatus=0
dirmode=""

usage="\
Usage: mkinstalldirs [-h] [--help] [-m mode] dir ..."
# process command line arguments
while test $# -gt 0 ; do
  case $1 in
    -h | --help | --h*)         # -h for help
      echo "$usage" 1>&2
      exit 0
      ;;
    -m)                         # -m PERM arg
      shift
      test $# -eq 0 && { echo "$usage" 1>&2; exit 1; }
      dirmode=$1
      shift
      ;;
    --)                         # stop option processing
      shift
      break
      ;;
    -*)                         # unknown option
      echo "$usage" 1>&2
      exit 1
      ;;
    *)                          # first non-opt arg
      break
      ;;
  esac
done

for file
do
  if test -d "$file"; then
    shift
  else
    break
  fi
done

case $# in
  0) exit 0 ;;
esac

case $dirmode in
  '')
    if mkdir -p -- . 2>/dev/null; then
      echo "mkdir -p -- $*"
      exec mkdir -p -- "$@"
    fi
    ;;
  *)
    if mkdir -m "$dirmode" -p -- . 2>/dev/null; then
      echo "mkdir -m $dirmode -p -- $*"
      exec mkdir -m "$dirmode" -p -- "$@"
    fi
    ;;
esac

for file
do
  set fnord `echo ":$file" | sed -ne 's/^:\//#/;s/^://;s/\// /g;s/^#/\//;p'`
  shift

  pathcomp=
  for d
  do
    pathcomp="$pathcomp$d"
    case $pathcomp in
      -*) pathcomp=./$pathcomp ;;
    esac

    if test ! -d "$pathcomp"; then
      echo "mkdir $pathcomp"

      mkdir "$pathcomp" || lasterr=$?

      if test ! -d "$pathcomp"; then
        errstatus=$lasterr
      else
        if test ! -z "$dirmode"; then
          echo "chmod $dirmode $pathcomp"
          lasterr=""
          chmod "$dirmode" "$pathcomp" || lasterr=$?

          if test ! -z "$lasterr"; then
            errstatus=$lasterr
          fi
        fi
      fi
    fi

    pathcomp="$pathcomp/"
  done
done

exit $errstatus

# Local Variables:
# mode: shell-script
# sh-indentation: 2
# End:
# mkinstalldirs ends here
gccintro-1.0/texinfo.tex0000664000175000017500000066463510046440014011050
% texinfo.tex -- TeX macros to handle Texinfo files.
%
% Load plain if necessary, i.e., if running under initex.
\expandafter\ifx\csname fmtname\endcsname\relax\input plain\fi
%
\def\texinfoversion{2004-04-07.08}
%
% Copyright (C) 1985, 1986, 1988, 1990, 1991, 1992, 1993, 1994, 1995,
% 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004 Free Software
% Foundation, Inc.
%
% This texinfo.tex file is free software; you can redistribute it and/or
% modify it under the terms of the GNU General Public License as
% published by the Free Software Foundation; either version 2, or (at
% your option) any later version.
%
% This texinfo.tex file is distributed in the hope that it will be
% useful, but WITHOUT ANY WARRANTY; without even the implied warranty
% of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
% General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this texinfo.tex file; see the file COPYING.  If not, write
% to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
% Boston, MA 02111-1307, USA.
%
% As a special exception, when this file is read by TeX when processing
% a Texinfo source document, you may use the result without
% restriction.  (This has been our intent since Texinfo was invented.)
%
% Please try the latest version of texinfo.tex before submitting bug
% reports; you can get the latest version from:
%   http://www.gnu.org/software/texinfo/ (the Texinfo home page), or
%   ftp://tug.org/tex/texinfo.tex
%     (and all CTAN mirrors, see http://www.ctan.org).
% The texinfo.tex in any given distribution could well be out
% of date, so if that's what you're using, please check.
%
% Send bug reports to bug-texinfo@gnu.org.  Please include a
% complete document in each bug report with which we can reproduce the
% problem.  Patches are, of course, greatly appreciated.
%
% To process a Texinfo manual with TeX, it's most reliable to use the
% texi2dvi shell script that comes with the distribution.  For a simple
% manual foo.texi, however, you can get away with this:
%   tex foo.texi
%   texindex foo.??
%   tex foo.texi
%   tex foo.texi
%   dvips foo.dvi -o  # or whatever; this makes foo.ps.
% The extra TeX runs get the cross-reference information correct.
% Sometimes one run after texindex suffices, and sometimes you need more
% than two; texi2dvi does it as many times as necessary.
%
% It is possible to adapt texinfo.tex for other languages, to some
% extent.  You can get the existing language-specific files from the
% full Texinfo distribution.
%
% The GNU Texinfo home page is http://www.gnu.org/software/texinfo.

\message{Loading texinfo [version \texinfoversion]:}

% If in a .fmt file, print the version number
% and turn on active characters that we couldn't do earlier because
% they might have appeared in the input file name.
\everyjob{\message{[Texinfo version \texinfoversion]}%
  \catcode`+=\active \catcode`\_=\active}

\message{Basics,}
\chardef\other=12

% We never want plain's \outer definition of \+ in Texinfo.
% For @tex, we can use \tabalign.
\let\+ = \relax

% Save some plain tex macros whose names we will redefine.
\let\ptexb=\b
\let\ptexbullet=\bullet
\let\ptexc=\c
\let\ptexcomma=\,
\let\ptexdot=\.
\let\ptexdots=\dots
\let\ptexend=\end
\let\ptexequiv=\equiv
\let\ptexexclam=\!
\let\ptexfootnote=\footnote
\let\ptexgtr=>
\let\ptexhat=^
\let\ptexi=\i
\let\ptexindent=\indent
\let\ptexnoindent=\noindent
\let\ptexinsert=\insert
\let\ptexlbrace=\{
\let\ptexless=<
\let\ptexplus=+
\let\ptexrbrace=\}
\let\ptexslash=\/
\let\ptexstar=\*
\let\ptext=\t

% If this character appears in an error message or help string, it
% starts a new line in the output.
\newlinechar = `^^J

% Use TeX 3.0's \inputlineno to get the line number, for better error
% messages, but if we're using an old version of TeX, don't do anything.
%
\ifx\inputlineno\thisisundefined
  \let\linenumber = \empty % Pre-3.0.
\else
  \def\linenumber{l.\the\inputlineno:\space}
\fi

% Set up fixed words for English if not already set.
\ifx\putwordAppendix\undefined \gdef\putwordAppendix{Appendix}\fi \ifx\putwordChapter\undefined \gdef\putwordChapter{Chapter}\fi \ifx\putwordfile\undefined \gdef\putwordfile{file}\fi \ifx\putwordin\undefined \gdef\putwordin{in}\fi \ifx\putwordIndexIsEmpty\undefined \gdef\putwordIndexIsEmpty{(Index is empty)}\fi \ifx\putwordIndexNonexistent\undefined \gdef\putwordIndexNonexistent{(Index is nonexistent)}\fi \ifx\putwordInfo\undefined \gdef\putwordInfo{Info}\fi \ifx\putwordInstanceVariableof\undefined \gdef\putwordInstanceVariableof{Instance Variable of}\fi \ifx\putwordMethodon\undefined \gdef\putwordMethodon{Method on}\fi \ifx\putwordNoTitle\undefined \gdef\putwordNoTitle{No Title}\fi \ifx\putwordof\undefined \gdef\putwordof{of}\fi \ifx\putwordon\undefined \gdef\putwordon{on}\fi \ifx\putwordpage\undefined \gdef\putwordpage{page}\fi \ifx\putwordsection\undefined \gdef\putwordsection{section}\fi \ifx\putwordSection\undefined \gdef\putwordSection{Section}\fi \ifx\putwordsee\undefined \gdef\putwordsee{see}\fi \ifx\putwordSee\undefined \gdef\putwordSee{See}\fi \ifx\putwordShortTOC\undefined \gdef\putwordShortTOC{Short Contents}\fi \ifx\putwordTOC\undefined \gdef\putwordTOC{Table of Contents}\fi % \ifx\putwordMJan\undefined \gdef\putwordMJan{January}\fi \ifx\putwordMFeb\undefined \gdef\putwordMFeb{February}\fi \ifx\putwordMMar\undefined \gdef\putwordMMar{March}\fi \ifx\putwordMApr\undefined \gdef\putwordMApr{April}\fi \ifx\putwordMMay\undefined \gdef\putwordMMay{May}\fi \ifx\putwordMJun\undefined \gdef\putwordMJun{June}\fi \ifx\putwordMJul\undefined \gdef\putwordMJul{July}\fi \ifx\putwordMAug\undefined \gdef\putwordMAug{August}\fi \ifx\putwordMSep\undefined \gdef\putwordMSep{September}\fi \ifx\putwordMOct\undefined \gdef\putwordMOct{October}\fi \ifx\putwordMNov\undefined \gdef\putwordMNov{November}\fi \ifx\putwordMDec\undefined \gdef\putwordMDec{December}\fi % \ifx\putwordDefmac\undefined \gdef\putwordDefmac{Macro}\fi \ifx\putwordDefspec\undefined 
\gdef\putwordDefspec{Special Form}\fi \ifx\putwordDefvar\undefined \gdef\putwordDefvar{Variable}\fi \ifx\putwordDefopt\undefined \gdef\putwordDefopt{User Option}\fi \ifx\putwordDeffunc\undefined \gdef\putwordDeffunc{Function}\fi % In some macros, we cannot use the `\? notation---the left quote is % in some cases the escape char. \chardef\colonChar = `\: \chardef\commaChar = `\, \chardef\dotChar = `\. \chardef\exclamChar= `\! \chardef\questChar = `\? \chardef\semiChar = `\; \chardef\underChar = `\_ \chardef\spaceChar = `\ % \chardef\spacecat = 10 \def\spaceisspace{\catcode\spaceChar=\spacecat} % Ignore a token. % \def\gobble#1{} % The following is used inside several \edef's. \def\makecsname#1{\expandafter\noexpand\csname#1\endcsname} % Hyphenation fixes. \hyphenation{ Flor-i-da Ghost-script Ghost-view Mac-OS Post-Script ap-pen-dix bit-map bit-maps data-base data-bases eshell fall-ing half-way long-est man-u-script man-u-scripts mini-buf-fer mini-buf-fers over-view par-a-digm par-a-digms rath-er rec-tan-gu-lar ro-bot-ics se-vere-ly set-up spa-ces spell-ing spell-ings stand-alone strong-est time-stamp time-stamps which-ever white-space wide-spread wrap-around } % Margin to add to right of even pages, to left of odd pages. \newdimen\bindingoffset \newdimen\normaloffset \newdimen\pagewidth \newdimen\pageheight % For a final copy, take out the rectangles % that mark overfull boxes (in case you have decided % that the text looks ok even though it passes the margin). % \def\finalout{\overfullrule=0pt} % @| inserts a changebar to the left of the current line. It should % surround any changed text. This approach does *not* work if the % change spans more than two lines of output. To handle that, we would % have adopt a much more difficult approach (putting marks into the main % vertical list for the beginning and end of each change). % \def\|{% % \vadjust can only be used in horizontal mode. 
\leavevmode % % Append this vertical mode material after the current line in the output. \vadjust{% % We want to insert a rule with the height and depth of the current % leading; that is exactly what \strutbox is supposed to record. \vskip-\baselineskip % % \vadjust-items are inserted at the left edge of the type. So % the \llap here moves out into the left-hand margin. \llap{% % % For a thicker or thinner bar, change the `1pt'. \vrule height\baselineskip width1pt % % This is the space between the bar and the text. \hskip 12pt }% }% } % Sometimes it is convenient to have everything in the transcript file % and nothing on the terminal. We don't just call \tracingall here, % since that produces some useless output on the terminal. We also make % some effort to order the tracing commands to reduce output in the log % file; cf. trace.sty in LaTeX. % \def\gloggingall{\begingroup \globaldefs = 1 \loggingall \endgroup}% \def\loggingall{% \tracingstats2 \tracingpages1 \tracinglostchars2 % 2 gives us more in etex \tracingparagraphs1 \tracingoutput1 \tracingmacros2 \tracingrestores1 \showboxbreadth\maxdimen \showboxdepth\maxdimen \ifx\eTeXversion\undefined\else % etex gives us more logging \tracingscantokens1 \tracingifs1 \tracinggroups1 \tracingnesting2 \tracingassigns1 \fi \tracingcommands3 % 3 gives us more in etex \errorcontextlines16 }% % add check for \lastpenalty to plain's definitions. If the last thing % we did was a \nobreak, we don't want to insert more space. % \def\smallbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\smallskipamount \removelastskip\penalty-50\smallskip\fi\fi} \def\medbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\medskipamount \removelastskip\penalty-100\medskip\fi\fi} \def\bigbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\bigskipamount \removelastskip\penalty-200\bigskip\fi\fi} % For @cropmarks command. % Do @cropmarks to get crop marks. % \newif\ifcropmarks \let\cropmarks = \cropmarkstrue % % Dimensions to add cropmarks at corners. 
% Added by P. A. MacKay, 12 Nov. 1986 % \newdimen\outerhsize \newdimen\outervsize % set by the paper size routines \newdimen\cornerlong \cornerlong=1pc \newdimen\cornerthick \cornerthick=.3pt \newdimen\topandbottommargin \topandbottommargin=.75in % Main output routine. \chardef\PAGE = 255 \output = {\onepageout{\pagecontents\PAGE}} \newbox\headlinebox \newbox\footlinebox % \onepageout takes a vbox as an argument. Note that \pagecontents % does insertions, but you have to call it yourself. \def\onepageout#1{% \ifcropmarks \hoffset=0pt \else \hoffset=\normaloffset \fi % \ifodd\pageno \advance\hoffset by \bindingoffset \else \advance\hoffset by -\bindingoffset\fi % % Do this outside of the \shipout so @code etc. will be expanded in % the headline as they should be, not taken literally (outputting ''code). \setbox\headlinebox = \vbox{\let\hsize=\pagewidth \makeheadline}% \setbox\footlinebox = \vbox{\let\hsize=\pagewidth \makefootline}% % {% % Have to do this stuff outside the \shipout because we want it to % take effect in \write's, yet the group defined by the \vbox ends % before the \shipout runs. % \escapechar = `\\ % use backslash in output files. \indexdummies % don't expand commands in the output. \normalturnoffactive % \ in index entries must not stay \, e.g., if % the page break happens to be in the middle of an example. \shipout\vbox{% % Do this early so pdf references go to the beginning of the page. \ifpdfmakepagedest \pdfdest name{\the\pageno} xyz\fi % \ifcropmarks \vbox to \outervsize\bgroup \hsize = \outerhsize \vskip-\topandbottommargin \vtop to0pt{% \line{\ewtop\hfil\ewtop}% \nointerlineskip \line{% \vbox{\moveleft\cornerthick\nstop}% \hfill \vbox{\moveright\cornerthick\nstop}% }% \vss}% \vskip\topandbottommargin \line\bgroup \hfil % center the page within the outer (page) hsize. \ifodd\pageno\hskip\bindingoffset\fi \vbox\bgroup \fi % \unvbox\headlinebox \pagebody{#1}% \ifdim\ht\footlinebox > 0pt % Only leave this space if the footline is nonempty. 
% (We lessened \vsize for it in \oddfootingxxx.) % The \baselineskip=24pt in plain's \makefootline has no effect. \vskip 2\baselineskip \unvbox\footlinebox \fi % \ifcropmarks \egroup % end of \vbox\bgroup \hfil\egroup % end of (centering) \line\bgroup \vskip\topandbottommargin plus1fill minus1fill \boxmaxdepth = \cornerthick \vbox to0pt{\vss \line{% \vbox{\moveleft\cornerthick\nsbot}% \hfill \vbox{\moveright\cornerthick\nsbot}% }% \nointerlineskip \line{\ewbot\hfil\ewbot}% }% \egroup % \vbox from first cropmarks clause \fi }% end of \shipout\vbox }% end of group with \normalturnoffactive \advancepageno \ifnum\outputpenalty>-20000 \else\dosupereject\fi } \newinsert\margin \dimen\margin=\maxdimen \def\pagebody#1{\vbox to\pageheight{\boxmaxdepth=\maxdepth #1}} {\catcode`\@ =11 \gdef\pagecontents#1{\ifvoid\topins\else\unvbox\topins\fi % marginal hacks, juha@viisa.uucp (Juha Takala) \ifvoid\margin\else % marginal info is present \rlap{\kern\hsize\vbox to\z@{\kern1pt\box\margin \vss}}\fi \dimen@=\dp#1 \unvbox#1 \ifvoid\footins\else\vskip\skip\footins\footnoterule \unvbox\footins\fi \ifr@ggedbottom \kern-\dimen@ \vfil \fi} } % Here are the rules for the cropmarks. Note that they are % offset so that the space between them is truly \outerhsize or \outervsize % (P. A. MacKay, 12 November, 1986) % \def\ewtop{\vrule height\cornerthick depth0pt width\cornerlong} \def\nstop{\vbox {\hrule height\cornerthick depth\cornerlong width\cornerthick}} \def\ewbot{\vrule height0pt depth\cornerthick width\cornerlong} \def\nsbot{\vbox {\hrule height\cornerlong depth\cornerthick width\cornerthick}} % Parse an argument, then pass it to #1. The argument is the rest of % the input line (except we remove a trailing comment). #1 should be a % macro which expects an ordinary undelimited TeX argument. % \def\parsearg{\parseargusing{}} \def\parseargusing#1#2{% \def\next{#2}% \begingroup \obeylines \spaceisspace #1% \parseargline\empty% Insert the \empty token, see \finishparsearg below. 
{\obeylines %
  \gdef\parseargline#1^^M{%
    \endgroup % End of the group started in \parsearg.
    \argremovecomment #1\comment\ArgTerm%
  }%
}

% First remove any @comment, then any @c comment.
\def\argremovecomment#1\comment#2\ArgTerm{\argremovec #1\c\ArgTerm}
\def\argremovec#1\c#2\ArgTerm{\argcheckspaces#1\^^M\ArgTerm}

% Each occurrence of `\^^M' or `\^^M ' is replaced by a single space.
%
% \argremovec might leave us with trailing space, e.g.,
%    @end itemize  @c foo
% This space token undergoes the same procedure and is eventually removed
% by \finishparsearg.
%
\def\argcheckspaces#1\^^M{\argcheckspacesX#1\^^M \^^M}
\def\argcheckspacesX#1 \^^M{\argcheckspacesY#1\^^M}
\def\argcheckspacesY#1\^^M#2\^^M#3\ArgTerm{%
  \def\temp{#3}%
  \ifx\temp\empty
    % We cannot use \next here, as it holds the macro to run;
    % thus we reuse \temp.
    \let\temp\finishparsearg
  \else
    \let\temp\argcheckspaces
  \fi
  % Put the space token in:
  \temp#1 #3\ArgTerm
}

% If a _delimited_ argument is enclosed in braces, they get stripped; so
% to get _exactly_ the rest of the line, we had to prevent such a situation.
% We prepended an \empty token at the very beginning and we expand it now,
% just before passing the control to \next.
% (Similarly, we have to think about #3 of \argcheckspacesY above: it is
% either the null string, or it ends with \^^M---thus there is no danger
% that a pair of braces would be stripped.
%
% But first, we have to remove the trailing space token.
%
\def\finishparsearg#1 \ArgTerm{\expandafter\next\expandafter{#1}}

% \parseargdef\foo{...}
% is roughly equivalent to
% \def\foo{\parsearg\Xfoo}
% \def\Xfoo#1{...}
%
% Actually, I use \csname\string\foo\endcsname, ie. \\foo, as it is my
% favourite TeX trick.
% --kasal, 16nov03
\def\parseargdef#1{%
  \expandafter \doparseargdef \csname\string#1\endcsname #1%
}
\def\doparseargdef#1#2{%
  \def#2{\parsearg#1}%
  \def#1##1%
}

% Several utility definitions with active space:
{
  \obeyspaces
  \gdef\obeyedspace{ }

  % Make each space character in the input produce a normal interword
  % space in the output.  Don't allow a line break at this space, as this
  % is used only in environments like @example, where each line of input
  % should produce a line of output anyway.
  %
  \gdef\sepspaces{\obeyspaces\let =\tie}

  % If an index command is used in an @example environment, any spaces
  % therein should become regular spaces in the raw index file, not the
  % expansion of \tie (\leavevmode \penalty \@M \ ).
  \gdef\unsepspaces{\let =\space}
}

\def\flushcr{\ifx\par\lisppar \def\next##1{}\else \let\next=\relax \fi \next}

% Define the framework for environments in texinfo.tex.  It's used like this:
%
%   \envdef\foo{...}
%   \def\Efoo{...}
%
% It's the responsibility of \envdef to insert \begingroup before the
% actual body; @end closes the group after calling \Efoo.  \envdef also
% defines \thisenv, so the current environment is known; @end checks
% whether the environment name matches.  The \checkenv macro can also be
% used to check whether the current environment is the one expected.
%
% Non-false conditionals (@iftex, @ifset) don't fit into this, so they
% are not treated as environments; they don't open a group.  (The
% implementation of @end takes care not to call \endgroup in this
% special case.)

% At runtime, environments start with this:
\def\startenvironment#1{\begingroup\def\thisenv{#1}}
% initialize
\let\thisenv\empty

% ...
% but they get defined via ``\envdef\foo{...}'':
\long\def\envdef#1#2{\def#1{\startenvironment#1#2}}
\def\envparseargdef#1#2{\parseargdef#1{\startenvironment#1#2}}

% Check whether we're in the right environment:
\def\checkenv#1{%
  \def\temp{#1}%
  \ifx\thisenv\temp
  \else
    \badenverr
  \fi
}

% Environment mismatch, #1 expected:
\def\badenverr{%
  \errhelp = \EMsimple
  \errmessage{This command can appear only \inenvironment\temp,
    not \inenvironment\thisenv}%
}
\def\inenvironment#1{%
  \ifx#1\empty
    out of any environment%
  \else
    in environment \expandafter\string#1%
  \fi
}

% @end foo executes the definition of \Efoo.
% But first, it executes a specialized version of \checkenv
%
\parseargdef\end{%
  \if 1\csname iscond.#1\endcsname
  \else
    % The general wording of \badenverr may not be ideal, but... --kasal, 06nov03
    \expandafter\checkenv\csname#1\endcsname
    \csname E#1\endcsname
    \endgroup
  \fi
}

\newhelp\EMsimple{Press RETURN to continue.}


%% Simple single-character @ commands

% @@ prints an @
% Kludge this until the fonts are right (grr).
\def\@{{\tt\char64}}

% This is turned off because it was never documented
% and you can use @w{...} around a quote to suppress ligatures.
%% Define @` and @' to be the same as ` and '
%% but suppressing ligatures.
%\def\`{{`}}
%\def\'{{'}}

% Used to generate quoted braces.
\def\mylbrace {{\tt\char123}}
\def\myrbrace {{\tt\char125}}
\let\{=\mylbrace
\let\}=\myrbrace
\begingroup
  % Definitions to produce \{ and \} commands for indices,
  % and @{ and @} for the aux file.
  \catcode`\{ = \other \catcode`\} = \other
  \catcode`\[ = 1 \catcode`\] = 2
  \catcode`\! = 0 \catcode`\\ = \other
  !gdef!lbracecmd[\{]%
  !gdef!rbracecmd[\}]%
  !gdef!lbraceatcmd[@{]%
  !gdef!rbraceatcmd[@}]%
!endgroup

% @comma{} to avoid , parsing problems.
\let\comma = ,

% Accents: @, @dotaccent @ringaccent @ubaraccent @udotaccent
% Others are defined by plain TeX: @` @' @" @^ @~ @= @u @v @H.
\let\, = \c
\let\dotaccent = \.
\def\ringaccent#1{{\accent23 #1}} \let\tieaccent = \t \let\ubaraccent = \b \let\udotaccent = \d % Other special characters: @questiondown @exclamdown @ordf @ordm % Plain TeX defines: @AA @AE @O @OE @L (plus lowercase versions) @ss. \def\questiondown{?`} \def\exclamdown{!`} \def\ordf{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{a}}} \def\ordm{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{o}}} % Dotless i and dotless j, used for accents. \def\imacro{i} \def\jmacro{j} \def\dotless#1{% \def\temp{#1}% \ifx\temp\imacro \ptexi \else\ifx\temp\jmacro \j \else \errmessage{@dotless can be used only with i or j}% \fi\fi } % The \TeX{} logo, as in plain, but resetting the spacing so that a % period following counts as ending a sentence. (Idea found in latex.) % \edef\TeX{\TeX \spacefactor=3000 } % @LaTeX{} logo. Not quite the same results as the definition in % latex.ltx, since we use a different font for the raised A; it's most % convenient for us to use an explicitly smaller font, rather than using % the \scriptstyle font (since we don't reset \scriptstyle and % \scriptscriptstyle). % \def\LaTeX{% L\kern-.36em {\setbox0=\hbox{T}% \vbox to \ht0{\hbox{\selectfonts\lllsize A}\vss}}% \kern-.15em \TeX } % Be sure we're in horizontal mode when doing a tie, since we make space % equivalent to this in @example-like environments. Otherwise, a space % at the beginning of a line will start with \penalty -- and % since \penalty is valid in vertical mode, we'd end up putting the % penalty on the vertical list instead of in the new paragraph. {\catcode`@ = 11 % Avoid using \@M directly, because that causes trouble % if the definition is written into an index file. \global\let\tiepenalty = \@M \gdef\tie{\leavevmode\penalty\tiepenalty\ } } % @: forces normal size whitespace following. \def\:{\spacefactor=1000 } % @* forces a line break. \def\*{\hfil\break\hbox{}\ignorespaces} % @/ allows a line break. \let\/=\allowbreak % @. is an end-of-sentence period. 
\def\.{.\spacefactor=3000 } % @! is an end-of-sentence bang. \def\!{!\spacefactor=3000 } % @? is an end-of-sentence query. \def\?{?\spacefactor=3000 } % @w prevents a word break. Without the \leavevmode, @w at the % beginning of a paragraph, when TeX is still in vertical mode, would % produce a whole line of output instead of starting the paragraph. \def\w#1{\leavevmode\hbox{#1}} % @group ... @end group forces ... to be all on one page, by enclosing % it in a TeX vbox. We use \vtop instead of \vbox to construct the box % to keep its height that of a normal line. According to the rules for % \topskip (p.114 of the TeXbook), the glue inserted is % max (\topskip - \ht (first item), 0). If that height is large, % therefore, no glue is inserted, and the space between the headline and % the text is small, which looks bad. % % Another complication is that the group might be very large. This can % cause the glue on the previous page to be unduly stretched, because it % does not have much material. In this case, it's better to add an % explicit \vfill so that the extra space is at the bottom. The % threshold for doing this is if the group is more than \vfilllimit % percent of a page (\vfilllimit can be changed inside of @tex). % \newbox\groupbox \def\vfilllimit{0.7} % \envdef\group{% \ifnum\catcode`\^^M=\active \else \errhelp = \groupinvalidhelp \errmessage{@group invalid in context where filling is enabled}% \fi \startsavinginserts % \setbox\groupbox = \vtop\bgroup % Do @comment since we are called inside an environment such as % @example, where each end-of-line in the input causes an % end-of-line in the output. We don't want the end-of-line after % the `@group' to put extra space in the output. Since @group % should appear on a line by itself (according to the Texinfo % manual), we don't worry about eating any user text. 
\comment } % % The \vtop produces a box with normal height and large depth; thus, TeX puts % \baselineskip glue before it, and (when the next line of text is done) % \lineskip glue after it. Thus, space below is not quite equal to space % above. But it's pretty close. \def\Egroup{% % To get correct interline space between the last line of the group % and the first line afterwards, we have to propagate \prevdepth. \endgraf % Not \par, as it may have been set to \lisppar. \global\dimen1 = \prevdepth \egroup % End the \vtop. % \dimen0 is the vertical size of the group's box. \dimen0 = \ht\groupbox \advance\dimen0 by \dp\groupbox % \dimen2 is how much space is left on the page (more or less). \dimen2 = \pageheight \advance\dimen2 by -\pagetotal % if the group doesn't fit on the current page, and it's a big big % group, force a page break. \ifdim \dimen0 > \dimen2 \ifdim \pagetotal < \vfilllimit\pageheight \page \fi \fi \box\groupbox \prevdepth = \dimen1 \checkinserts } % % TeX puts in an \escapechar (i.e., `@') at the beginning of the help % message, so this ends up printing `@group can only ...'. % \newhelp\groupinvalidhelp{% group can only be used in environments such as @example,^^J% where each line of input produces a line of output.} % @need space-in-mils % forces a page break if there is not space-in-mils remaining. \newdimen\mil \mil=0.001in % Old definition--didn't work. %\parseargdef\need{\par % %% This method tries to make TeX break the page naturally %% if the depth of the box does not fit. %{\baselineskip=0pt% %\vtop to #1\mil{\vfil}\kern -#1\mil\nobreak %\prevdepth=-1000pt %}} \parseargdef\need{% % Ensure vertical mode, so we don't make a big box in the middle of a % paragraph. \par % % If the @need value is less than one line space, it's useless. 
\dimen0 = #1\mil \dimen2 = \ht\strutbox \advance\dimen2 by \dp\strutbox \ifdim\dimen0 > \dimen2 % % Do a \strut just to make the height of this box be normal, so the % normal leading is inserted relative to the preceding line. % And a page break here is fine. \vtop to #1\mil{\strut\vfil}% % % TeX does not even consider page breaks if a penalty added to the % main vertical list is 10000 or more. But in order to see if the % empty box we just added fits on the page, we must make it consider % page breaks. On the other hand, we don't want to actually break the % page after the empty box. So we use a penalty of 9999. % % There is an extremely small chance that TeX will actually break the % page at this \penalty, if there are no other feasible breakpoints in % sight. (If the user is using lots of big @group commands, which % almost-but-not-quite fill up a page, TeX will have a hard time doing % good page breaking, for example.) However, I could not construct an % example where a page broke at this \penalty; if it happens in a real % document, then we can reconsider our strategy. \penalty9999 % % Back up by the size of the box, whether we did a page break or not. \kern -#1\mil % % Do not allow a page break right after this kern. \nobreak \fi } % @br forces paragraph break (and is undocumented). \let\br = \par % @page forces the start of a new page. % \def\page{\par\vfill\supereject} % @exdent text.... % outputs text on separate line in roman font, starting at standard page margin % This records the amount of indent in the innermost environment. % That's how much \exdent should take out. \newskip\exdentamount % This defn is used inside fill environments such as @defun. \parseargdef\exdent{\hfil\break\hbox{\kern -\exdentamount{\rm#1}}\hfil\break} % This defn is used inside nofill environments such as @example. 
\parseargdef\nofillexdent{{\advance \leftskip by -\exdentamount \leftline{\hskip\leftskip{\rm#1}}}} % @inmargin{WHICH}{TEXT} puts TEXT in the WHICH margin next to the current % paragraph. For more general purposes, use the \margin insertion % class. WHICH is `l' or `r'. % \newskip\inmarginspacing \inmarginspacing=1cm \def\strutdepth{\dp\strutbox} % \def\doinmargin#1#2{\strut\vadjust{% \nobreak \kern-\strutdepth \vtop to \strutdepth{% \baselineskip=\strutdepth \vss % if you have multiple lines of stuff to put here, you'll need to % make the vbox yourself of the appropriate size. \ifx#1l% \llap{\ignorespaces #2\hskip\inmarginspacing}% \else \rlap{\hskip\hsize \hskip\inmarginspacing \ignorespaces #2}% \fi \null }% }} \def\inleftmargin{\doinmargin l} \def\inrightmargin{\doinmargin r} % % @inmargin{TEXT [, RIGHT-TEXT]} % (if RIGHT-TEXT is given, use TEXT for left page, RIGHT-TEXT for right; % else use TEXT for both). % \def\inmargin#1{\parseinmargin #1,,\finish} \def\parseinmargin#1,#2,#3\finish{% not perfect, but better than nothing. \setbox0 = \hbox{\ignorespaces #2}% \ifdim\wd0 > 0pt \def\lefttext{#1}% have both texts \def\righttext{#2}% \else \def\lefttext{#1}% have only one text \def\righttext{#1}% \fi % \ifodd\pageno \def\temp{\inrightmargin\righttext}% odd page -> outside is right margin \else \def\temp{\inleftmargin\lefttext}% \fi \temp } % @include file insert text of that file as input. 
%
\def\include{\parseargusing\filenamecatcodes\includezzz}
\def\includezzz#1{%
  \pushthisfilestack
  \def\thisfile{#1}%
  {%
    \makevalueexpandable
    \def\temp{\input #1 }%
    \expandafter
  }\temp
  \popthisfilestack
}
\def\filenamecatcodes{%
  \catcode`\\=\other
  \catcode`~=\other
  \catcode`^=\other
  \catcode`_=\other
  \catcode`|=\other
  \catcode`<=\other
  \catcode`>=\other
  \catcode`+=\other
  \catcode`-=\other
}

\def\pushthisfilestack{%
  \expandafter\pushthisfilestackX\popthisfilestack\StackTerm
}
\def\pushthisfilestackX{%
  \expandafter\pushthisfilestackY\thisfile\StackTerm
}
\def\pushthisfilestackY #1\StackTerm #2\StackTerm {%
  \gdef\popthisfilestack{\gdef\thisfile{#1}\gdef\popthisfilestack{#2}}%
}

\def\popthisfilestack{\errthisfilestackempty}
\def\errthisfilestackempty{\errmessage{Internal error:
  the stack of filenames is empty.}}

\def\thisfile{}

% @center line
% outputs that line, centered.
%
\parseargdef\center{%
  \ifhmode
    \let\next\centerH
  \else
    \let\next\centerV
  \fi
  \next{\hfil \ignorespaces#1\unskip \hfil}%
}
\def\centerH#1{%
  {%
    \hfil\break
    \advance\hsize by -\leftskip
    \advance\hsize by -\rightskip
    \line{#1}%
    \break
  }%
}
\def\centerV#1{\line{\kern\leftskip #1\kern\rightskip}}

% @sp n   outputs n lines of vertical space
\parseargdef\sp{\vskip #1\baselineskip}

% @comment ...line which is ignored...
% @c is the same as @comment
% @ignore ... @end ignore  is another way to write a comment
\def\comment{\begingroup \catcode`\^^M=\other%
\catcode`\@=\other \catcode`\{=\other \catcode`\}=\other%
\commentxxx}
{\catcode`\^^M=\other \gdef\commentxxx#1^^M{\endgroup}}
\let\c=\comment

% @paragraphindent NCHARS
% We'll use ems for NCHARS, close enough.
% NCHARS can also be the word `asis' or `none'.
% We cannot feasibly implement @paragraphindent asis, though.
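% Usage sketch (illustrative, not part of texinfo.tex proper): these
% commands take a count, interpreted in ems, or the keyword `none':
%
%   @paragraphindent 2      @c indent paragraphs by 2em
%   @paragraphindent none   @c suppress paragraph indentation
%   @exampleindent 4        @c indent @example text by 4em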
%
\def\asisword{asis} % no translation, these are keywords
\def\noneword{none}
%
\parseargdef\paragraphindent{%
  \def\temp{#1}%
  \ifx\temp\asisword
  \else
    \ifx\temp\noneword
      \defaultparindent = 0pt
    \else
      \defaultparindent = #1em
    \fi
  \fi
  \parindent = \defaultparindent
}

% @exampleindent NCHARS
% We'll use ems for NCHARS like @paragraphindent.
% It seems @exampleindent asis isn't necessary, but
% I preserve it to make it similar to @paragraphindent.
\parseargdef\exampleindent{%
  \def\temp{#1}%
  \ifx\temp\asisword
  \else
    \ifx\temp\noneword
      \lispnarrowing = 0pt
    \else
      \lispnarrowing = #1em
    \fi
  \fi
}

% @firstparagraphindent WORD
% If WORD is `none', then suppress indentation of the first paragraph
% after a section heading.  If WORD is `insert', then do indent at such
% paragraphs.
%
% The paragraph indentation is suppressed or not by calling
% \suppressfirstparagraphindent, which the sectioning commands do.
% We switch the definition of this back and forth according to WORD.
% By default, we suppress indentation.
%
\def\suppressfirstparagraphindent{\dosuppressfirstparagraphindent}
\def\insertword{insert}
%
\parseargdef\firstparagraphindent{%
  \def\temp{#1}%
  \ifx\temp\noneword
    \let\suppressfirstparagraphindent = \dosuppressfirstparagraphindent
  \else\ifx\temp\insertword
    \let\suppressfirstparagraphindent = \relax
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @firstparagraphindent option `\temp'}%
  \fi\fi
}

% Here is how we actually suppress indentation.  Redefine \everypar to
% \kern backwards by \parindent, and then reset itself to empty.
%
% We also make \indent itself not actually do anything until the next
% paragraph.
%
\gdef\dosuppressfirstparagraphindent{%
  \gdef\indent{%
    \restorefirstparagraphindent
    \indent
  }%
  \gdef\noindent{%
    \restorefirstparagraphindent
    \noindent
  }%
  \global\everypar = {%
    \kern -\parindent
    \restorefirstparagraphindent
  }%
}

\gdef\restorefirstparagraphindent{%
  \global \let \indent = \ptexindent
  \global \let \noindent = \ptexnoindent
  \global \everypar = {}%
}

% @asis just yields its argument.  Used with @table, for example.
%
\def\asis#1{#1}

% @math outputs its argument in math mode.
%
% One complication: _ usually means subscripts, but it could also mean
% an actual _ character, as in @math{@var{some_variable} + 1}.  So make
% _ active, and distinguish by seeing if the current family is \slfam,
% which is what @var uses.
{
  \catcode\underChar = \active
  \gdef\mathunderscore{%
    \catcode\underChar=\active
    \def_{\ifnum\fam=\slfam \_\else\sb\fi}%
  }
}
% Another complication: we want \\ (and @\) to output a \ character.
% FYI, plain.tex uses \\ as a temporary control sequence (why?), but
% this is not advertised and we don't care.  Texinfo does not
% otherwise define @\.
%
% The \mathchar is class=0=ordinary, family=7=ttfam, position=5C=\.
\def\mathbackslash{\ifnum\fam=\ttfam \mathchar"075C \else\backslash \fi}
%
\def\math{%
  \tex
  \mathunderscore
  \let\\ = \mathbackslash
  \mathactive
  $\finishmath
}
\def\finishmath#1{#1$\endgroup}  % Close the group opened by \tex.

% Some active characters (such as <) are spaced differently in math.
% We have to reset their definitions in case the @math was an argument
% to a command which sets the catcodes (such as @item or @section).
%
{
  \catcode`^ = \active
  \catcode`< = \active
  \catcode`> = \active
  \catcode`+ = \active
  \gdef\mathactive{%
    \let^ = \ptexhat
    \let< = \ptexless
    \let> = \ptexgtr
    \let+ = \ptexplus
  }
}

% @bullet and @minus need the same treatment as @math, just above.
\def\bullet{$\ptexbullet$}
\def\minus{$-$}

% @dots{} outputs an ellipsis using the current font.
% We do .5em per period so that it has the same spacing in a typewriter
% font as three actual period characters.
%
\def\dots{%
  \leavevmode
  \hbox to 1.5em{%
    \hskip 0pt plus 0.25fil
    .\hfil.\hfil.%
    \hskip 0pt plus 0.5fil
  }%
}

% @enddots{} is an end-of-sentence ellipsis.
%
\def\enddots{%
  \dots
  \spacefactor=3000
}

% @comma{} is so commas can be inserted into text without messing up
% Texinfo's parsing.
%
\let\comma = ,

% @refill is a no-op.
\let\refill=\relax

% If working on a large document in chapters, it is convenient to
% be able to disable indexing, cross-referencing, and contents, for test runs.
% This is done with @novalidate (before @setfilename).
%
\newif\iflinks \linkstrue % by default we want the aux files.
\let\novalidate = \linksfalse

% @setfilename is done at the beginning of every texinfo file.
% So open here the files we need to have open while reading the input.
% This makes it possible to make a .fmt file for texinfo.
\def\setfilename{%
   \fixbackslash  % Turn off hack to swallow `\input texinfo'.
   \iflinks
     \tryauxfile
     % Open the new aux file.  TeX will close it automatically at exit.
     \immediate\openout\auxfile=\jobname.aux
   \fi % \openindices needs to do some work in any case.
   \openindices
   \let\setfilename=\comment % Ignore extra @setfilename cmds.
   %
   % If texinfo.cnf is present on the system, read it.
   % Useful for site-wide @afourpaper, etc.
   \openin 1 texinfo.cnf
   \ifeof 1 \else \input texinfo.cnf \fi
   \closein 1
   %
   \comment % Ignore the actual filename.
}

% Called from \setfilename.
%
\def\openindices{%
  \newindex{cp}%
  \newcodeindex{fn}%
  \newcodeindex{vr}%
  \newcodeindex{tp}%
  \newcodeindex{ky}%
  \newcodeindex{pg}%
}

% @bye.
\outer\def\bye{\pagealignmacro\tracingstats=1\ptexend}


\message{pdf,}
% adobe `portable' document format
\newcount\tempnum
\newcount\lnkcount
\newtoks\filename
\newcount\filenamelength
\newcount\pgn
\newtoks\toksA
\newtoks\toksB
\newtoks\toksC
\newtoks\toksD
\newbox\boxA
\newcount\countA
\newif\ifpdf
\newif\ifpdfmakepagedest

% when pdftex is run in dvi mode, \pdfoutput is defined (so \pdfoutput=1
% can be set).  So we test for \relax and 0 as well as \undefined,
% borrowed from ifpdf.sty.
\ifx\pdfoutput\undefined
\else
  \ifx\pdfoutput\relax
  \else
    \ifcase\pdfoutput
    \else
      \pdftrue
    \fi
  \fi
\fi
%
\ifpdf
  \input pdfcolor
  \pdfcatalog{/PageMode /UseOutlines}%
  \def\dopdfimage#1#2#3{%
    \def\imagewidth{#2}%
    \def\imageheight{#3}%
    % without \immediate, pdftex seg faults when the same image is
    % included twice.  (Version 3.14159-pre-1.0-unofficial-20010704.)
    \ifnum\pdftexversion < 14
      \immediate\pdfimage
    \else
      \immediate\pdfximage
    \fi
      \ifx\empty\imagewidth\else width \imagewidth \fi
      \ifx\empty\imageheight\else height \imageheight \fi
      \ifnum\pdftexversion<13
        #1.pdf%
      \else
        {#1.pdf}%
      \fi
    \ifnum\pdftexversion < 14 \else
      \pdfrefximage \pdflastximage
    \fi}
  \def\pdfmkdest#1{{%
    % We have to set dummies so commands such as @code in a section title
    % aren't expanded.
    \atdummies
    \normalturnoffactive
    \pdfdest name{#1} xyz%
  }}
  \def\pdfmkpgn#1{#1}
  \let\linkcolor = \Blue  % was Cyan, but that seems light?
  \def\endlink{\Black\pdfendlink}
  % Adding outlines to PDF; macros for calculating structure of outlines
  % come from Petr Olsak
  \def\expnumber#1{\expandafter\ifx\csname#1\endcsname\relax 0%
    \else \csname#1\endcsname \fi}
  \def\advancenumber#1{\tempnum=\expnumber{#1}\relax
    \advance\tempnum by 1
    \expandafter\xdef\csname#1\endcsname{\the\tempnum}}
  %
  % #1 is the section text.  #2 is the pdf expression for the number
  % of subentries (or empty, for subsubsections).  #3 is the node
  % text, which might be empty if this toc entry had no
  % corresponding node.  #4 is the page number.
  %
  \def\dopdfoutline#1#2#3#4{%
    % Generate a link to the node text if that exists; else, use the
    % page number.  We could generate a destination for the section
    % text in the case where a section has no node, but it doesn't
    % seem worthwhile, since most documents are normally structured.
    \def\pdfoutlinedest{#3}%
    \ifx\pdfoutlinedest\empty \def\pdfoutlinedest{#4}\fi
    %
    \pdfoutline goto name{\pdfmkpgn{\pdfoutlinedest}}#2{#1}%
  }
  %
  \def\pdfmakeoutlines{%
    \begingroup
      % Thanh's hack / proper braces in bookmarks
      \edef\mylbrace{\iftrue \string{\else}\fi}\let\{=\mylbrace
      \edef\myrbrace{\iffalse{\else\string}\fi}\let\}=\myrbrace
      %
      % Read toc silently, to get counts of subentries for \pdfoutline.
      \def\numchapentry##1##2##3##4{%
        \def\thischapnum{##2}%
        \let\thissecnum\empty
        \let\thissubsecnum\empty
      }%
      \def\numsecentry##1##2##3##4{%
        \advancenumber{chap\thischapnum}%
        \def\thissecnum{##2}%
        \let\thissubsecnum\empty
      }%
      \def\numsubsecentry##1##2##3##4{%
        \advancenumber{sec\thissecnum}%
        \def\thissubsecnum{##2}%
      }%
      \def\numsubsubsecentry##1##2##3##4{%
        \advancenumber{subsec\thissubsecnum}%
      }%
      \let\thischapnum\empty
      \let\thissecnum\empty
      \let\thissubsecnum\empty
      %
      % use \def rather than \let here because we redefine \chapentry et
      % al. a second time, below.
      \def\appentry{\numchapentry}%
      \def\appsecentry{\numsecentry}%
      \def\appsubsecentry{\numsubsecentry}%
      \def\appsubsubsecentry{\numsubsubsecentry}%
      \def\unnchapentry{\numchapentry}%
      \def\unnsecentry{\numsecentry}%
      \def\unnsubsecentry{\numsubsecentry}%
      \def\unnsubsubsecentry{\numsubsubsecentry}%
      \input \jobname.toc
      %
      % Read toc second time, this time actually producing the outlines.
      % The `-' means take the \expnumber as the absolute number of
      % subentries, which we calculated on our first read of the .toc above.
      %
      % We use the node names as the destinations.
      \def\numchapentry##1##2##3##4{%
        \dopdfoutline{##1}{count-\expnumber{chap##2}}{##3}{##4}}%
      \def\numsecentry##1##2##3##4{%
        \dopdfoutline{##1}{count-\expnumber{sec##2}}{##3}{##4}}%
      \def\numsubsecentry##1##2##3##4{%
        \dopdfoutline{##1}{count-\expnumber{subsec##2}}{##3}{##4}}%
      \def\numsubsubsecentry##1##2##3##4{% count is always zero
        \dopdfoutline{##1}{}{##3}{##4}}%
      %
      % PDF outlines are displayed using system fonts, instead of
      % document fonts.  Therefore we cannot use special characters,
      % since the encoding is unknown.  For example, the eogonek from
      % Latin 2 (0xea) gets translated to a | character.  Info from
      % Staszek Wawrykiewicz, 19 Jan 2004 04:09:24 +0100.
      %
      % xx to do this right, we have to translate 8-bit characters to
      % their "best" equivalent, based on the @documentencoding.  Right
      % now, I guess we'll just let the pdf reader have its way.
      \indexnofonts
      \turnoffactive
      \input \jobname.toc
    \endgroup
  }
  %
  \def\makelinks #1,{%
    \def\params{#1}\def\E{END}%
    \ifx\params\E
      \let\nextmakelinks=\relax
    \else
      \let\nextmakelinks=\makelinks
      \ifnum\lnkcount>0,\fi
      \picknum{#1}%
      \startlink attr{/Border [0 0 0]}
        goto name{\pdfmkpgn{\the\pgn}}%
      \linkcolor #1%
      \advance\lnkcount by 1%
      \endlink
    \fi
    \nextmakelinks
  }
  \def\picknum#1{\expandafter\pn#1}
  \def\pn#1{%
    \def\p{#1}%
    \ifx\p\lbrace
      \let\nextpn=\ppn
    \else
      \let\nextpn=\ppnn
      \def\first{#1}
    \fi
    \nextpn
  }
  \def\ppn#1{\pgn=#1\gobble}
  \def\ppnn{\pgn=\first}
  \def\pdfmklnk#1{\lnkcount=0\makelinks #1,END,}
  \def\skipspaces#1{\def\PP{#1}\def\D{|}%
    \ifx\PP\D\let\nextsp\relax
    \else\let\nextsp\skipspaces
      \ifx\p\space\else\addtokens{\filename}{\PP}%
        \advance\filenamelength by 1
      \fi
    \fi
    \nextsp}
  \def\getfilename#1{\filenamelength=0\expandafter\skipspaces#1|\relax}
  \ifnum\pdftexversion < 14
    \let \startlink \pdfannotlink
  \else
    \let \startlink \pdfstartlink
  \fi
  \def\pdfurl#1{%
    \begingroup
      \normalturnoffactive\def\@{@}%
      \makevalueexpandable
      \leavevmode\Red
      \startlink attr{/Border [0 0 0]}%
        user{/Subtype /Link /A << /S /URI /URI (#1) >>}%
    \endgroup}
  \def\pdfgettoks#1.{\setbox\boxA=\hbox{\toksA={#1.}\toksB={}\maketoks}}
  \def\addtokens#1#2{\edef\addtoks{\noexpand#1={\the#1#2}}\addtoks}
  \def\adn#1{\addtokens{\toksC}{#1}\global\countA=1\let\next=\maketoks}
  \def\poptoks#1#2|ENDTOKS|{\let\first=#1\toksD={#1}\toksA={#2}}
  \def\maketoks{%
    \expandafter\poptoks\the\toksA|ENDTOKS|\relax
    \ifx\first0\adn0
    \else\ifx\first1\adn1
    \else\ifx\first2\adn2
    \else\ifx\first3\adn3
    \else\ifx\first4\adn4
    \else\ifx\first5\adn5
    \else\ifx\first6\adn6
    \else\ifx\first7\adn7
    \else\ifx\first8\adn8
    \else\ifx\first9\adn9
    \else
      \ifnum0=\countA\else\makelink\fi
      \ifx\first.\let\next=\done\else
        \let\next=\maketoks
        \addtokens{\toksB}{\the\toksD}
        \ifx\first,\addtokens{\toksB}{\space}\fi
      \fi
    \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi
    \next}
  \def\makelink{\addtokens{\toksB}%
    {\noexpand\pdflink{\the\toksC}}\toksC={}\global\countA=0}
  \def\pdflink#1{%
    \startlink attr{/Border [0 0 0]} goto name{\pdfmkpgn{#1}}
    \linkcolor #1\endlink}
  \def\done{\edef\st{\global\noexpand\toksA={\the\toksB}}\st}
\else
  \let\pdfmkdest = \gobble
  \let\pdfurl = \gobble
  \let\endlink = \relax
  \let\linkcolor = \relax
  \let\pdfmakeoutlines = \relax
\fi  % \ifx\pdfoutput


\message{fonts,}

% Change the current font style to #1, remembering it in \curfontstyle.
% For now, we do not accumulate font styles: @b{@i{foo}} prints foo in
% italics, not bold italics.
%
\def\setfontstyle#1{%
  \def\curfontstyle{#1}% not as a control sequence, because we are \edef'd.
  \csname ten#1\endcsname  % change the current font
}

% Select #1 fonts with the current style.
%
\def\selectfonts#1{\csname #1fonts\endcsname \csname\curfontstyle\endcsname}

\def\rm{\fam=0 \setfontstyle{rm}}
\def\it{\fam=\itfam \setfontstyle{it}}
\def\sl{\fam=\slfam \setfontstyle{sl}}
\def\bf{\fam=\bffam \setfontstyle{bf}}
\def\tt{\fam=\ttfam \setfontstyle{tt}}

% Texinfo sort of supports the sans serif font style, which plain TeX does not.
% So we set up a \sf.
\newfam\sffam
\def\sf{\fam=\sffam \setfontstyle{sf}}
\let\li = \sf % Sometimes we call it \li, not \sf.

% We don't need math for this font style.
\def\ttsl{\setfontstyle{ttsl}}

% Default leading.
\newdimen\textleading  \textleading = 13.2pt

% Set the baselineskip to #1, and the lineskip and strut size
% correspondingly.  There is no deep meaning behind these magic numbers
% used as factors; they just match (closely enough) what Knuth defined.
%
\def\lineskipfactor{.08333}
\def\strutheightpercent{.70833}
\def\strutdepthpercent {.29167}
%
\def\setleading#1{%
  \normalbaselineskip = #1\relax
  \normallineskip = \lineskipfactor\normalbaselineskip
  \normalbaselines
  \setbox\strutbox =\hbox{%
    \vrule width0pt height\strutheightpercent\baselineskip
                    depth \strutdepthpercent \baselineskip
  }%
}

% Set the font macro #1 to the font named #2, adding on the
% specified font prefix (normally `cm').
% #3 is the font's design size, #4 is a scale factor
\def\setfont#1#2#3#4{\font#1=\fontprefix#2#3 scaled #4}

% Use cm as the default font prefix.
% To specify the font prefix, you must define \fontprefix
% before you read in texinfo.tex.
\ifx\fontprefix\undefined
\def\fontprefix{cm}
\fi
% Support font families that don't use the same naming scheme as CM.
\def\rmshape{r}
\def\rmbshape{bx}               %where the normal face is bold
\def\bfshape{b}
\def\bxshape{bx}
\def\ttshape{tt}
\def\ttbshape{tt}
\def\ttslshape{sltt}
\def\itshape{ti}
\def\itbshape{bxti}
\def\slshape{sl}
\def\slbshape{bxsl}
\def\sfshape{ss}
\def\sfbshape{ss}
\def\scshape{csc}
\def\scbshape{csc}

% Text fonts (11.2pt, magstep1).
\newcount\mainmagstep
\ifx\bigger\relax % not really supported.
  \mainmagstep=\magstep1
  \setfont\textrm\rmshape{12}{1000}
  \setfont\texttt\ttshape{12}{1000}
\else
  \mainmagstep=\magstephalf
  \setfont\textrm\rmshape{10}{\mainmagstep}
  \setfont\texttt\ttshape{10}{\mainmagstep}
\fi
\setfont\textbf\bfshape{10}{\mainmagstep}
\setfont\textit\itshape{10}{\mainmagstep}
\setfont\textsl\slshape{10}{\mainmagstep}
\setfont\textsf\sfshape{10}{\mainmagstep}
\setfont\textsc\scshape{10}{\mainmagstep}
\setfont\textttsl\ttslshape{10}{\mainmagstep}
\font\texti=cmmi10 scaled \mainmagstep
\font\textsy=cmsy10 scaled \mainmagstep

% A few fonts for @defun names and args.
\setfont\defbf\bfshape{10}{\magstep1}
\setfont\deftt\ttshape{10}{\magstep1}
\setfont\defttsl\ttslshape{10}{\magstep1}
\def\df{\let\tentt=\deftt \let\tenbf = \defbf \let\tenttsl=\defttsl \bf}

% Fonts for indices, footnotes, small examples (9pt).
\setfont\smallrm\rmshape{9}{1000}
\setfont\smalltt\ttshape{9}{1000}
\setfont\smallbf\bfshape{10}{900}
\setfont\smallit\itshape{9}{1000}
\setfont\smallsl\slshape{9}{1000}
\setfont\smallsf\sfshape{9}{1000}
\setfont\smallsc\scshape{10}{900}
\setfont\smallttsl\ttslshape{10}{900}
\font\smalli=cmmi9
\font\smallsy=cmsy9

% Fonts for small examples (8pt).
\setfont\smallerrm\rmshape{8}{1000}
\setfont\smallertt\ttshape{8}{1000}
\setfont\smallerbf\bfshape{10}{800}
\setfont\smallerit\itshape{8}{1000}
\setfont\smallersl\slshape{8}{1000}
\setfont\smallersf\sfshape{8}{1000}
\setfont\smallersc\scshape{10}{800}
\setfont\smallerttsl\ttslshape{10}{800}
\font\smalleri=cmmi8
\font\smallersy=cmsy8

% Fonts for title page (20.4pt):
\setfont\titlerm\rmbshape{12}{\magstep3}
\setfont\titleit\itbshape{10}{\magstep4}
\setfont\titlesl\slbshape{10}{\magstep4}
\setfont\titlett\ttbshape{12}{\magstep3}
\setfont\titlettsl\ttslshape{10}{\magstep4}
\setfont\titlesf\sfbshape{17}{\magstep1}
\let\titlebf=\titlerm
\setfont\titlesc\scbshape{10}{\magstep4}
\font\titlei=cmmi12 scaled \magstep3
\font\titlesy=cmsy10 scaled \magstep4
\def\authorrm{\secrm}
\def\authortt{\sectt}

% Chapter (and unnumbered) fonts (17.28pt).
\setfont\chaprm\rmbshape{12}{\magstep2}
\setfont\chapit\itbshape{10}{\magstep3}
\setfont\chapsl\slbshape{10}{\magstep3}
\setfont\chaptt\ttbshape{12}{\magstep2}
\setfont\chapttsl\ttslshape{10}{\magstep3}
\setfont\chapsf\sfbshape{17}{1000}
\let\chapbf=\chaprm
\setfont\chapsc\scbshape{10}{\magstep3}
\font\chapi=cmmi12 scaled \magstep2
\font\chapsy=cmsy10 scaled \magstep3

% Section fonts (14.4pt).
\setfont\secrm\rmbshape{12}{\magstep1}
\setfont\secit\itbshape{10}{\magstep2}
\setfont\secsl\slbshape{10}{\magstep2}
\setfont\sectt\ttbshape{12}{\magstep1}
\setfont\secttsl\ttslshape{10}{\magstep2}
\setfont\secsf\sfbshape{12}{\magstep1}
\let\secbf\secrm
\setfont\secsc\scbshape{10}{\magstep2}
\font\seci=cmmi12 scaled \magstep1
\font\secsy=cmsy10 scaled \magstep2

% Subsection fonts (13.15pt).
\setfont\ssecrm\rmbshape{12}{\magstephalf}
\setfont\ssecit\itbshape{10}{1315}
\setfont\ssecsl\slbshape{10}{1315}
\setfont\ssectt\ttbshape{12}{\magstephalf}
\setfont\ssecttsl\ttslshape{10}{1315}
\setfont\ssecsf\sfbshape{12}{\magstephalf}
\let\ssecbf\ssecrm
\setfont\ssecsc\scbshape{10}{1315}
\font\sseci=cmmi12 scaled \magstephalf
\font\ssecsy=cmsy10 scaled 1315

% Reduced fonts for @acro in text (10pt).
\setfont\reducedrm\rmshape{10}{1000}
\setfont\reducedtt\ttshape{10}{1000}
\setfont\reducedbf\bfshape{10}{1000}
\setfont\reducedit\itshape{10}{1000}
\setfont\reducedsl\slshape{10}{1000}
\setfont\reducedsf\sfshape{10}{1000}
\setfont\reducedsc\scshape{10}{1000}
\setfont\reducedttsl\ttslshape{10}{1000}
\font\reducedi=cmmi10
\font\reducedsy=cmsy10

% In order for the font changes to affect most math symbols and letters,
% we have to define the \textfont of the standard families.  Since
% texinfo doesn't allow for producing subscripts and superscripts except
% in the main text, we don't bother to reset \scriptfont and
% \scriptscriptfont (which would also require loading a lot more fonts).
%
\def\resetmathfonts{%
  \textfont0=\tenrm \textfont1=\teni \textfont2=\tensy
  \textfont\itfam=\tenit \textfont\slfam=\tensl \textfont\bffam=\tenbf
  \textfont\ttfam=\tentt \textfont\sffam=\tensf
}

% The font-changing commands redefine the meanings of \tenSTYLE, instead
% of just \STYLE.  We do this because \STYLE needs to also set the
% current \fam for math mode.  Our \STYLE (e.g., \rm) commands hardwire
% \tenSTYLE to set the current font.
%
% Each font-changing command also sets the names \lsize (one size lower)
% and \lllsize (three sizes lower).  These relative commands are used in
% the LaTeX logo and acronyms.
%
% This all needs generalizing, badly.
%
\def\textfonts{%
  \let\tenrm=\textrm \let\tenit=\textit \let\tensl=\textsl
  \let\tenbf=\textbf \let\tentt=\texttt \let\smallcaps=\textsc
  \let\tensf=\textsf \let\teni=\texti \let\tensy=\textsy
  \let\tenttsl=\textttsl
  \def\lsize{reduced}\def\lllsize{smaller}%
  \resetmathfonts \setleading{\textleading}}
\def\titlefonts{%
  \let\tenrm=\titlerm \let\tenit=\titleit \let\tensl=\titlesl
  \let\tenbf=\titlebf \let\tentt=\titlett \let\smallcaps=\titlesc
  \let\tensf=\titlesf \let\teni=\titlei \let\tensy=\titlesy
  \let\tenttsl=\titlettsl
  \def\lsize{chap}\def\lllsize{subsec}%
  \resetmathfonts \setleading{25pt}}
\def\titlefont#1{{\titlefonts\rm #1}}
\def\chapfonts{%
  \let\tenrm=\chaprm \let\tenit=\chapit \let\tensl=\chapsl
  \let\tenbf=\chapbf \let\tentt=\chaptt \let\smallcaps=\chapsc
  \let\tensf=\chapsf \let\teni=\chapi \let\tensy=\chapsy
  \let\tenttsl=\chapttsl
  \def\lsize{sec}\def\lllsize{text}%
  \resetmathfonts \setleading{19pt}}
\def\secfonts{%
  \let\tenrm=\secrm \let\tenit=\secit \let\tensl=\secsl
  \let\tenbf=\secbf \let\tentt=\sectt \let\smallcaps=\secsc
  \let\tensf=\secsf \let\teni=\seci \let\tensy=\secsy
  \let\tenttsl=\secttsl
  \def\lsize{subsec}\def\lllsize{reduced}%
  \resetmathfonts \setleading{16pt}}
\def\subsecfonts{%
  \let\tenrm=\ssecrm \let\tenit=\ssecit \let\tensl=\ssecsl
  \let\tenbf=\ssecbf \let\tentt=\ssectt \let\smallcaps=\ssecsc
  \let\tensf=\ssecsf \let\teni=\sseci \let\tensy=\ssecsy
  \let\tenttsl=\ssecttsl
  \def\lsize{text}\def\lllsize{small}%
  \resetmathfonts \setleading{15pt}}
\let\subsubsecfonts = \subsecfonts
\def\reducedfonts{%
  \let\tenrm=\reducedrm \let\tenit=\reducedit \let\tensl=\reducedsl
  \let\tenbf=\reducedbf \let\tentt=\reducedtt \let\reducedcaps=\reducedsc
  \let\tensf=\reducedsf \let\teni=\reducedi \let\tensy=\reducedsy
  \let\tenttsl=\reducedttsl
  \def\lsize{small}\def\lllsize{smaller}%
  \resetmathfonts \setleading{10.5pt}}
\def\smallfonts{%
  \let\tenrm=\smallrm \let\tenit=\smallit \let\tensl=\smallsl
  \let\tenbf=\smallbf \let\tentt=\smalltt \let\smallcaps=\smallsc
  \let\tensf=\smallsf \let\teni=\smalli \let\tensy=\smallsy
  \let\tenttsl=\smallttsl
  \def\lsize{smaller}\def\lllsize{smaller}%
  \resetmathfonts \setleading{10.5pt}}
\def\smallerfonts{%
  \let\tenrm=\smallerrm \let\tenit=\smallerit \let\tensl=\smallersl
  \let\tenbf=\smallerbf \let\tentt=\smallertt \let\smallcaps=\smallersc
  \let\tensf=\smallersf \let\teni=\smalleri \let\tensy=\smallersy
  \let\tenttsl=\smallerttsl
  \def\lsize{smaller}\def\lllsize{smaller}%
  \resetmathfonts \setleading{9.5pt}}

% Set the fonts to use with the @small... environments.
\let\smallexamplefonts = \smallfonts

% About \smallexamplefonts.  If we use \smallfonts (9pt), @smallexample
% can fit this many characters:
%   8.5x11=86   smallbook=72  a4=90  a5=69
% If we use \scriptfonts (8pt), then we can fit this many characters:
%   8.5x11=90+  smallbook=80  a4=90+  a5=77
% For me, subjectively, the few extra characters that fit aren't worth
% the additional smallness of 8pt.  So I'm making the default 9pt.
%
% By the way, for comparison, here's what fits with @example (10pt):
%   8.5x11=71  smallbook=60  a4=75  a5=58
%
% I wish the USA used A4 paper.
% --karl, 24jan03.

% Set up the default fonts, so we can use them for creating boxes.
%
\textfonts \rm

% Define these so they can be easily changed for other fonts.
\def\angleleft{$\langle$}
\def\angleright{$\rangle$}

% Count depth in font-changes, for error checks
\newcount\fontdepth \fontdepth=0

% Fonts for short table of contents.
\setfont\shortcontrm\rmshape{12}{1000}
\setfont\shortcontbf\bfshape{10}{\magstep1}  % no cmb12
\setfont\shortcontsl\slshape{12}{1000}
\setfont\shortconttt\ttshape{12}{1000}

%% Add scribe-like font environments, plus @l for inline lisp (usually sans
%% serif) and @ii for TeX italic

% \smartitalic{ARG} outputs arg in italics, followed by an italic correction
% unless the following character is such as not to need one.
\def\smartitalicx{\ifx\next,\else\ifx\next-\else\ifx\next.\else
  \ptexslash\fi\fi\fi}
\def\smartslanted#1{{\ifusingtt\ttsl\sl #1}\futurelet\next\smartitalicx}
\def\smartitalic#1{{\ifusingtt\ttsl\it #1}\futurelet\next\smartitalicx}

% like \smartslanted except unconditionally uses \ttsl.
% @var is set to this for defun arguments.
\def\ttslanted#1{{\ttsl #1}\futurelet\next\smartitalicx}

% like \smartslanted except unconditionally use \sl.  We never want
% ttsl for book titles, do we?
\def\cite#1{{\sl #1}\futurelet\next\smartitalicx}

\let\i=\smartitalic
\let\var=\smartslanted
\let\dfn=\smartslanted
\let\emph=\smartitalic

\def\b#1{{\bf #1}}
\let\strong=\b

% We can't just use \exhyphenpenalty, because that only has effect at
% the end of a paragraph.  Restore normal hyphenation at the end of the
% group within which \nohyphenation is presumably called.
%
\def\nohyphenation{\hyphenchar\font = -1  \aftergroup\restorehyphenation}
\def\restorehyphenation{\hyphenchar\font = `- }

% Set sfcode to normal for the chars that usually have another value.
% Can't use plain's \frenchspacing because it uses the `\x notation, and
% sometimes \x has an active definition that messes things up.
%
\catcode`@=11
  \def\frenchspacing{%
    \sfcode\dotChar  =\@m \sfcode\questChar=\@m \sfcode\exclamChar=\@m
    \sfcode\colonChar=\@m \sfcode\semiChar =\@m \sfcode\commaChar =\@m
  }
\catcode`@=\other

\def\t#1{%
  {\tt \rawbackslash \frenchspacing #1}%
  \null
}
\def\samp#1{`\tclose{#1}'\null}
\setfont\keyrm\rmshape{8}{1000}
\font\keysy=cmsy9
\def\key#1{{\keyrm\textfont2=\keysy \leavevmode\hbox{%
  \raise0.4pt\hbox{\angleleft}\kern-.08em\vtop{%
    \vbox{\hrule\kern-0.4pt
      \hbox{\raise0.4pt\hbox{\vphantom{\angleleft}}#1}}%
    \kern-0.4pt\hrule}%
  \kern-.06em\raise0.4pt\hbox{\angleright}}}}
% The old definition, with no lozenge:
%\def\key #1{{\ttsl \nohyphenation \uppercase{#1}}\null}
\def\ctrl #1{{\tt \rawbackslash \hat}#1}

% @file, @option are the same as @samp.
\let\file=\samp
\let\option=\samp

% @code is a modification of @t,
% which makes spaces the same size as normal in the surrounding text.
\def\tclose#1{%
  {%
    % Change normal interword space to be same as for the current font.
    \spaceskip = \fontdimen2\font
    %
    % Switch to typewriter.
    \tt
    %
    % But `\ ' produces the large typewriter interword space.
    \def\ {{\spaceskip = 0pt{} }}%
    %
    % Turn off hyphenation.
    \nohyphenation
    %
    \rawbackslash
    \frenchspacing
    #1%
  }%
  \null
}

% We *must* turn on hyphenation at `-' and `_' in @code.
% Otherwise, it is too hard to avoid overfull hboxes
% in the Emacs manual, the Library manual, etc.
%
% Unfortunately, TeX uses one parameter (\hyphenchar) to control
% both hyphenation at - and hyphenation within words.
% We must therefore turn them both off (\tclose does that)
% and arrange explicitly to hyphenate at a dash.
%  -- rms.
{
  \catcode`\-=\active
  \catcode`\_=\active
  %
  \global\def\code{\begingroup
    \catcode`\-=\active \let-\codedash
    \catcode`\_=\active \let_\codeunder
    \codex
  }
}

\def\realdash{-}
\def\codedash{-\discretionary{}{}{}}
\def\codeunder{%
  % this is all so @math{@code{var_name}+1} can work.  In math mode, _
  % is "active" (mathcode"8000) and \normalunderscore (or \char95, etc.)
  % will therefore expand the active definition of _, which is us
  % (inside @code that is), therefore an endless loop.
  \ifusingtt{\ifmmode
               \mathchar"075F % class 0=ordinary, family 7=ttfam, pos 0x5F=_.
             \else\normalunderscore \fi
             \discretionary{}{}{}}%
            {\_}%
}
\def\codex #1{\tclose{#1}\endgroup}

% @kbd is like @code, except that if the argument is just one @key command,
% then @kbd has no effect.

% @kbdinputstyle -- arg is `distinct' (@kbd uses slanted tty font always),
%   `example' (@kbd uses ttsl only inside of @example and friends),
%   or `code' (@kbd uses normal tty font always).
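% Usage sketch (illustrative, not part of texinfo.tex proper): a
% document can choose its @kbd font with, e.g.,
%
%   @kbdinputstyle example
%
% after which @kbd{M-x shell} is set in slanted typewriter only inside
% @example and friends, and in the normal typewriter font elsewhere.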
\parseargdef\kbdinputstyle{%
  \def\arg{#1}%
  \ifx\arg\worddistinct
    \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl}%
  \else\ifx\arg\wordexample
    \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\tt}%
  \else\ifx\arg\wordcode
    \gdef\kbdexamplefont{\tt}\gdef\kbdfont{\tt}%
  \else
    \errhelp = \EMsimple
    \errmessage{Unknown @kbdinputstyle option `\arg'}%
  \fi\fi\fi
}
\def\worddistinct{distinct}
\def\wordexample{example}
\def\wordcode{code}

% Default is `distinct.'
\kbdinputstyle distinct

\def\xkey{\key}
\def\kbdfoo#1#2#3\par{\def\one{#1}\def\three{#3}\def\threex{??}%
\ifx\one\xkey\ifx\threex\three \key{#2}%
\else{\tclose{\kbdfont\look}}\fi
\else{\tclose{\kbdfont\look}}\fi}

% For @indicateurl, @env, @command quotes seem unnecessary, so use \code.
\let\indicateurl=\code
\let\env=\code
\let\command=\code

% @uref (abbreviation for `urlref') takes an optional (comma-separated)
% second argument specifying the text to display and an optional third
% arg as text to display instead of (rather than in addition to) the url
% itself.  First (mandatory) arg is the url.  Perhaps eventually put in
% a hypertex \special here.
%
\def\uref#1{\douref #1,,,\finish}
\def\douref#1,#2,#3,#4\finish{\begingroup
  \unsepspaces
  \pdfurl{#1}%
  \setbox0 = \hbox{\ignorespaces #3}%
  \ifdim\wd0 > 0pt
    \unhbox0 % third arg given, show only that
  \else
    \setbox0 = \hbox{\ignorespaces #2}%
    \ifdim\wd0 > 0pt
      \ifpdf
        \unhbox0 % PDF: 2nd arg given, show only it
      \else
        \unhbox0\ (\code{#1})% DVI: 2nd arg given, show both it and url
      \fi
    \else
      \code{#1}% only url given, so show it
    \fi
  \fi
  \endlink
\endgroup}

% @url synonym for @uref, since that's how everyone uses it.
%
\let\url=\uref

% rms does not like angle brackets --karl, 17may97.
% So now @email is just like @uref, unless we are pdf.
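% Usage sketch (illustrative, not part of texinfo.tex proper): the
% three @uref argument forms are
%
%   @uref{http://www.gnu.org/}
%   @uref{http://www.gnu.org/, the GNU website}
%   @uref{http://www.gnu.org/, , GNU}
%
% The one-argument form shows the url itself; a second argument is
% shown alongside the url in DVI output and instead of it in PDF; a
% third argument replaces the url entirely.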
%
%\def\email#1{\angleleft{\tt #1}\angleright}
\ifpdf
  \def\email#1{\doemail#1,,\finish}
  \def\doemail#1,#2,#3\finish{\begingroup
    \unsepspaces
    \pdfurl{mailto:#1}%
    \setbox0 = \hbox{\ignorespaces #2}%
    \ifdim\wd0>0pt\unhbox0\else\code{#1}\fi
    \endlink
  \endgroup}
\else
  \let\email=\uref
\fi

% Check if we are currently using a typewriter font.  Since all the
% Computer Modern typewriter fonts have zero interword stretch (and
% shrink), and it is reasonable to expect all typewriter fonts to have
% this property, we can check that font parameter.
%
\def\ifmonospace{\ifdim\fontdimen3\font=0pt }

% Typeset a dimension, e.g., `in' or `pt'.  The only reason for the
% argument is to make the input look right: @dmn{pt} instead of @dmn{}pt.
%
\def\dmn#1{\thinspace #1}

\def\kbd#1{\def\look{#1}\expandafter\kbdfoo\look??\par}

% @l was never documented to mean ``switch to the Lisp font'',
% and it is not used as such in any manual I can find.  We need it for
% Polish suppressed-l.  --karl, 22sep96.
%\def\l#1{{\li #1}\null}

% Explicit font changes: @r, @sc, undocumented @ii.
\def\r#1{{\rm #1}}              % roman font
\def\sc#1{{\smallcaps#1}}       % smallcaps font
\def\ii#1{{\it #1}}             % italic font

\def\acronym#1{\doacronym #1,,\finish}
\def\doacronym#1,#2,#3\finish{%
  {\selectfonts\lsize #1}%
  \def\temp{#2}%
  \ifx\temp\empty \else
    \space ({\unsepspaces \ignorespaces \temp \unskip})%
  \fi
}

% @pounds{} is a sterling sign, which is in the CM italic font.
%
\def\pounds{{\it\$}}

% @registeredsymbol - R in a circle.  The font for the R should really
% be smaller yet, but lllsize is the best we can do for now.
% Adapted from the plain.tex definition of \copyright.
%
\def\registeredsymbol{%
  $^{{\ooalign{\hfil\raise.07ex\hbox{\selectfonts\lllsize R}%
               \hfil\crcr\Orb}}%
  }$%
}


\message{page headings,}

\newskip\titlepagetopglue \titlepagetopglue = 1.5in
\newskip\titlepagebottomglue \titlepagebottomglue = 2pc

% First the title page.  Must do @settitle before @titlepage.
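% Usage sketch (illustrative, not part of texinfo.tex proper): a
% typical title page in a Texinfo file looks like
%
%   @titlepage
%   @title An Introduction to GCC
%   @subtitle for the GNU compilers gcc and g++
%   @author Brian J. Gough
%   @end titlepage
%
% @settitle must have been given earlier, and @author should come last
% within the block.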
\newif\ifseenauthor \newif\iffinishedtitlepage % Do an implicit @contents or @shortcontents after @end titlepage if the % user says @setcontentsaftertitlepage or @setshortcontentsaftertitlepage. % \newif\ifsetcontentsaftertitlepage \let\setcontentsaftertitlepage = \setcontentsaftertitlepagetrue \newif\ifsetshortcontentsaftertitlepage \let\setshortcontentsaftertitlepage = \setshortcontentsaftertitlepagetrue \parseargdef\shorttitlepage{\begingroup\hbox{}\vskip 1.5in \chaprm \centerline{#1}% \endgroup\page\hbox{}\page} \envdef\titlepage{% % Open one extra group, as we want to close it in the middle of \Etitlepage. \begingroup \parindent=0pt \textfonts % Leave some space at the very top of the page. \vglue\titlepagetopglue % No rule at page bottom unless we print one at the top with @title. \finishedtitlepagetrue % % Most title ``pages'' are actually two pages long, with space % at the top of the second. We don't want the ragged left on the second. \let\oldpage = \page \def\page{% \iffinishedtitlepage\else \finishtitlepage \fi \let\page = \oldpage \page \null }% } \def\Etitlepage{% \iffinishedtitlepage\else \finishtitlepage \fi % It is important to do the page break before ending the group, % because the headline and footline are only empty inside the group. % If we use the new definition of \page, we always get a blank page % after the title page, which we certainly don't want. \oldpage \endgroup % % Need this before the \...aftertitlepage checks so that if they are % in effect the toc pages will come out with page numbers. \HEADINGSon % % If they want short, they certainly want long too. 
\ifsetshortcontentsaftertitlepage \shortcontents \contents \global\let\shortcontents = \relax \global\let\contents = \relax \fi % \ifsetcontentsaftertitlepage \contents \global\let\contents = \relax \global\let\shortcontents = \relax \fi } \def\finishtitlepage{% \vskip4pt \hrule height 2pt width \hsize \vskip\titlepagebottomglue \finishedtitlepagetrue } %%% Macros to be used within @titlepage: \let\subtitlerm=\tenrm \def\subtitlefont{\subtitlerm \normalbaselineskip = 13pt \normalbaselines} \def\authorfont{\authorrm \normalbaselineskip = 16pt \normalbaselines \let\tt=\authortt} \parseargdef\title{% \checkenv\titlepage \leftline{\titlefonts\rm #1} % print a rule at the page bottom also. \finishedtitlepagefalse \vskip4pt \hrule height 4pt width \hsize \vskip4pt } \parseargdef\subtitle{% \checkenv\titlepage {\subtitlefont \rightline{#1}}% } % @author should come last, but may come many times. % It can also be used inside @quotation. % \parseargdef\author{% \def\temp{\quotation}% \ifx\thisenv\temp \def\quotationauthor{#1}% printed in \Equotation. \else \checkenv\titlepage \ifseenauthor\else \vskip 0pt plus 1filll \seenauthortrue \fi {\authorfont \leftline{#1}}% \fi } %%% Set up page headings and footings. \let\thispage=\folio \newtoks\evenheadline % headline on even pages \newtoks\oddheadline % headline on odd pages \newtoks\evenfootline % footline on even pages \newtoks\oddfootline % footline on odd pages % Now make TeX use those variables \headline={{\textfonts\rm \ifodd\pageno \the\oddheadline \else \the\evenheadline \fi}} \footline={{\textfonts\rm \ifodd\pageno \the\oddfootline \else \the\evenfootline \fi}\HEADINGShook} \let\HEADINGShook=\relax % Commands to set those variables. 
% For example, this is what @headings on does % @evenheading @thistitle|@thispage|@thischapter % @oddheading @thischapter|@thispage|@thistitle % @evenfooting @thisfile|| % @oddfooting ||@thisfile \def\evenheading{\parsearg\evenheadingxxx} \def\evenheadingxxx #1{\evenheadingyyy #1\|\|\|\|\finish} \def\evenheadingyyy #1\|#2\|#3\|#4\finish{% \global\evenheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}} \def\oddheading{\parsearg\oddheadingxxx} \def\oddheadingxxx #1{\oddheadingyyy #1\|\|\|\|\finish} \def\oddheadingyyy #1\|#2\|#3\|#4\finish{% \global\oddheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}} \parseargdef\everyheading{\oddheadingxxx{#1}\evenheadingxxx{#1}}% \def\evenfooting{\parsearg\evenfootingxxx} \def\evenfootingxxx #1{\evenfootingyyy #1\|\|\|\|\finish} \def\evenfootingyyy #1\|#2\|#3\|#4\finish{% \global\evenfootline={\rlap{\centerline{#2}}\line{#1\hfil#3}}} \def\oddfooting{\parsearg\oddfootingxxx} \def\oddfootingxxx #1{\oddfootingyyy #1\|\|\|\|\finish} \def\oddfootingyyy #1\|#2\|#3\|#4\finish{% \global\oddfootline = {\rlap{\centerline{#2}}\line{#1\hfil#3}}% % % Leave some space for the footline. Hopefully ok to assume % @evenfooting will not be used by itself. \global\advance\pageheight by -\baselineskip \global\advance\vsize by -\baselineskip } \parseargdef\everyfooting{\oddfootingxxx{#1}\evenfootingxxx{#1}} % @headings double turns headings on for double-sided printing. % @headings single turns headings on for single-sided printing. % @headings off turns them off. % @headings on same as @headings double, retained for compatibility. % @headings after turns on double-sided headings after this page. % @headings doubleafter turns on double-sided headings after this page. % @headings singleafter turns on single-sided headings after this page. % By default, they are off at the start of a document, % and turned `on' after @end titlepage. 
\def\headings #1 {\csname HEADINGS#1\endcsname} \def\HEADINGSoff{% \global\evenheadline={\hfil} \global\evenfootline={\hfil} \global\oddheadline={\hfil} \global\oddfootline={\hfil}} \HEADINGSoff % When we turn headings on, set the page number to 1. % For double-sided printing, put current file name in lower left corner, % chapter name on inside top of right hand pages, document % title on inside top of left hand pages, and page numbers on outside top % edge of all pages. \def\HEADINGSdouble{% \global\pageno=1 \global\evenfootline={\hfil} \global\oddfootline={\hfil} \global\evenheadline={\line{\folio\hfil\thistitle}} \global\oddheadline={\line{\thischapter\hfil\folio}} \global\let\contentsalignmacro = \chapoddpage } \let\contentsalignmacro = \chappager % For single-sided printing, chapter title goes across top left of page, % page number on top right. \def\HEADINGSsingle{% \global\pageno=1 \global\evenfootline={\hfil} \global\oddfootline={\hfil} \global\evenheadline={\line{\thischapter\hfil\folio}} \global\oddheadline={\line{\thischapter\hfil\folio}} \global\let\contentsalignmacro = \chappager } \def\HEADINGSon{\HEADINGSdouble} \def\HEADINGSafter{\let\HEADINGShook=\HEADINGSdoublex} \let\HEADINGSdoubleafter=\HEADINGSafter \def\HEADINGSdoublex{% \global\evenfootline={\hfil} \global\oddfootline={\hfil} \global\evenheadline={\line{\folio\hfil\thistitle}} \global\oddheadline={\line{\thischapter\hfil\folio}} \global\let\contentsalignmacro = \chapoddpage } \def\HEADINGSsingleafter{\let\HEADINGShook=\HEADINGSsinglex} \def\HEADINGSsinglex{% \global\evenfootline={\hfil} \global\oddfootline={\hfil} \global\evenheadline={\line{\thischapter\hfil\folio}} \global\oddheadline={\line{\thischapter\hfil\folio}} \global\let\contentsalignmacro = \chappager } % Subroutines used in generating headings % This produces Day Month Year style of output. % Only define if not already defined, in case a txi-??.tex file has set % up a different format (e.g., txi-cs.tex does this). 
\ifx\today\undefined \def\today{% \number\day\space \ifcase\month \or\putwordMJan\or\putwordMFeb\or\putwordMMar\or\putwordMApr \or\putwordMMay\or\putwordMJun\or\putwordMJul\or\putwordMAug \or\putwordMSep\or\putwordMOct\or\putwordMNov\or\putwordMDec \fi \space\number\year} \fi % @settitle line... specifies the title of the document, for headings. % It generates no output of its own. \def\thistitle{\putwordNoTitle} \def\settitle{\parsearg{\gdef\thistitle}} \message{tables,} % Tables -- @table, @ftable, @vtable, @item(x). % default indentation of table text \newdimen\tableindent \tableindent=.8in % default indentation of @itemize and @enumerate text \newdimen\itemindent \itemindent=.3in % margin between end of table item and start of table text. \newdimen\itemmargin \itemmargin=.1in % used internally for \itemindent minus \itemmargin \newdimen\itemmax % Note @table, @ftable, and @vtable define @item, @itemx, etc., with % these defs. % They also define \itemindex % to index the item name in whatever manner is desired (perhaps none). \newif\ifitemxneedsnegativevskip \def\itemxpar{\par\ifitemxneedsnegativevskip\nobreak\vskip-\parskip\nobreak\fi} \def\internalBitem{\smallbreak \parsearg\itemzzz} \def\internalBitemx{\itemxpar \parsearg\itemzzz} \def\itemzzz #1{\begingroup % \advance\hsize by -\rightskip \advance\hsize by -\tableindent \setbox0=\hbox{\itemindicate{#1}}% \itemindex{#1}% \nobreak % This prevents a break before @itemx. % % If the item text does not fit in the space we have, put it on a line % by itself, and do not allow a page break either before or after that % line. We do not start a paragraph here because then if the next % command is, e.g., @kindex, the whatsit would get put into the % horizontal list on a line by itself, resulting in extra blank space. \ifdim \wd0>\itemmax % % Make this a paragraph so we get the \parskip glue and wrapping, % but leave it ragged-right. 
\begingroup \advance\leftskip by-\tableindent \advance\hsize by\tableindent \advance\rightskip by0pt plus1fil \leavevmode\unhbox0\par \endgroup % % We're going to be starting a paragraph, but we don't want the % \parskip glue -- logically it's part of the @item we just started. \nobreak \vskip-\parskip % % Stop a page break at the \parskip glue coming up. (Unfortunately % we can't prevent a possible page break at the following % \baselineskip glue.) However, if what follows is an environment % such as @example, there will be no \parskip glue; then % the negative vskip we just inserted would cause the example and the item to % crash together. So we use this bizarre value of 10001 as a signal % to \aboveenvbreak to insert \parskip glue after all. % (Possibly there are other commands that could be followed by % @example which need the same treatment, but not section titles; or % maybe section titles are the only special case and they should be % penalty 10001...) \penalty 10001 \endgroup \itemxneedsnegativevskipfalse \else % The item text fits into the space. Start a paragraph, so that the % following text (if any) will end up on the same line. \noindent % Do this with kerns and \unhbox so that if there is a footnote in % the item text, it can migrate to the main vertical list and % eventually be printed. \nobreak\kern-\tableindent \dimen0 = \itemmax \advance\dimen0 by \itemmargin \advance\dimen0 by -\wd0 \unhbox0 \nobreak\kern\dimen0 \endgroup \itemxneedsnegativevskiptrue \fi } \def\item{\errmessage{@item while not in a list environment}} \def\itemx{\errmessage{@itemx while not in a list environment}} % @table, @ftable, @vtable. 
\envdef\table{% \let\itemindex\gobble \tablex } \envdef\ftable{% \def\itemindex ##1{\doind {fn}{\code{##1}}}% \tablex } \envdef\vtable{% \def\itemindex ##1{\doind {vr}{\code{##1}}}% \tablex } \def\tablex#1{% \def\itemindicate{#1}% \parsearg\tabley } \def\tabley#1{% {% \makevalueexpandable \edef\temp{\noexpand\tablez #1\space\space\space}% \expandafter }\temp \endtablez } \def\tablez #1 #2 #3 #4\endtablez{% \aboveenvbreak \ifnum 0#1>0 \advance \leftskip by #1\mil \fi \ifnum 0#2>0 \tableindent=#2\mil \fi \ifnum 0#3>0 \advance \rightskip by #3\mil \fi \itemmax=\tableindent \advance \itemmax by -\itemmargin \advance \leftskip by \tableindent \exdentamount=\tableindent \parindent = 0pt \parskip = \smallskipamount \ifdim \parskip=0pt \parskip=2pt \fi \let\item = \internalBitem \let\itemx = \internalBitemx } \def\Etable{\endgraf\afterenvbreak} \let\Eftable\Etable \let\Evtable\Etable \let\Eitemize\Etable \let\Eenumerate\Etable % This is the counter used by @enumerate, which is really @itemize \newcount \itemno \envdef\itemize{\parsearg\doitemize} \def\doitemize#1{% \aboveenvbreak \itemmax=\itemindent \advance\itemmax by -\itemmargin \advance\leftskip by \itemindent \exdentamount=\itemindent \parindent=0pt \parskip=\smallskipamount \ifdim\parskip=0pt \parskip=2pt \fi \def\itemcontents{#1}% % @itemize with no arg is equivalent to @itemize @bullet. \ifx\itemcontents\empty\def\itemcontents{\bullet}\fi \let\item=\itemizeitem } % Definition of @item while inside @itemize and @enumerate. % \def\itemizeitem{% \advance\itemno by 1 % for enumerations {\let\par=\endgraf \smallbreak}% reasonable place to break {% % If the document has an @itemize directly after a section title, a % \nobreak will be last on the list, and \sectionheading will have % done a \vskip-\parskip. In that case, we don't want to zero % parskip, or the item text will crash with the heading. 
On the % other hand, when there is normal text preceding the item (as there % usually is), we do want to zero parskip, or there would be too much % space. In that case, we won't have a \nobreak before. At least % that's the theory. \ifnum\lastpenalty<10000 \parskip=0in \fi \noindent \hbox to 0pt{\hss \itemcontents \kern\itemmargin}% \vadjust{\penalty 1200}}% not good to break after first line of item. \flushcr } % \splitoff TOKENS\endmark defines \first to be the first token in % TOKENS, and \rest to be the remainder. % \def\splitoff#1#2\endmark{\def\first{#1}\def\rest{#2}}% % Allow an optional argument of an uppercase letter, lowercase letter, % or number, to specify the first label in the enumerated list. No % argument is the same as `1'. % \envparseargdef\enumerate{\enumeratey #1 \endenumeratey} \def\enumeratey #1 #2\endenumeratey{% % If we were given no argument, pretend we were given `1'. \def\thearg{#1}% \ifx\thearg\empty \def\thearg{1}\fi % % Detect if the argument is a single token. If so, it might be a % letter. Otherwise, the only valid thing it can be is a number. % (We will always have one token, because of the test we just made. % This is a good thing, since \splitoff doesn't work given nothing at % all -- the first parameter is undelimited.) \expandafter\splitoff\thearg\endmark \ifx\rest\empty % Only one token in the argument. It could still be anything. % A ``lowercase letter'' is one whose \lccode is nonzero. % An ``uppercase letter'' is one whose \lccode is both nonzero, and % not equal to itself. % Otherwise, we assume it's a number. % % We need the \relax at the end of the \ifnum lines to stop TeX from % continuing to look for a number. % \ifnum\lccode\expandafter`\thearg=0\relax \numericenumerate % a number (we hope) \else % It's a letter. \ifnum\lccode\expandafter`\thearg=\expandafter`\thearg\relax \lowercaseenumerate % lowercase letter \else \uppercaseenumerate % uppercase letter \fi \fi \else % Multiple tokens in the argument. 
We hope it's a number. \numericenumerate \fi } % An @enumerate whose labels are integers. The starting integer is % given in \thearg. % \def\numericenumerate{% \itemno = \thearg \startenumeration{\the\itemno}% } % The starting (lowercase) letter is in \thearg. \def\lowercaseenumerate{% \itemno = \expandafter`\thearg \startenumeration{% % Be sure we're not beyond the end of the alphabet. \ifnum\itemno=0 \errmessage{No more lowercase letters in @enumerate; get a bigger alphabet}% \fi \char\lccode\itemno }% } % The starting (uppercase) letter is in \thearg. \def\uppercaseenumerate{% \itemno = \expandafter`\thearg \startenumeration{% % Be sure we're not beyond the end of the alphabet. \ifnum\itemno=0 \errmessage{No more uppercase letters in @enumerate; get a bigger alphabet} \fi \char\uccode\itemno }% } % Call \doitemize, adding a period to the first argument and supplying the % common last two arguments. Also subtract one from the initial value in % \itemno, since @item increments \itemno. % \def\startenumeration#1{% \advance\itemno by -1 \doitemize{#1.}\flushcr } % @alphaenumerate and @capsenumerate are abbreviations for giving an arg % to @enumerate. % \def\alphaenumerate{\enumerate{a}} \def\capsenumerate{\enumerate{A}} \def\Ealphaenumerate{\Eenumerate} \def\Ecapsenumerate{\Eenumerate} % @multitable macros % Amy Hendrickson, 8/18/94, 3/6/96 % % @multitable ... @end multitable will make as many columns as desired. % Contents of each column will wrap at width given in preamble. Width % can be specified either with sample text given in a template line, % or in percent of \hsize, the current width of text on page. % Table can continue over pages but will only break between lines. % To make preamble: % % Either define widths of columns in terms of percent of \hsize: % @multitable @columnfractions .25 .3 .45 % @item ... % % Numbers following @columnfractions are the percent of the total % current hsize to be used for each column. You may use as many % columns as desired. 
% Or use a template: % @multitable {Column 1 template} {Column 2 template} {Column 3 template} % @item ... % using the widest term desired in each column. % Each new table line starts with @item, each subsequent new column % starts with @tab. Empty columns may be produced by supplying @tab's % with nothing between them for as many times as empty columns are needed, % ie, @tab@tab@tab will produce two empty columns. % @item, @tab do not need to be on their own lines, but it will not hurt % if they are. % Sample multitable: % @multitable {Column 1 template} {Column 2 template} {Column 3 template} % @item first col stuff @tab second col stuff @tab third col % @item % first col stuff % @tab % second col stuff % @tab % third col % @item first col stuff @tab second col stuff % @tab Many paragraphs of text may be used in any column. % % They will wrap at the width determined by the template. % @item@tab@tab This will be in third column. % @end multitable % Default dimensions may be reset by user. % @multitableparskip is vertical space between paragraphs in table. % @multitableparindent is paragraph indent in table. % @multitablecolmargin is horizontal space to be left between columns. % @multitablelinespace is space to leave between table items, baseline % to baseline. % 0pt means it depends on current normal line spacing. % \newskip\multitableparskip \newskip\multitableparindent \newdimen\multitablecolspace \newskip\multitablelinespace \multitableparskip=0pt \multitableparindent=6pt \multitablecolspace=12pt \multitablelinespace=0pt % Macros used to set up halign preamble: % \let\endsetuptable\relax \def\xendsetuptable{\endsetuptable} \let\columnfractions\relax \def\xcolumnfractions{\columnfractions} \newif\ifsetpercent % #1 is the @columnfraction, usually a decimal number like .5, but might % be just 1. We just use it, whatever it is. 
% \def\pickupwholefraction#1 {% \global\advance\colcount by 1 \expandafter\xdef\csname col\the\colcount\endcsname{#1\hsize}% \setuptable } \newcount\colcount \def\setuptable#1{% \def\firstarg{#1}% \ifx\firstarg\xendsetuptable \let\go = \relax \else \ifx\firstarg\xcolumnfractions \global\setpercenttrue \else \ifsetpercent \let\go\pickupwholefraction \else \global\advance\colcount by 1 \setbox0=\hbox{#1\unskip\space}% Add a normal word space as a % separator; typically that is always in the input, anyway. \expandafter\xdef\csname col\the\colcount\endcsname{\the\wd0}% \fi \fi \ifx\go\pickupwholefraction % Put the argument back for the \pickupwholefraction call, so % we'll always have a period there to be parsed. \def\go{\pickupwholefraction#1}% \else \let\go = \setuptable \fi% \fi \go } % multitable-only commands. % % @headitem starts a heading row, which we typeset in bold. % Assignments have to be global since we are inside the implicit group % of an alignment entry. Note that \everycr resets \everytab. \def\headitem{\checkenv\multitable \crcr \global\everytab={\bf}\the\everytab}% % % A \tab used to include \hskip1sp. But then the space in a template % line is not enough. That is bad. So let's go back to just `&' until % we encounter the problem it was intended to solve again. % --karl, nathan@acm.org, 20apr99. \def\tab{\checkenv\multitable &\the\everytab}% % @multitable ... @end multitable definitions: % \newtoks\everytab % insert after every tab. % \envdef\multitable{% \vskip\parskip \startsavinginserts % % @item within a multitable starts a normal row. \let\item\crcr % \tolerance=9500 \hbadness=9500 \setmultitablespacing \parskip=\multitableparskip \parindent=\multitableparindent \overfullrule=0pt \global\colcount=0 % \everycr = {% \noalign{% \global\everytab={}% \global\colcount=0 % Reset the column counter. % Check for saved footnotes, etc. \checkinserts % Keeps underfull box messages off when table breaks over pages. 
%\filbreak % Maybe so, but it also creates really weird page breaks when the % table breaks over pages. Wouldn't \vfil be better? Wait until the % problem manifests itself, so it can be fixed for real --karl. }% }% % \parsearg\domultitable } \def\domultitable#1{% % To parse everything between @multitable and @item: \setuptable#1 \endsetuptable % % This preamble sets up a generic column definition, which will % be used as many times as user calls for columns. % \vtop will set a single line and will also let text wrap and % continue for many paragraphs if desired. \halign\bgroup &% \global\advance\colcount by 1 \multistrut \vtop{% % Use the current \colcount to find the correct column width: \hsize=\expandafter\csname col\the\colcount\endcsname % % In order to keep entries from bumping into each other % we will add a \leftskip of \multitablecolspace to all columns after % the first one. % % If a template has been used, we will add \multitablecolspace % to the width of each template entry. % % If the user has set preamble in terms of percent of \hsize we will % use that dimension as the width of the column, and the \leftskip % will keep entries from bumping into each other. Table will start at % left margin and final column will justify at right margin. % % Make sure we don't inherit \rightskip from the outer environment. \rightskip=0pt \ifnum\colcount=1 % The first column will be indented with the surrounding text. \advance\hsize by\leftskip \else \ifsetpercent \else % If user has not set preamble in terms of percent of \hsize % we will advance \hsize by \multitablecolspace. \advance\hsize by \multitablecolspace \fi % In either case we will make \leftskip=\multitablecolspace: \leftskip=\multitablecolspace \fi % Ignoring space at the beginning and end avoids an occasional spurious % blank line, when TeX decides to break the line at the space before the % box from the multistrut, so the strut ends up on a line by itself. 
% For example: % @multitable @columnfractions .11 .89 % @item @code{#} % @tab Legal holiday which is valid in major parts of the whole country. % Is automatically provided with highlighting sequences respectively % marking characters. \noindent\ignorespaces##\unskip\multistrut }\cr } \def\Emultitable{% \crcr \egroup % end the \halign \global\setpercentfalse } \def\setmultitablespacing{% test to see if user has set \multitablelinespace. % If so, do nothing. If not, give it an appropriate dimension based on % current baselineskip. \ifdim\multitablelinespace=0pt \setbox0=\vbox{X}\global\multitablelinespace=\the\baselineskip \global\advance\multitablelinespace by-\ht0 %% strut to put in table in case some entry doesn't have descenders, %% to keep lines equally spaced \let\multistrut = \strut \else %% FIXME: what is \box0 supposed to be? \gdef\multistrut{\vrule height\multitablelinespace depth\dp0 width0pt\relax} \fi %% Test to see if parskip is larger than space between lines of %% table. If not, do nothing. %% If so, set to same dimension as multitablelinespace. \ifdim\multitableparskip>\multitablelinespace \global\multitableparskip=\multitablelinespace \global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller %% than skip between lines in the table. \fi% \ifdim\multitableparskip=0pt \global\multitableparskip=\multitablelinespace \global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller %% than skip between lines in the table. \fi} \message{conditionals,} % @iftex, @ifnotdocbook, @ifnothtml, @ifnotinfo, @ifnotplaintext, % @ifnotxml always succeed. They currently do nothing; we don't % attempt to check whether the conditionals are properly nested. But we % have to remember that they are conditionals, so that @end doesn't % attempt to close an environment group. 
% \def\makecond#1{% \expandafter\let\csname #1\endcsname = \relax \expandafter\let\csname iscond.#1\endcsname = 1 } \makecond{iftex} \makecond{ifnotdocbook} \makecond{ifnothtml} \makecond{ifnotinfo} \makecond{ifnotplaintext} \makecond{ifnotxml} % Ignore @ignore, @ifhtml, @ifinfo, and the like. % \def\direntry{\doignore{direntry}} \def\documentdescription{\doignore{documentdescription}} \def\docbook{\doignore{docbook}} \def\html{\doignore{html}} \def\ifdocbook{\doignore{ifdocbook}} \def\ifhtml{\doignore{ifhtml}} \def\ifinfo{\doignore{ifinfo}} \def\ifnottex{\doignore{ifnottex}} \def\ifplaintext{\doignore{ifplaintext}} \def\ifxml{\doignore{ifxml}} \def\ignore{\doignore{ignore}} \def\menu{\doignore{menu}} \def\xml{\doignore{xml}} % Ignore text until a line `@end #1', keeping track of nested conditionals. % % A count to remember the depth of nesting. \newcount\doignorecount \def\doignore#1{\begingroup % Scan in ``verbatim'' mode: \catcode`\@ = \other \catcode`\{ = \other \catcode`\} = \other % % Make sure that spaces turn into tokens that match what \doignoretext wants. \spaceisspace % % Count number of #1's that we've seen. \doignorecount = 0 % % Swallow text until we reach the matching `@end #1'. \dodoignore {#1}% } { \catcode`_=11 % We want to use \_STOP_ which cannot appear in texinfo source. \obeylines % % \gdef\dodoignore#1{% % #1 contains the string `ifinfo'. % % Define a command to find the next `@end #1', which must be on a line % by itself. \long\def\doignoretext##1^^M@end #1{\doignoretextyyy##1^^M@#1\_STOP_}% % And this command to find another #1 command, at the beginning of a % line. (Otherwise, we would consider a line `@c @ifset', for % example, to count as an @ifset for nesting.) \long\def\doignoretextyyy##1^^M@#1##2\_STOP_{\doignoreyyy{##2}\_STOP_}% % % And now expand that command. \obeylines % \doignoretext ^^M% }% } \def\doignoreyyy#1{% \def\temp{#1}% \ifx\temp\empty % Nothing found. \let\next\doignoretextzzz \else % Found a nested condition, ... 
\advance\doignorecount by 1 \let\next\doignoretextyyy % ..., look for another. % If we're here, #1 ends with ^^M\ifinfo (for example). \fi \next #1% the token \_STOP_ is present just after this macro. } % We have to swallow the remaining "\_STOP_". % \def\doignoretextzzz#1{% \ifnum\doignorecount = 0 % We have just found the outermost @end. \let\next\enddoignore \else % Still inside a nested condition. \advance\doignorecount by -1 \let\next\doignoretext % Look for the next @end. \fi \next } % Finish off ignored text. \def\enddoignore{\endgroup\ignorespaces} % @set VAR sets the variable VAR to an empty value. % @set VAR REST-OF-LINE sets VAR to the value REST-OF-LINE. % % Since we want to separate VAR from REST-OF-LINE (which might be % empty), we can't just use \parsearg; we have to insert a space of our % own to delimit the rest of the line, and then take it out again if we % didn't need it. % We rely on the fact that \parsearg sets \catcode`\ =10. % \parseargdef\set{\setyyy#1 \endsetyyy} \def\setyyy#1 #2\endsetyyy{% {% \makevalueexpandable \def\temp{#2}% \edef\next{\gdef\makecsname{SET#1}}% \ifx\temp\empty \next{}% \else \setzzz#2\endsetzzz \fi }% } % Remove the trailing space \setxxx inserted. \def\setzzz#1 \endsetzzz{\next{#1}} % @clear VAR clears (i.e., unsets) the variable VAR. % \parseargdef\clear{% {% \makevalueexpandable \global\expandafter\let\csname SET#1\endcsname=\relax }% } % @value{foo} gets the text saved in variable foo. \def\value{\begingroup\makevalueexpandable\valuexxx} \def\valuexxx#1{\expandablevalue{#1}\endgroup} { \catcode`\- = \active \catcode`\_ = \active % \gdef\makevalueexpandable{% \let\value = \expandablevalue % We don't want these characters active, ... \catcode`\-=\other \catcode`\_=\other % ..., but we might end up with active ones in the argument if % we're called from @code, as @code{@value{foo-bar_}}, though. % So \let them to their normal equivalents. 
\let-\realdash \let_\normalunderscore } } % We have this subroutine so that we can handle at least some @value's % properly in indexes (we call \makevalueexpandable in \indexdummies). % The command has to be fully expandable (if the variable is set), since % the result winds up in the index file. This means that if the % variable's value contains other Texinfo commands, it's almost certain % it will fail (although perhaps we could fix that with sufficient work % to do a one-level expansion on the result, instead of complete). % \def\expandablevalue#1{% \expandafter\ifx\csname SET#1\endcsname\relax {[No value for ``#1'']}% \message{Variable `#1', used in @value, is not set.}% \else \csname SET#1\endcsname \fi } % @ifset VAR ... @end ifset reads the `...' iff VAR has been defined % with @set. % % To get special treatment of `@end ifset,' call \makecond and then redefine. % \makecond{ifset} \def\ifset{\parsearg{\doifset{\let\next=\ifsetfail}}} \def\doifset#1#2{% {% \makevalueexpandable \let\next=\empty \expandafter\ifx\csname SET#2\endcsname\relax #1% If not set, redefine \next. \fi \expandafter }\next } \def\ifsetfail{\doignore{ifset}} % @ifclear VAR ... @end ifclear reads the `...' iff VAR has never been % defined with @set, or has been undefined with @clear. % % The `\else' inside the `\doifset' parameter is a trick to reuse the % above code: if the variable is not set, do nothing, if it is set, % then redefine \next to \ifclearfail. % \makecond{ifclear} \def\ifclear{\parsearg{\doifset{\else \let\next=\ifclearfail}}} \def\ifclearfail{\doignore{ifclear}} % @dircategory CATEGORY -- specify a category of the dir file % which this file should belong to. Ignore this in TeX. \let\dircategory=\comment % @definfoenclose. \let\definfoenclose=\comment \message{indexing,} % Index generation facilities % Define \newwrite to be identical to plain tex's \newwrite % except not \outer, so it can be used within \newindex. 
{\catcode`\@=11 \gdef\newwrite{\alloc@7\write\chardef\sixt@@n}} % \newindex {foo} defines an index named foo. % It automatically defines \fooindex such that % \fooindex ...rest of line... puts an entry in the index foo. % It also defines \fooindfile to be the number of the output channel for % the file that accumulates this index. The file's extension is foo. % The name of an index should be no more than 2 characters long % for the sake of vms. % \def\newindex#1{% \iflinks \expandafter\newwrite \csname#1indfile\endcsname \openout \csname#1indfile\endcsname \jobname.#1 % Open the file \fi \expandafter\xdef\csname#1index\endcsname{% % Define @#1index \noexpand\doindex{#1}} } % @defindex foo == \newindex{foo} % \def\defindex{\parsearg\newindex} % Define @defcodeindex, like @defindex except put all entries in @code. % \def\defcodeindex{\parsearg\newcodeindex} % \def\newcodeindex#1{% \iflinks \expandafter\newwrite \csname#1indfile\endcsname \openout \csname#1indfile\endcsname \jobname.#1 \fi \expandafter\xdef\csname#1index\endcsname{% \noexpand\docodeindex{#1}}% } % @synindex foo bar makes index foo feed into index bar. % Do this instead of @defindex foo if you don't want it as a separate index. % % @syncodeindex foo bar similar, but put all entries made for index foo % inside @code. % \def\synindex#1 #2 {\dosynindex\doindex{#1}{#2}} \def\syncodeindex#1 #2 {\dosynindex\docodeindex{#1}{#2}} % #1 is \doindex or \docodeindex, #2 the index getting redefined (foo), % #3 the target index (bar). \def\dosynindex#1#2#3{% % Only do \closeout if we haven't already done it, else we'll end up % closing the target index. \expandafter \ifx\csname donesynindex#2\endcsname \undefined % The \closeout helps reduce unnecessary open files; the limit on the % Acorn RISC OS is a mere 16 files. 
\expandafter\closeout\csname#2indfile\endcsname \expandafter\let\csname donesynindex#2\endcsname = 1 \fi % redefine \fooindfile: \expandafter\let\expandafter\temp\expandafter=\csname#3indfile\endcsname \expandafter\let\csname#2indfile\endcsname=\temp % redefine \fooindex: \expandafter\xdef\csname#2index\endcsname{\noexpand#1{#3}}% } % Define \doindex, the driver for all \fooindex macros. % Argument #1 is generated by the calling \fooindex macro, % and it is "foo", the name of the index. % \doindex just uses \parsearg; it calls \doind for the actual work. % This is because \doind is more useful to call from other macros. % There is also \dosubind {index}{topic}{subtopic} % which makes an entry in a two-level index such as the operation index. \def\doindex#1{\edef\indexname{#1}\parsearg\singleindexer} \def\singleindexer #1{\doind{\indexname}{#1}} % like the previous two, but they put @code around the argument. \def\docodeindex#1{\edef\indexname{#1}\parsearg\singlecodeindexer} \def\singlecodeindexer #1{\doind{\indexname}{\code{#1}}} % Take care of Texinfo commands that can appear in an index entry. % Since there are some commands we want to expand, and others we don't, % we have to laboriously prevent expansion for those that we don't. % \def\indexdummies{% \def\@{@}% change to @@ when we switch to @ as escape char in index files. \def\ {\realbackslash\space }% % Need these in case \tex is in effect and \{ is a \delimiter again. % But can't use \lbracecmd and \rbracecmd because texindex assumes % braces and backslashes are used only as delimiters. \let\{ = \mylbrace \let\} = \myrbrace % % \definedummyword defines \#1 as \realbackslash #1\space, thus % effectively preventing its expansion. This is used only for control % words, not control letters, because the \space would be incorrect % for control characters, but is needed to separate the control word % from whatever follows. % % For control letters, we have \definedummyletter, which omits the % space. 
% % These can be used both for control words that take an argument and % those that do not. If it is followed by {arg} in the input, then % that will dutifully get written to the index (or wherever). % \def\definedummyword##1{% \expandafter\def\csname ##1\endcsname{\realbackslash ##1\space}% }% \def\definedummyletter##1{% \expandafter\def\csname ##1\endcsname{\realbackslash ##1}% }% % % Do the redefinitions. \commondummies } % For the aux file, @ is the escape character. So we want to redefine % everything using @ instead of \realbackslash. When everything uses % @, this will be simpler. % \def\atdummies{% \def\@{@@}% \def\ {@ }% \let\{ = \lbraceatcmd \let\} = \rbraceatcmd % % (See comments in \indexdummies.) \def\definedummyword##1{% \expandafter\def\csname ##1\endcsname{@##1\space}% }% \def\definedummyletter##1{% \expandafter\def\csname ##1\endcsname{@##1}% }% % % Do the redefinitions. \commondummies } % Called from \indexdummies and \atdummies. \definedummyword and % \definedummyletter must be defined first. % \def\commondummies{% % \normalturnoffactive % \commondummiesnofonts % \definedummyletter{_}% % % Non-English letters. \definedummyword{AA}% \definedummyword{AE}% \definedummyword{L}% \definedummyword{OE}% \definedummyword{O}% \definedummyword{aa}% \definedummyword{ae}% \definedummyword{l}% \definedummyword{oe}% \definedummyword{o}% \definedummyword{ss}% \definedummyword{exclamdown}% \definedummyword{questiondown}% \definedummyword{ordf}% \definedummyword{ordm}% % % Although these internal commands shouldn't show up, sometimes they do. \definedummyword{bf}% \definedummyword{gtr}% \definedummyword{hat}% \definedummyword{less}% \definedummyword{sf}% \definedummyword{sl}% \definedummyword{tclose}% \definedummyword{tt}% % \definedummyword{LaTeX}% \definedummyword{TeX}% % % Assorted special characters. 
\definedummyword{bullet}% \definedummyword{copyright}% \definedummyword{registeredsymbol}% \definedummyword{dots}% \definedummyword{enddots}% \definedummyword{equiv}% \definedummyword{error}% \definedummyword{expansion}% \definedummyword{minus}% \definedummyword{pounds}% \definedummyword{point}% \definedummyword{print}% \definedummyword{result}% % % Handle some cases of @value -- where it does not contain any % (non-fully-expandable) commands. \makevalueexpandable % % Normal spaces, not active ones. \unsepspaces % % No macro expansion. \turnoffmacros } % \commondummiesnofonts: common to \commondummies and \indexnofonts. % % Better have this without active chars. { \catcode`\~=\other \gdef\commondummiesnofonts{% % Control letters and accents. \definedummyletter{!}% \definedummyletter{"}% \definedummyletter{'}% \definedummyletter{*}% \definedummyletter{,}% \definedummyletter{.}% \definedummyletter{/}% \definedummyletter{:}% \definedummyletter{=}% \definedummyletter{?}% \definedummyletter{^}% \definedummyletter{`}% \definedummyletter{~}% \definedummyword{u}% \definedummyword{v}% \definedummyword{H}% \definedummyword{dotaccent}% \definedummyword{ringaccent}% \definedummyword{tieaccent}% \definedummyword{ubaraccent}% \definedummyword{udotaccent}% \definedummyword{dotless}% % % Texinfo font commands. \definedummyword{b}% \definedummyword{i}% \definedummyword{r}% \definedummyword{sc}% \definedummyword{t}% % % Commands that take arguments. 
\definedummyword{acronym}% \definedummyword{cite}% \definedummyword{code}% \definedummyword{command}% \definedummyword{dfn}% \definedummyword{emph}% \definedummyword{env}% \definedummyword{file}% \definedummyword{kbd}% \definedummyword{key}% \definedummyword{math}% \definedummyword{option}% \definedummyword{samp}% \definedummyword{strong}% \definedummyword{tie}% \definedummyword{uref}% \definedummyword{url}% \definedummyword{var}% \definedummyword{verb}% \definedummyword{w}% } } % \indexnofonts is used when outputting the strings to sort the index % by, and when constructing control sequence names. It eliminates all % control sequences and just writes whatever the best ASCII sort string % would be for a given command (usually its argument). % \def\indexnofonts{% \def\definedummyword##1{% \expandafter\let\csname ##1\endcsname\asis }% % We can just ignore the accent commands and other control letters. \def\definedummyletter##1{% \expandafter\def\csname ##1\endcsname{}% }% % \commondummiesnofonts % % Don't no-op \tt, since it isn't a user-level command % and is used in the definitions of the active chars like <, >, |, etc. % Likewise with the other plain tex font commands. %\let\tt=\asis % \def\ { }% \def\@{@}% % how to handle braces? \def\_{\normalunderscore}% % % Non-English letters. \def\AA{AA}% \def\AE{AE}% \def\L{L}% \def\OE{OE}% \def\O{O}% \def\aa{aa}% \def\ae{ae}% \def\l{l}% \def\oe{oe}% \def\o{o}% \def\ss{ss}% \def\exclamdown{!}% \def\questiondown{?}% \def\ordf{a}% \def\ordm{o}% % \def\LaTeX{LaTeX}% \def\TeX{TeX}% % % Assorted special characters. % (The following {} will end up in the sort string, but that's ok.) \def\bullet{bullet}% \def\copyright{copyright}% \def\registeredsymbol{R}% \def\dots{...}% \def\enddots{...}% \def\equiv{==}% \def\error{error}% \def\expansion{==>}% \def\minus{-}% \def\pounds{pounds}% \def\point{.}% \def\print{-|}% \def\result{=>}% } \let\indexbackslash=0 %overridden during \printindex. 
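% For example (an illustrative sketch, not exhaustive): with the
% \indexnofonts definitions above in effect, markup collapses to the
% plain ASCII string texindex should sort by:
%   @code{printf}  sorts as  printf   (\code is \let to \asis)
%   @dots{}        sorts as  ...      (from \def\dots{...} above)
%   @TeX{}         sorts as  TeX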
\let\SETmarginindex=\relax % put index entries in margin (undocumented)? % Most index entries go through here, but \dosubind is the general case. % #1 is the index name, #2 is the entry text. \def\doind#1#2{\dosubind{#1}{#2}{}} % Workhorse for all \fooindexes. % #1 is name of index, #2 is stuff to put there, #3 is subentry -- % empty if called from \doind, as we usually are (the main exception % is with most defuns, which call us directly). % \def\dosubind#1#2#3{% \iflinks {% % Store the main index entry text (including the third arg). \toks0 = {#2}% % If third arg is present, precede it with a space. \def\thirdarg{#3}% \ifx\thirdarg\empty \else \toks0 = \expandafter{\the\toks0 \space #3}% \fi % \edef\writeto{\csname#1indfile\endcsname}% % \ifvmode \dosubindsanitize \else \dosubindwrite \fi }% \fi } % Write the entry in \toks0 to the index file: % \def\dosubindwrite{% % Put the index entry in the margin if desired. \ifx\SETmarginindex\relax\else \insert\margin{\hbox{\vrule height8pt depth3pt width0pt \the\toks0}}% \fi % % Remember, we are within a group. \indexdummies % Must do this here, since \bf, etc expand at this stage \escapechar=`\\ \def\backslashcurfont{\indexbackslash}% \indexbackslash isn't defined now % so it will be output as is; and it will print as backslash. % % Process the index entry with all font commands turned off, to % get the string to sort by. {\indexnofonts \edef\temp{\the\toks0}% need full expansion \xdef\indexsorttmp{\temp}% }% % % Set up the complete index entry, with both the sort key and % the original text, including any font commands. We write % three arguments to \entry to the .?? file (four in the % subentry case), texindex reduces to two when writing the .??s % sorted result. 
\edef\temp{% \write\writeto{% \string\entry{\indexsorttmp}{\noexpand\folio}{\the\toks0}}% }% \temp } % Take care of unwanted page breaks: % % If a skip is the last thing on the list now, preserve it % by backing up by \lastskip, doing the \write, then inserting % the skip again. Otherwise, the whatsit generated by the % \write will make \lastskip zero. The result is that sequences % like this: % @end defun % @tindex whatever % @defun ... % will have extra space inserted, because the \medbreak in the % start of the @defun won't see the skip inserted by the @end of % the previous defun. % % But don't do any of this if we're not in vertical mode. We % don't want to do a \vskip and prematurely end a paragraph. % % Avoid page breaks due to these extra skips, too. % % But wait, there is a catch there: % We'll have to check whether \lastskip is zero skip. \ifdim is not % sufficient for this purpose, as it ignores stretch and shrink parts % of the skip. The only way seems to be to check the textual % representation of the skip. % % The following is almost like \def\zeroskipmacro{0.0pt} except that % the ``p'' and ``t'' characters have catcode \other, not 11 (letter). % \edef\zeroskipmacro{\expandafter\the\csname z@skip\endcsname} % % ..., ready, GO: % \def\dosubindsanitize{% % \lastskip and \lastpenalty cannot both be nonzero simultaneously. \skip0 = \lastskip \edef\lastskipmacro{\the\lastskip}% \count255 = \lastpenalty % % If \lastskip is nonzero, that means the last item was a % skip. And since a skip is discardable, that means this % -\skip0 glue we're inserting is preceded by a % non-discardable item, therefore it is not a potential % breakpoint, therefore no \nobreak needed. \ifx\lastskipmacro\zeroskipmacro \else \vskip-\skip0 \fi % \dosubindwrite % \ifx\lastskipmacro\zeroskipmacro % if \lastskip was zero, perhaps the last item was a % penalty, and perhaps it was >=10000, e.g., a \nobreak. 
% In that case, we want to re-insert the penalty; since we % just inserted a non-discardable item, any following glue % (such as a \parskip) would be a breakpoint. For example: % @deffn deffn-whatever % @vindex index-whatever % Description. % would allow a break between the index-whatever whatsit % and the "Description." paragraph. \ifnum\count255>9999 \nobreak \fi \else % On the other hand, if we had a nonzero \lastskip, % this make-up glue would be preceded by a non-discardable item % (the whatsit from the \write), so we must insert a \nobreak. \nobreak\vskip\skip0 \fi } % The index entry written in the file actually looks like % \entry {sortstring}{page}{topic} % or % \entry {sortstring}{page}{topic}{subtopic} % The texindex program reads in these files and writes files % containing these kinds of lines: % \initial {c} % before the first topic whose initial is c % \entry {topic}{pagelist} % for a topic that is used without subtopics % \primary {topic} % for the beginning of a topic that is used with subtopics % \secondary {subtopic}{pagelist} % for each subtopic. % Define the user-accessible indexing commands % @findex, @vindex, @kindex, @cindex. \def\findex {\fnindex} \def\kindex {\kyindex} \def\cindex {\cpindex} \def\vindex {\vrindex} \def\tindex {\tpindex} \def\pindex {\pgindex} \def\cindexsub {\begingroup\obeylines\cindexsub} {\obeylines % \gdef\cindexsub "#1" #2^^M{\endgroup % \dosubind{cp}{#2}{#1}}} % Define the macros used in formatting output of the sorted index material. % @printindex causes a particular index (the ??s file) to get printed. % It does not print any chapter heading (usually an @unnumbered). % \parseargdef\printindex{\begingroup \dobreak \chapheadingskip{10000}% % \smallfonts \rm \tolerance = 9500 \everypar = {}% don't want the \kern\-parindent from indentation suppression. % % See if the index file exists and is nonempty. 
% Change catcode of @ here so that if the index file contains % \initial {@} % as its first line, TeX doesn't complain about mismatched braces % (because it thinks @} is a control sequence). \catcode`\@ = 11 \openin 1 \jobname.#1s \ifeof 1 % \enddoublecolumns gets confused if there is no text in the index, % and it loses the chapter title and the aux file entries for the % index. The easiest way to prevent this problem is to make sure % there is some text. \putwordIndexNonexistent \else % % If the index file exists but is empty, then \openin leaves \ifeof % false. We have to make TeX try to read something from the file, so % it can discover if there is anything in it. \read 1 to \temp \ifeof 1 \putwordIndexIsEmpty \else % Index files are almost Texinfo source, but we use \ as the escape % character. It would be better to use @, but that's too big a change % to make right now. \def\indexbackslash{\backslashcurfont}% \catcode`\\ = 0 \escapechar = `\\ \begindoublecolumns \input \jobname.#1s \enddoublecolumns \fi \fi \closein 1 \endgroup} % These macros are used by the sorted index file itself. % Change them to control the appearance of the index. \def\initial#1{{% % Some minor font changes for the special characters. \let\tentt=\sectt \let\tt=\sectt \let\sf=\sectt % % Remove any glue we may have, we'll be inserting our own. \removelastskip % % We like breaks before the index initials, so insert a bonus. \penalty -300 % % Typeset the initial. Making this add up to a whole number of % baselineskips increases the chance of the dots lining up from column % to column. It still won't often be perfect, because of the stretch % we need before each entry, but it's better. % % No shrink because it confuses \balancecolumns. \vskip 1.67\baselineskip plus .5\baselineskip \leftline{\secbf #1}% \vskip .33\baselineskip plus .1\baselineskip % % Do our best not to break after the initial. 
  \nobreak
}}

% \entry typesets a paragraph consisting of the text (#1), dot leaders, and
% then page number (#2) flushed to the right margin.  It is used for index
% and table of contents entries.  The paragraph is indented by \leftskip.
%
% A straightforward implementation would start like this:
%	\def\entry#1#2{...
% But this freezes the catcodes in the argument, and can cause problems to
% @code, which sets - active.  This problem was fixed by a kludge---
% ``-'' was active throughout the whole index, but this isn't really right.
%
% The right solution is to prevent \entry from swallowing the whole text.
%                                 --kasal, 21nov03
\def\entry{%
  \begingroup
    %
    % Start a new paragraph if necessary, so our assignments below can't
    % affect previous text.
    \par
    %
    % Do not fill out the last line with white space.
    \parfillskip = 0in
    %
    % No extra space above this paragraph.
    \parskip = 0in
    %
    % Do not prefer a separate line ending with a hyphen to fewer lines.
    \finalhyphendemerits = 0
    %
    % \hangindent is only relevant when the entry text and page number
    % don't both fit on one line.  In that case, bob suggests starting the
    % dots pretty far over on the line.  Unfortunately, a large
    % indentation looks wrong when the entry text itself is broken across
    % lines.  So we use a small indentation and put up with long leaders.
    %
    % \hangafter is reset to 1 (which is the value we want) at the start
    % of each paragraph, so we need not do anything with that.
    \hangindent = 2em
    %
    % When the entry text needs to be broken, just fill out the first line
    % with blank space.
    \rightskip = 0pt plus1fil
    %
    % A bit of stretch before each entry for the benefit of balancing
    % columns.
    \vskip 0pt plus1pt
    %
    % Swallow the left brace of the text (first parameter):
    \afterassignment\doentry
    \let\temp =
}
\def\doentry{%
  \bgroup % Instead of the swallowed brace.
  \noindent
  \aftergroup\finishentry
  % And now comes the text of the entry.
}
\def\finishentry#1{%
  % #1 is the page number.
% % The following is kludged to not output a line of dots in the index if % there are no page numbers. The next person who breaks this will be % cursed by a Unix daemon. \def\tempa{{\rm }}% \def\tempb{#1}% \edef\tempc{\tempa}% \edef\tempd{\tempb}% \ifx\tempc\tempd \ % \else % % If we must, put the page number on a line of its own, and fill out % this line with blank space. (The \hfil is overwhelmed with the % fill leaders glue in \indexdotfill if the page number does fit.) \hfil\penalty50 \null\nobreak\indexdotfill % Have leaders before the page number. % % The `\ ' here is removed by the implicit \unskip that TeX does as % part of (the primitive) \par. Without it, a spurious underfull % \hbox ensues. \ifpdf \pdfgettoks#1.% \ \the\toksA \else \ #1% \fi \fi \par \endgroup } % Like \dotfill except takes at least 1 em. \def\indexdotfill{\cleaders \hbox{$\mathsurround=0pt \mkern1.5mu ${\it .}$ \mkern1.5mu$}\hskip 1em plus 1fill} \def\primary #1{\line{#1\hfil}} \newskip\secondaryindent \secondaryindent=0.5cm \def\secondary#1#2{{% \parfillskip=0in \parskip=0in \hangindent=1in \hangafter=1 \noindent\hskip\secondaryindent\hbox{#1}\indexdotfill \ifpdf \pdfgettoks#2.\ \the\toksA % The page number ends the paragraph. \else #2 \fi \par }} % Define two-column mode, which we use to typeset indexes. % Adapted from the TeXbook, page 416, which is to say, % the manmac.tex format used to print the TeXbook itself. \catcode`\@=11 \newbox\partialpage \newdimen\doublecolumnhsize \def\begindoublecolumns{\begingroup % ended by \enddoublecolumns % Grab any single-column material above us. \output = {% % % Here is a possibility not foreseen in manmac: if we accumulate a % whole lot of material, we might end up calling this \output % routine twice in a row (see the doublecol-lose test, which is % essentially a couple of indexes with @setchapternewpage off). In % that case we just ship out what is in \partialpage with the normal % output routine. 
    % Generally, \partialpage will be empty when this
    % runs and this will be a no-op.  See the indexspread.tex test case.
    \ifvoid\partialpage \else
      \onepageout{\pagecontents\partialpage}%
    \fi
    %
    \global\setbox\partialpage = \vbox{%
      % Unvbox the main output page.
      \unvbox\PAGE
      \kern-\topskip \kern\baselineskip
    }%
  }%
  \eject % run that output routine to set \partialpage
  %
  % Use the double-column output routine for subsequent pages.
  \output = {\doublecolumnout}%
  %
  % Change the page size parameters.  We could do this once outside this
  % routine, in each of @smallbook, @afourpaper, and the default 8.5x11
  % format, but then we repeat the same computation.  Repeating a couple
  % of assignments once per index is clearly meaningless for the
  % execution time, so we may as well do it in one place.
  %
  % First we halve the line length, less a little for the gutter between
  % the columns.  We compute the gutter based on the line length, so it
  % changes automatically with the paper format.  The magic constant
  % below is chosen so that the gutter has the same value (well, +-<1pt)
  % as it did when we hard-coded it.
  %
  % We put the result in a separate register, \doublecolumnhsize, so we
  % can restore it in \pagesofar, after \hsize itself has (potentially)
  % been clobbered.
  %
  \doublecolumnhsize = \hsize
    \advance\doublecolumnhsize by -.04154\hsize
    \divide\doublecolumnhsize by 2
  \hsize = \doublecolumnhsize
  %
  % Double the \vsize as well.  (We don't need a separate register here,
  % since nobody clobbers \vsize.)
  \vsize = 2\vsize
}

% The double-column output routine for all double-column pages except
% the last.
%
\def\doublecolumnout{%
  \splittopskip=\topskip \splitmaxdepth=\maxdepth
  % Get the available space for the double columns -- the normal
  % (undoubled) page height minus any material left over from the
  % previous page.
  \dimen@ = \vsize
  \divide\dimen@ by 2
  \advance\dimen@ by -\ht\partialpage
  %
  % box0 will be the left-hand column, box2 the right.
\setbox0=\vsplit255 to\dimen@ \setbox2=\vsplit255 to\dimen@ \onepageout\pagesofar \unvbox255 \penalty\outputpenalty } % % Re-output the contents of the output page -- any previous material, % followed by the two boxes we just split, in box0 and box2. \def\pagesofar{% \unvbox\partialpage % \hsize = \doublecolumnhsize \wd0=\hsize \wd2=\hsize \hbox to\pagewidth{\box0\hfil\box2}% } % % All done with double columns. \def\enddoublecolumns{% \output = {% % Split the last of the double-column material. Leave it on the % current page, no automatic page break. \balancecolumns % % If we end up splitting too much material for the current page, % though, there will be another page break right after this \output % invocation ends. Having called \balancecolumns once, we do not % want to call it again. Therefore, reset \output to its normal % definition right away. (We hope \balancecolumns will never be % called on to balance too much material, but if it is, this makes % the output somewhat more palatable.) \global\output = {\onepageout{\pagecontents\PAGE}}% }% \eject \endgroup % started in \begindoublecolumns % % \pagegoal was set to the doubled \vsize above, since we restarted % the current page. We're now back to normal single-column % typesetting, so reset \pagegoal to the normal \vsize (after the % \endgroup where \vsize got restored). \pagegoal = \vsize } % % Called at the end of the double column material. \def\balancecolumns{% \setbox0 = \vbox{\unvbox255}% like \box255 but more efficient, see p.120. \dimen@ = \ht0 \advance\dimen@ by \topskip \advance\dimen@ by-\baselineskip \divide\dimen@ by 2 % target to split to %debug\message{final 2-column material height=\the\ht0, target=\the\dimen@.}% \splittopskip = \topskip % Loop until we get a decent breakpoint. 
{% \vbadness = 10000 \loop \global\setbox3 = \copy0 \global\setbox1 = \vsplit3 to \dimen@ \ifdim\ht3>\dimen@ \global\advance\dimen@ by 1pt \repeat }% %debug\message{split to \the\dimen@, column heights: \the\ht1, \the\ht3.}% \setbox0=\vbox to\dimen@{\unvbox1}% \setbox2=\vbox to\dimen@{\unvbox3}% % \pagesofar } \catcode`\@ = \other \message{sectioning,} % Chapters, sections, etc. % \unnumberedno is an oxymoron, of course. But we count the unnumbered % sections so that we can refer to them unambiguously in the pdf % outlines by their "section number". We avoid collisions with chapter % numbers by starting them at 10000. (If a document ever has 10000 % chapters, we're in trouble anyway, I'm sure.) \newcount\unnumberedno \unnumberedno = 10000 \newcount\chapno \newcount\secno \secno=0 \newcount\subsecno \subsecno=0 \newcount\subsubsecno \subsubsecno=0 % This counter is funny since it counts through charcodes of letters A, B, ... \newcount\appendixno \appendixno = `\@ % % \def\appendixletter{\char\the\appendixno} % We do the following ugly conditional instead of the above simple % construct for the sake of pdftex, which needs the actual % letter in the expansion, not just typeset. 
% \def\appendixletter{% \ifnum\appendixno=`A A% \else\ifnum\appendixno=`B B% \else\ifnum\appendixno=`C C% \else\ifnum\appendixno=`D D% \else\ifnum\appendixno=`E E% \else\ifnum\appendixno=`F F% \else\ifnum\appendixno=`G G% \else\ifnum\appendixno=`H H% \else\ifnum\appendixno=`I I% \else\ifnum\appendixno=`J J% \else\ifnum\appendixno=`K K% \else\ifnum\appendixno=`L L% \else\ifnum\appendixno=`M M% \else\ifnum\appendixno=`N N% \else\ifnum\appendixno=`O O% \else\ifnum\appendixno=`P P% \else\ifnum\appendixno=`Q Q% \else\ifnum\appendixno=`R R% \else\ifnum\appendixno=`S S% \else\ifnum\appendixno=`T T% \else\ifnum\appendixno=`U U% \else\ifnum\appendixno=`V V% \else\ifnum\appendixno=`W W% \else\ifnum\appendixno=`X X% \else\ifnum\appendixno=`Y Y% \else\ifnum\appendixno=`Z Z% % The \the is necessary, despite appearances, because \appendixletter is % expanded while writing the .toc file. \char\appendixno is not % expandable, thus it is written literally, thus all appendixes come out % with the same letter (or @) in the toc without it. \else\char\the\appendixno \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi} % Each @chapter defines this as the name of the chapter. % page headings and footings can use it. @section does likewise. % However, they are not reliable, because we don't use marks. \def\thischapter{} \def\thissection{} \newcount\absseclevel % used to calculate proper heading level \newcount\secbase\secbase=0 % @raisesections/@lowersections modify this count % @raisesections: treat @section as chapter, @subsection as section, etc. \def\raisesections{\global\advance\secbase by -1} \let\up=\raisesections % original BFox name % @lowersections: treat @chapter as section, @section as subsection, etc. \def\lowersections{\global\advance\secbase by 1} \let\down=\lowersections % original BFox name % we only have subsub. \chardef\maxseclevel = 3 % % A numbered section within an unnumbered changes to unnumbered too. 
% To achieve this, remember the "biggest" unnum. sec. we are currently in:
\chardef\unmlevel = \maxseclevel
%
% Trace whether the current chapter is an appendix or not:
% \chapheadtype is "N" or "A", unnumbered chapters are ignored.
\def\chapheadtype{N}

% Choose a heading macro
% #1 is heading type
% #2 is heading level
% #3 is text for heading
\def\genhead#1#2#3{%
  % Compute the abs. sec. level:
  \absseclevel=#2
  \advance\absseclevel by \secbase
  % Make sure \absseclevel doesn't fall outside the range:
  \ifnum \absseclevel < 0
    \absseclevel = 0
  \else
    \ifnum \absseclevel > 3
      \absseclevel = 3
    \fi
  \fi
  % The heading type:
  \def\headtype{#1}%
  \if \headtype U%
    \ifnum \absseclevel < \unmlevel
      \chardef\unmlevel = \absseclevel
    \fi
  \else
    % Check for appendix sections:
    \ifnum \absseclevel = 0
      \edef\chapheadtype{\headtype}%
    \else
      \if \headtype A\if \chapheadtype N%
        \errmessage{@appendix... within a non-appendix chapter}%
      \fi\fi
    \fi
    % Check for numbered within unnumbered:
    \ifnum \absseclevel > \unmlevel
      \def\headtype{U}%
    \else
      \chardef\unmlevel = 3
    \fi
  \fi
  % Now print the heading:
  \if \headtype U%
    \ifcase\absseclevel
        \unnumberedzzz{#3}%
    \or \unnumberedseczzz{#3}%
    \or \unnumberedsubseczzz{#3}%
    \or \unnumberedsubsubseczzz{#3}%
    \fi
  \else
    \if \headtype A%
      \ifcase\absseclevel
          \appendixzzz{#3}%
      \or \appendixsectionzzz{#3}%
      \or \appendixsubseczzz{#3}%
      \or \appendixsubsubseczzz{#3}%
      \fi
    \else
      \ifcase\absseclevel
          \chapterzzz{#3}%
      \or \seczzz{#3}%
      \or \numberedsubseczzz{#3}%
      \or \numberedsubsubseczzz{#3}%
      \fi
    \fi
  \fi
  \suppressfirstparagraphindent
}

% an interface:
\def\numhead{\genhead N}
\def\apphead{\genhead A}
\def\unnmhead{\genhead U}

% @chapter, @appendix, @unnumbered.  Increment top-level counter, reset
% all lower-level sectioning counters to zero.
%
% Also set \chaplevelprefix, which we prepend to @float sequence numbers
% (e.g., figures), q.v.  By default (before any chapter), that is empty.
\let\chaplevelprefix = \empty % \outer\parseargdef\chapter{\numhead0{#1}} % normally numhead0 calls chapterzzz \def\chapterzzz#1{% % section resetting is \global in case the chapter is in a group, such % as an @include file. \global\secno=0 \global\subsecno=0 \global\subsubsecno=0 \global\advance\chapno by 1 % % Used for \float. \gdef\chaplevelprefix{\the\chapno.}% \resetallfloatnos % \message{\putwordChapter\space \the\chapno}% % % Write the actual heading. \chapmacro{#1}{Ynumbered}{\the\chapno}% % % So @section and the like are numbered underneath this chapter. \global\let\section = \numberedsec \global\let\subsection = \numberedsubsec \global\let\subsubsection = \numberedsubsubsec } \outer\parseargdef\appendix{\apphead0{#1}} % normally apphead0 calls appendixzzz \def\appendixzzz#1{% \global\secno=0 \global\subsecno=0 \global\subsubsecno=0 \global\advance\appendixno by 1 \gdef\chaplevelprefix{\appendixletter.}% \resetallfloatnos % \def\appendixnum{\putwordAppendix\space \appendixletter}% \message{\appendixnum}% % \chapmacro{#1}{Yappendix}{\appendixletter}% % \global\let\section = \appendixsec \global\let\subsection = \appendixsubsec \global\let\subsubsection = \appendixsubsubsec } \outer\parseargdef\unnumbered{\unnmhead0{#1}} % normally unnmhead0 calls unnumberedzzz \def\unnumberedzzz#1{% \global\secno=0 \global\subsecno=0 \global\subsubsecno=0 \global\advance\unnumberedno by 1 % % Since an unnumbered has no number, no prefix for figures. \global\let\chaplevelprefix = \empty \resetallfloatnos % % This used to be simply \message{#1}, but TeX fully expands the % argument to \message. Therefore, if #1 contained @-commands, TeX % expanded them. For example, in `@unnumbered The @cite{Book}', TeX % expanded @cite (which turns out to cause errors because \cite is meant % to be executed, not expanded). % % Anyway, we don't want the fully-expanded definition of @cite to appear % as a result of the \message, we just want `@cite' itself. 
% We use
% \the<toks register> to achieve this: TeX expands \the<toks> only once,
% simply yielding the contents of <toks register>.  (We also do this for
% the toc entries.)
  \toks0 = {#1}%
  \message{(\the\toks0)}%
  %
  \chapmacro{#1}{Ynothing}{\the\unnumberedno}%
  %
  \global\let\section = \unnumberedsec
  \global\let\subsection = \unnumberedsubsec
  \global\let\subsubsection = \unnumberedsubsubsec
}

% @centerchap is like @unnumbered, but the heading is centered.
\outer\parseargdef\centerchap{%
  % Well, we could do the following in a group, but that would break
  % an assumption that \chapmacro is called at the outermost level.
  % Thus we are safer this way:		--kasal, 24feb04
  \let\centerparametersmaybe = \centerparameters
  \unnmhead0{#1}%
  \let\centerparametersmaybe = \relax
}

% @top is like @unnumbered.
\let\top\unnumbered

% Sections.
\outer\parseargdef\numberedsec{\numhead1{#1}} % normally calls seczzz
\def\seczzz#1{%
  \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1
  \sectionheading{#1}{sec}{Ynumbered}{\the\chapno.\the\secno}%
}

\outer\parseargdef\appendixsection{\apphead1{#1}} % normally calls appendixsectionzzz
\def\appendixsectionzzz#1{%
  \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1
  \sectionheading{#1}{sec}{Yappendix}{\appendixletter.\the\secno}%
}
\let\appendixsec\appendixsection

\outer\parseargdef\unnumberedsec{\unnmhead1{#1}} % normally calls unnumberedseczzz
\def\unnumberedseczzz#1{%
  \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1
  \sectionheading{#1}{sec}{Ynothing}{\the\unnumberedno.\the\secno}%
}

% Subsections.
\outer\parseargdef\numberedsubsec{\numhead2{#1}} % normally calls numberedsubseczzz \def\numberedsubseczzz#1{% \global\subsubsecno=0 \global\advance\subsecno by 1 \sectionheading{#1}{subsec}{Ynumbered}{\the\chapno.\the\secno.\the\subsecno}% } \outer\parseargdef\appendixsubsec{\apphead2{#1}} % normally calls appendixsubseczzz \def\appendixsubseczzz#1{% \global\subsubsecno=0 \global\advance\subsecno by 1 \sectionheading{#1}{subsec}{Yappendix}% {\appendixletter.\the\secno.\the\subsecno}% } \outer\parseargdef\unnumberedsubsec{\unnmhead2{#1}} %normally calls unnumberedsubseczzz \def\unnumberedsubseczzz#1{% \global\subsubsecno=0 \global\advance\subsecno by 1 \sectionheading{#1}{subsec}{Ynothing}% {\the\unnumberedno.\the\secno.\the\subsecno}% } % Subsubsections. \outer\parseargdef\numberedsubsubsec{\numhead3{#1}} % normally numberedsubsubseczzz \def\numberedsubsubseczzz#1{% \global\advance\subsubsecno by 1 \sectionheading{#1}{subsubsec}{Ynumbered}% {\the\chapno.\the\secno.\the\subsecno.\the\subsubsecno}% } \outer\parseargdef\appendixsubsubsec{\apphead3{#1}} % normally appendixsubsubseczzz \def\appendixsubsubseczzz#1{% \global\advance\subsubsecno by 1 \sectionheading{#1}{subsubsec}{Yappendix}% {\appendixletter.\the\secno.\the\subsecno.\the\subsubsecno}% } \outer\parseargdef\unnumberedsubsubsec{\unnmhead3{#1}} %normally unnumberedsubsubseczzz \def\unnumberedsubsubseczzz#1{% \global\advance\subsubsecno by 1 \sectionheading{#1}{subsubsec}{Ynothing}% {\the\unnumberedno.\the\secno.\the\subsecno.\the\subsubsecno}% } % These macros control what the section commands do, according % to what kind of chapter we are in (ordinary, appendix, or unnumbered). % Define them by default for a numbered chapter. 
\let\section = \numberedsec \let\subsection = \numberedsubsec \let\subsubsection = \numberedsubsubsec % Define @majorheading, @heading and @subheading % NOTE on use of \vbox for chapter headings, section headings, and such: % 1) We use \vbox rather than the earlier \line to permit % overlong headings to fold. % 2) \hyphenpenalty is set to 10000 because hyphenation in a % heading is obnoxious; this forbids it. % 3) Likewise, headings look best if no \parindent is used, and % if justification is not attempted. Hence \raggedright. \def\majorheading{% {\advance\chapheadingskip by 10pt \chapbreak }% \parsearg\chapheadingzzz } \def\chapheading{\chapbreak \parsearg\chapheadingzzz} \def\chapheadingzzz#1{% {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 \parindent=0pt\raggedright \rm #1\hfill}}% \bigskip \par\penalty 200\relax \suppressfirstparagraphindent } % @heading, @subheading, @subsubheading. \parseargdef\heading{\sectionheading{#1}{sec}{Yomitfromtoc}{} \suppressfirstparagraphindent} \parseargdef\subheading{\sectionheading{#1}{subsec}{Yomitfromtoc}{} \suppressfirstparagraphindent} \parseargdef\subsubheading{\sectionheading{#1}{subsubsec}{Yomitfromtoc}{} \suppressfirstparagraphindent} % These macros generate a chapter, section, etc. heading only % (including whitespace, linebreaking, etc. around it), % given all the information in convenient, parsed form. 
%%% Args are the skip and penalty (usually negative) \def\dobreak#1#2{\par\ifdim\lastskip<#1\removelastskip\penalty#2\vskip#1\fi} %%% Define plain chapter starts, and page on/off switching for it % Parameter controlling skip before chapter headings (if needed) \newskip\chapheadingskip \def\chapbreak{\dobreak \chapheadingskip {-4000}} \def\chappager{\par\vfill\supereject} \def\chapoddpage{\chappager \ifodd\pageno \else \hbox to 0pt{} \chappager\fi} \def\setchapternewpage #1 {\csname CHAPPAG#1\endcsname} \def\CHAPPAGoff{% \global\let\contentsalignmacro = \chappager \global\let\pchapsepmacro=\chapbreak \global\let\pagealignmacro=\chappager} \def\CHAPPAGon{% \global\let\contentsalignmacro = \chappager \global\let\pchapsepmacro=\chappager \global\let\pagealignmacro=\chappager \global\def\HEADINGSon{\HEADINGSsingle}} \def\CHAPPAGodd{% \global\let\contentsalignmacro = \chapoddpage \global\let\pchapsepmacro=\chapoddpage \global\let\pagealignmacro=\chapoddpage \global\def\HEADINGSon{\HEADINGSdouble}} \CHAPPAGon % Chapter opening. % % #1 is the text, #2 is the section type (Ynumbered, Ynothing, % Yappendix, Yomitfromtoc), #3 the chapter number. % % To test against our argument. \def\Ynothingkeyword{Ynothing} \def\Yomitfromtockeyword{Yomitfromtoc} \def\Yappendixkeyword{Yappendix} % \def\chapmacro#1#2#3{% \pchapsepmacro {% \chapfonts \rm % % Have to define \thissection before calling \donoderef, because the % xref code eventually uses it. On the other hand, it has to be called % after \pchapsepmacro, or the headline will change too soon. \gdef\thissection{#1}% \gdef\thischaptername{#1}% % % Only insert the separating space if we have a chapter/appendix % number, and don't print the unnumbered ``number''. 
    \def\temptype{#2}%
    \ifx\temptype\Ynothingkeyword
      \setbox0 = \hbox{}%
      \def\toctype{unnchap}%
      \def\thischapter{#1}%
    \else\ifx\temptype\Yomitfromtockeyword
      \setbox0 = \hbox{}% contents like unnumbered, but no toc entry
      \def\toctype{omit}%
      \xdef\thischapter{}%
    \else\ifx\temptype\Yappendixkeyword
      \setbox0 = \hbox{\putwordAppendix{} #3\enspace}%
      \def\toctype{app}%
      % We don't substitute the actual chapter name into \thischapter
      % because we don't want its macros evaluated now.  And we don't
      % use \thissection because that changes with each section.
      %
      \xdef\thischapter{\putwordAppendix{} \appendixletter:
                        \noexpand\thischaptername}%
    \else
      \setbox0 = \hbox{#3\enspace}%
      \def\toctype{numchap}%
      \xdef\thischapter{\putwordChapter{} \the\chapno:
                        \noexpand\thischaptername}%
    \fi\fi\fi
    %
    % Write the toc entry for this chapter.  Must come before the
    % \donoderef, because we include the current node name in the toc
    % entry, and \donoderef resets it to empty.
    \writetocentry{\toctype}{#1}{#3}%
    %
    % For pdftex, we have to write out the node definition (aka, make
    % the pdfdest) after any page break, but before the actual text has
    % been typeset.  If the destination for the pdf outline is after the
    % text, then jumping from the outline may wind up with the text not
    % being visible, for instance under high magnification.
    \donoderef{#2}%
    %
    % Typeset the actual heading.
    \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright
          \hangindent=\wd0 \centerparametersmaybe
          \unhbox0 #1\par}%
  }%
  \nobreak\bigskip % no page break after a chapter title
  \nobreak
}

% @centerchap -- centered and unnumbered.
\let\centerparametersmaybe = \relax
\def\centerparameters{%
  \advance\rightskip by 3\rightskip
  \leftskip = \rightskip
  \parfillskip = 0pt
}

% I don't think this chapter style is supported any more, so I'm not
% updating it with the new noderef stuff.  We'll see. --karl, 11aug03.
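% Illustrative usage (editor's example, not part of the original file):
% for the Texinfo input `@chapter Compiling a C program' appearing as
% the second chapter, the sectioning code above ends up invoking
% \chapmacro essentially as
%
%   \chapmacro{Compiling a C program}{Ynumbered}{2}
%
% which writes a `numchap' entry to the .toc file, creates the pdf
% destination via \donoderef, and typesets the heading; the `Ynothing',
% `Yomitfromtoc' and `Yappendix' types select the unnumbered, untocced
% and appendix variants instead.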
%
\def\setchapterstyle #1 {\csname CHAPF#1\endcsname}
%
\def\unnchfopen #1{%
\chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000
                       \parindent=0pt\raggedright
                       \rm #1\hfill}}\bigskip \par\nobreak
}
\def\chfopen #1#2{\chapoddpage {\chapfonts
\vbox to 3in{\vfil \hbox to\hsize{\hfil #2} \hbox to\hsize{\hfil #1} \vfil}}%
\par\penalty 5000 %
}
\def\centerchfopen #1{%
\chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000
                       \parindent=0pt
                       \hfill {\rm #1}\hfill}}\bigskip \par\nobreak
}
\def\CHAPFopen{%
\global\let\chapmacro=\chfopen
\global\let\centerchapmacro=\centerchfopen}


% Section titles.  These macros combine the section number parts and
% call the generic \sectionheading to do the printing.
%
\newskip\secheadingskip
\def\secheadingbreak{\dobreak \secheadingskip{-1000}}

% Subsection titles.
\newskip\subsecheadingskip
\def\subsecheadingbreak{\dobreak \subsecheadingskip{-500}}

% Subsubsection titles.
\def\subsubsecheadingskip{\subsecheadingskip}
\def\subsubsecheadingbreak{\subsecheadingbreak}


% Print any size, any type, section title.
%
% #1 is the text, #2 is the section level (sec/subsec/subsubsec), #3 is
% the section type for xrefs (Ynumbered, Ynothing, Yappendix), #4 is the
% section number.
%
\def\sectionheading#1#2#3#4{%
  {%
    % Switch to the right set of fonts.
    \csname #2fonts\endcsname \rm
    %
    % Insert space above the heading.
    \csname #2headingbreak\endcsname
    %
    % Only insert the space after the number if we have a section number.
    \def\sectionlevel{#2}%
    \def\temptype{#3}%
    %
    \ifx\temptype\Ynothingkeyword
      \setbox0 = \hbox{}%
      \def\toctype{unn}%
      \gdef\thissection{#1}%
    \else\ifx\temptype\Yomitfromtockeyword
      % for @headings -- no section number, don't include in toc,
      % and don't redefine \thissection.
      \setbox0 = \hbox{}%
      \def\toctype{omit}%
      \let\sectionlevel=\empty
    \else\ifx\temptype\Yappendixkeyword
      \setbox0 = \hbox{#4\enspace}%
      \def\toctype{app}%
      \gdef\thissection{#1}%
    \else
      \setbox0 = \hbox{#4\enspace}%
      \def\toctype{num}%
      \gdef\thissection{#1}%
    \fi\fi\fi
    %
    % Write the toc entry (before \donoderef).  See comments in \chfplain.
    \writetocentry{\toctype\sectionlevel}{#1}{#4}%
    %
    % Write the node reference (= pdf destination for pdftex).
    % Again, see comments in \chfplain.
    \donoderef{#3}%
    %
    % Output the actual section heading.
    \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright
          \hangindent=\wd0  % zero if no section number
          \unhbox0 #1}%
  }%
  % Add extra space after the heading -- half of whatever came above it.
  % Don't allow stretch, though.
  \kern .5 \csname #2headingskip\endcsname
  %
  % Do not let the kern be a potential breakpoint, as it would be if it
  % was followed by glue.
  \nobreak
  %
  % We'll almost certainly start a paragraph next, so don't let that
  % glue accumulate.  (Not a breakpoint because it's preceded by a
  % discardable item.)
  \vskip-\parskip
  %
  % This \nobreak is purely so the last item on the list is a \penalty
  % of 10000.  This is so other code, for instance \parsebodycommon, can
  % check for and avoid allowing breakpoints.  Otherwise, it would
  % insert a valid breakpoint between:
  %   @section sec-whatever
  %   @deffn def-whatever
  \nobreak
}


\message{toc,}
% Table of contents.
\newwrite\tocfile

% Write an entry to the toc file, opening it if necessary.
% Called from @chapter, etc.
%
% Example usage: \writetocentry{sec}{Section Name}{\the\chapno.\the\secno}
% We append the current node name (if any) and page number as additional
% arguments for the \{chap,sec,...}entry macros which will eventually
% read this.  The node name is used in the pdf outlines as the
% destination to jump to.
%
% We open the .toc file for writing here instead of at @setfilename (or
% any other fixed time) so that @contents can be anywhere in the document.
% But if #1 is `omit', then we don't do anything.  This is used for the
% table of contents chapter openings themselves.
%
\newif\iftocfileopened
\def\omitkeyword{omit}%
%
\def\writetocentry#1#2#3{%
  \edef\writetoctype{#1}%
  \ifx\writetoctype\omitkeyword \else
    \iftocfileopened\else
      \immediate\openout\tocfile = \jobname.toc
      \global\tocfileopenedtrue
    \fi
    %
    \iflinks
      \toks0 = {#2}%
      \toks2 = \expandafter{\lastnode}%
      \edef\temp{\write\tocfile{\realbackslash #1entry{\the\toks0}{#3}%
                               {\the\toks2}{\noexpand\folio}}}%
      \temp
    \fi
  \fi
  %
  % Tell \shipout to create a pdf destination on each page, if we're
  % writing pdf.  These are used in the table of contents.  We can't
  % just write one on every page because the title pages are numbered
  % 1 and 2 (the page numbers aren't printed), and so are the first
  % two pages of the document.  Thus, we'd have two destinations named
  % `1', and two named `2'.
  \ifpdf \global\pdfmakepagedesttrue \fi
}

\newskip\contentsrightmargin  \contentsrightmargin=1in
\newcount\savepageno
\newcount\lastnegativepageno \lastnegativepageno = -1

% Prepare to read what we've written to \tocfile.
%
\def\startcontents#1{%
   % If @setchapternewpage on, and @headings double, the contents should
   % start on an odd page, unlike chapters.  Thus, we maintain
   % \contentsalignmacro in parallel with \pagealignmacro.
   % From: Torbjorn Granlund
   \contentsalignmacro
   \immediate\closeout\tocfile
   %
   % Don't need to put `Contents' or `Short Contents' in the headline.
   % It is abundantly clear what they are.
   \def\thischapter{}%
   \chapmacro{#1}{Yomitfromtoc}{}%
   %
   \savepageno = \pageno
   \begingroup                  % Set up to handle contents files properly.
      \catcode`\\=0  \catcode`\{=1  \catcode`\}=2  \catcode`\@=11
      % We can't do this, because then an actual ^ in a section
      % title fails, e.g., @chapter ^ -- exponentiation.  --karl, 9jul97.
      %\catcode`\^=7 % to see ^^e4 as \"a etc. juha@piuha.ydi.vtt.fi
      \raggedbottom             % Worry more about breakpoints than the bottom.
      \advance\hsize by -\contentsrightmargin % Don't use the full line length.
      %
      % Roman numerals for page numbers.
      \ifnum \pageno>0 \global\pageno = \lastnegativepageno \fi
}

% Normal (long) toc.
\def\contents{%
   \startcontents{\putwordTOC}%
     \openin 1 \jobname.toc
     \ifeof 1 \else
       \input \jobname.toc
     \fi
     \vfill \eject
     \contentsalignmacro % in case @setchapternewpage odd is in effect
     \ifeof 1 \else
       \pdfmakeoutlines
     \fi
     \closein 1
   \endgroup
   \lastnegativepageno = \pageno
   \global\pageno = \savepageno
}

% And just the chapters.
\def\summarycontents{%
   \startcontents{\putwordShortTOC}%
      %
      \let\numchapentry = \shortchapentry
      \let\appentry = \shortchapentry
      \let\unnchapentry = \shortunnchapentry
      % We want a true roman here for the page numbers.
      \secfonts
      \let\rm=\shortcontrm \let\bf=\shortcontbf
      \let\sl=\shortcontsl \let\tt=\shortconttt
      \rm
      \hyphenpenalty = 10000
      \advance\baselineskip by 1pt % Open it up a little.
      \def\numsecentry##1##2##3##4{}
      \let\appsecentry = \numsecentry
      \let\unnsecentry = \numsecentry
      \let\numsubsecentry = \numsecentry
      \let\appsubsecentry = \numsecentry
      \let\unnsubsecentry = \numsecentry
      \let\numsubsubsecentry = \numsecentry
      \let\appsubsubsecentry = \numsecentry
      \let\unnsubsubsecentry = \numsecentry
      \openin 1 \jobname.toc
      \ifeof 1 \else
        \input \jobname.toc
      \fi
      \closein 1
      \vfill \eject
      \contentsalignmacro % in case @setchapternewpage odd is in effect
   \endgroup
   \lastnegativepageno = \pageno
   \global\pageno = \savepageno
}
\let\shortcontents = \summarycontents

% Typeset the label for a chapter or appendix for the short contents.
% The arg is, e.g., `A' for an appendix, or `3' for a chapter.
%
\def\shortchaplabel#1{%
  % This space should be enough, since a single number is .5em, and the
  % widest letter (M) is 1em, at least in the Computer Modern fonts.
  % But use \hss just in case.
  % (This space doesn't include the extra space that gets added after
  % the label; that gets put in by \shortchapentry above.)
  %
  % We'd like to right-justify chapter numbers, but that looks strange
  % with appendix letters.  And right-justifying numbers and
  % left-justifying letters looks strange when there are fewer than 10
  % chapters.  Have to read the whole toc once to know how many chapters
  % there are before deciding ...
  \hbox to 1em{#1\hss}%
}

% These macros generate individual entries in the table of contents.
% The first argument is the chapter or section name.
% The last argument is the page number.
% The arguments in between are the chapter number, section number, ...

% Chapters, in the main contents.
\def\numchapentry#1#2#3#4{\dochapentry{#2\labelspace#1}{#4}}
%
% Chapters, in the short toc.
% See comments in \dochapentry re vbox and related settings.
\def\shortchapentry#1#2#3#4{%
  \tocentry{\shortchaplabel{#2}\labelspace #1}{\doshortpageno\bgroup#4\egroup}%
}

% Appendices, in the main contents.
% Need the word Appendix, and a fixed-size box.
%
\def\appendixbox#1{%
  % We use M since it's probably the widest letter.
  \setbox0 = \hbox{\putwordAppendix{} M}%
  \hbox to \wd0{\putwordAppendix{} #1\hss}}
%
\def\appentry#1#2#3#4{\dochapentry{\appendixbox{#2}\labelspace#1}{#4}}

% Unnumbered chapters.
\def\unnchapentry#1#2#3#4{\dochapentry{#1}{#4}}
\def\shortunnchapentry#1#2#3#4{\tocentry{#1}{\doshortpageno\bgroup#4\egroup}}

% Sections.
\def\numsecentry#1#2#3#4{\dosecentry{#2\labelspace#1}{#4}}
\let\appsecentry=\numsecentry
\def\unnsecentry#1#2#3#4{\dosecentry{#1}{#4}}

% Subsections.
\def\numsubsecentry#1#2#3#4{\dosubsecentry{#2\labelspace#1}{#4}}
\let\appsubsecentry=\numsubsecentry
\def\unnsubsecentry#1#2#3#4{\dosubsecentry{#1}{#4}}

% And subsubsections.
\def\numsubsubsecentry#1#2#3#4{\dosubsubsecentry{#2\labelspace#1}{#4}}
\let\appsubsubsecentry=\numsubsubsecentry
\def\unnsubsubsecentry#1#2#3#4{\dosubsubsecentry{#1}{#4}}

% This parameter controls the indentation of the various levels.
% Same as \defaultparindent.
\newdimen\tocindent \tocindent = 15pt

% Now for the actual typesetting.
% In all these, #1 is the text and #2 is the page number.
%
% If the toc has to be broken over pages, we want it to be at chapters
% if at all possible; hence the \penalty.
\def\dochapentry#1#2{%
   \penalty-300 \vskip1\baselineskip plus.33\baselineskip minus.25\baselineskip
   \begingroup
     \chapentryfonts
     \tocentry{#1}{\dopageno\bgroup#2\egroup}%
   \endgroup
   \nobreak\vskip .25\baselineskip plus.1\baselineskip
}

\def\dosecentry#1#2{\begingroup
  \secentryfonts \leftskip=\tocindent
  \tocentry{#1}{\dopageno\bgroup#2\egroup}%
\endgroup}

\def\dosubsecentry#1#2{\begingroup
  \subsecentryfonts \leftskip=2\tocindent
  \tocentry{#1}{\dopageno\bgroup#2\egroup}%
\endgroup}

\def\dosubsubsecentry#1#2{\begingroup
  \subsubsecentryfonts \leftskip=3\tocindent
  \tocentry{#1}{\dopageno\bgroup#2\egroup}%
\endgroup}

% We use the same \entry macro as for the index entries.
\let\tocentry = \entry

% Space between chapter (or whatever) number and the title.
\def\labelspace{\hskip1em \relax}

\def\dopageno#1{{\rm #1}}
\def\doshortpageno#1{{\rm #1}}

\def\chapentryfonts{\secfonts \rm}
\def\secentryfonts{\textfonts}
\def\subsecentryfonts{\textfonts}
\def\subsubsecentryfonts{\textfonts}


\message{environments,}
% @foo ... @end foo.

% @point{}, @result{}, @expansion{}, @print{}, @equiv{}.
%
% Since these characters are used in examples, it should be an even number of
% \tt widths.  Each \tt character is 1en, so two makes it 1em.
%
\def\point{$\star$}
\def\result{\leavevmode\raise.15ex\hbox to 1em{\hfil$\Rightarrow$\hfil}}
\def\expansion{\leavevmode\raise.1ex\hbox to 1em{\hfil$\mapsto$\hfil}}
\def\print{\leavevmode\lower.1ex\hbox to 1em{\hfil$\dashv$\hfil}}
\def\equiv{\leavevmode\lower.1ex\hbox to 1em{\hfil$\ptexequiv$\hfil}}

% The @error{} command.
% Adapted from the TeXbook's \boxit.
%
\newbox\errorbox
%
{\tentt \global\dimen0 = 3em}% Width of the box.
\dimen2 = .55pt % Thickness of rules
% The text. (`r' is open on the right, `e' somewhat less so on the left.)
\setbox0 = \hbox{\kern-.75pt \tensf error\kern-1.5pt}
%
\setbox\errorbox=\hbox to \dimen0{\hfil
   \hsize = \dimen0 \advance\hsize by -5.8pt % Space to left+right.
   \advance\hsize by -2\dimen2 % Rules.
   \vbox{%
      \hrule height\dimen2
      \hbox{\vrule width\dimen2 \kern3pt % Space to left of text.
         \vtop{\kern2.4pt \box0 \kern2.4pt}% Space above/below.
         \kern3pt\vrule width\dimen2}% Space to right.
      \hrule height\dimen2}
    \hfil}
%
\def\error{\leavevmode\lower.7ex\copy\errorbox}

% @tex ... @end tex escapes into raw TeX temporarily.
% One exception: @ is still an escape character, so that @end tex works.
% But \@ or @@ will get a plain tex @ character.

\envdef\tex{%
  \catcode `\\=0 \catcode `\{=1 \catcode `\}=2
  \catcode `\$=3 \catcode `\&=4 \catcode `\#=6
  \catcode `\^=7 \catcode `\_=8 \catcode `\~=\active \let~=\tie
  \catcode `\%=14
  \catcode `\+=\other \catcode `\"=\other
  \catcode `\|=\other
  \catcode `\<=\other \catcode `\>=\other
  \escapechar=`\\
  %
  \let\b=\ptexb \let\bullet=\ptexbullet \let\c=\ptexc
  \let\,=\ptexcomma \let\.=\ptexdot \let\dots=\ptexdots
  \let\equiv=\ptexequiv \let\!=\ptexexclam \let\i=\ptexi
  \let\indent=\ptexindent \let\noindent=\ptexnoindent
  \let\{=\ptexlbrace \let\+=\tabalign \let\}=\ptexrbrace
  \let\/=\ptexslash \let\*=\ptexstar \let\t=\ptext
  %
  \def\endldots{\mathinner{\ldots\ldots\ldots\ldots}}%
  \def\enddots{\relax\ifmmode\endldots\else$\mathsurround=0pt \endldots\,$\fi}%
  \def\@{@}%
}
% There is no need to define \Etex.

% Define @lisp ... @end lisp.
% @lisp environment forms a group so it can rebind things,
% including the definition of @end lisp (which normally is erroneous).

% Amount to narrow the margins by for @lisp.
\newskip\lispnarrowing \lispnarrowing=0.4in

% This is the definition that ^^M gets inside @lisp, @example, and other
% such environments.  \null is better than a space, since it doesn't
% have any width.
\def\lisppar{\null\endgraf}

% This space is always present above and below environments.
\newskip\envskipamount \envskipamount = 0pt

% Make spacing above and below environment symmetrical.  We use \parskip here
% to help in doing that, since in @example-like environments \parskip
% is reset to zero; thus the \afterenvbreak inserts no space -- but the
% start of the next paragraph will insert \parskip.
%
\def\aboveenvbreak{{%
  % =10000 instead of <10000 because of a special case in \itemzzz, q.v.
  \ifnum \lastpenalty=10000 \else
    \advance\envskipamount by \parskip
    \endgraf
    \ifdim\lastskip<\envskipamount
      \removelastskip
      % it's not a good place to break if the last penalty was \nobreak
      % or better ...
      \ifnum\lastpenalty<10000 \penalty-50 \fi
      \vskip\envskipamount
    \fi
  \fi
}}

\let\afterenvbreak = \aboveenvbreak

% \nonarrowing is a flag.  If "set", @lisp etc don't narrow margins.
\let\nonarrowing=\relax

% @cartouche ... @end cartouche: draw rectangle w/rounded corners around
% environment contents.
\font\circle=lcircle10
\newdimen\circthick
\newdimen\cartouter\newdimen\cartinner
\newskip\normbskip\newskip\normpskip\newskip\normlskip
\circthick=\fontdimen8\circle
%
\def\ctl{{\circle\char'013\hskip -6pt}}% 6pt from pl file: 1/2charwidth
\def\ctr{{\hskip 6pt\circle\char'010}}
\def\cbl{{\circle\char'012\hskip -6pt}}
\def\cbr{{\hskip 6pt\circle\char'011}}

\def\carttop{\hbox to \cartouter{\hskip\lskip
        \ctl\leaders\hrule height\circthick\hfil\ctr
        \hskip\rskip}}
\def\cartbot{\hbox to \cartouter{\hskip\lskip
        \cbl\leaders\hrule height\circthick\hfil\cbr
        \hskip\rskip}}
%
\newskip\lskip\newskip\rskip

\envdef\cartouche{%
  \ifhmode\par\fi  % can't be in the midst of a paragraph.
  \startsavinginserts
  \lskip=\leftskip \rskip=\rightskip
  \leftskip=0pt\rightskip=0pt % we want these *outside*.
  \cartinner=\hsize \advance\cartinner by-\lskip
  \advance\cartinner by-\rskip
  \cartouter=\hsize \advance\cartouter by 18.4pt
        % allow for 3pt kerns on either
        % side, and for 6pt waste from
        % each corner char, and rule thickness
  \normbskip=\baselineskip \normpskip=\parskip \normlskip=\lineskip
  % Flag to tell @lisp, etc., not to narrow margin.
  \let\nonarrowing=\comment
  \vbox\bgroup
      \baselineskip=0pt\parskip=0pt\lineskip=0pt
      \carttop
      \hbox\bgroup
          \hskip\lskip
          \vrule\kern3pt
          \vbox\bgroup
              \kern3pt
              \hsize=\cartinner
              \baselineskip=\normbskip
              \lineskip=\normlskip
              \parskip=\normpskip
              \vskip -\parskip
              \comment % For explanation, see the end of \def\group.
}
\def\Ecartouche{%
              \ifhmode\par\fi
              \kern3pt
          \egroup
          \kern3pt\vrule
          \hskip\rskip
      \egroup
      \cartbot
  \egroup
  \checkinserts
}


% This macro is called at the beginning of all the @example variants,
% inside a group.
\def\nonfillstart{%
  \aboveenvbreak
  \hfuzz = 12pt % Don't be fussy
  \sepspaces % Make spaces be word-separators rather than space tokens.
  \let\par = \lisppar % don't ignore blank lines
  \obeylines % each line of input is a line of output
  \parskip = 0pt
  \parindent = 0pt
  \emergencystretch = 0pt % don't try to avoid overfull boxes
  % @cartouche defines \nonarrowing to inhibit narrowing
  % at next level down.
  \ifx\nonarrowing\relax
    \advance \leftskip by \lispnarrowing
    \exdentamount=\lispnarrowing
  \fi
  \let\exdent=\nofillexdent
}

% If you want all examples etc. small: @set dispenvsize small.
% If you want even small examples the full size: @set dispenvsize nosmall.
% This affects the following displayed environments:
%    @example, @display, @format, @lisp
%
\def\smallword{small}
\def\nosmallword{nosmall}
\let\SETdispenvsize\relax
\def\setnormaldispenv{%
  \ifx\SETdispenvsize\smallword
    \smallexamplefonts \rm
  \fi
}
\def\setsmalldispenv{%
  \ifx\SETdispenvsize\nosmallword
  \else
    \smallexamplefonts \rm
  \fi
}

% We often define two environments, @foo and @smallfoo.
% Let's do it by one command:
\def\makedispenv #1#2{
  \expandafter\envdef\csname#1\endcsname {\setnormaldispenv #2}
  \expandafter\envdef\csname small#1\endcsname {\setsmalldispenv #2}
  \expandafter\let\csname E#1\endcsname \afterenvbreak
  \expandafter\let\csname Esmall#1\endcsname \afterenvbreak
}

% Define two synonyms:
\def\maketwodispenvs #1#2#3{
  \makedispenv{#1}{#3}
  \makedispenv{#2}{#3}
}

% @lisp: indented, narrowed, typewriter font; @example: same as @lisp.
%
% @smallexample and @smalllisp: use smaller fonts.
% Originally contributed by Pavel@xerox.
%
\maketwodispenvs {lisp}{example}{%
  \nonfillstart
  \tt
  \let\kbdfont = \kbdexamplefont % Allow @kbd to do something special.
  \gobble       % eat return
}

% @display/@smalldisplay: same as @lisp except keep current font.
%
\makedispenv {display}{%
  \nonfillstart
  \gobble
}

% @format/@smallformat: same as @display except don't narrow margins.
%
\makedispenv{format}{%
  \let\nonarrowing = t%
  \nonfillstart
  \gobble
}

% @flushleft: same as @format, but doesn't obey \SETdispenvsize.
\envdef\flushleft{%
  \let\nonarrowing = t%
  \nonfillstart
  \gobble
}
\let\Eflushleft = \afterenvbreak

% @flushright.
%
\envdef\flushright{%
  \let\nonarrowing = t%
  \nonfillstart
  \advance\leftskip by 0pt plus 1fill
  \gobble
}
\let\Eflushright = \afterenvbreak


% @quotation does normal linebreaking (hence we can't use \nonfillstart)
% and narrows the margins.  We keep \parskip nonzero in general, since
% we're doing normal filling.  So, when using \aboveenvbreak and
% \afterenvbreak, temporarily make \parskip 0.
%
\envdef\quotation{%
  {\parskip=0pt \aboveenvbreak}% because \aboveenvbreak inserts \parskip
  \parindent=0pt
  %
  % @cartouche defines \nonarrowing to inhibit narrowing at next level down.
  \ifx\nonarrowing\relax
    \advance\leftskip by \lispnarrowing
    \advance\rightskip by \lispnarrowing
    \exdentamount = \lispnarrowing
    \let\nonarrowing = \relax
  \fi
  \parsearg\quotationlabel
}

% We have retained a nonzero parskip for the environment, since we're
% doing normal filling.
%
\def\Equotation{%
  \par
  \ifx\quotationauthor\undefined\else
    % indent a bit.
    \leftline{\kern 2\leftskip \sl ---\quotationauthor}%
  \fi
  {\parskip=0pt \afterenvbreak}%
}

% If we're given an argument, typeset it in bold with a colon after.
\def\quotationlabel#1{%
  \def\temp{#1}%
  \ifx\temp\empty \else
    {\bf #1: }%
  \fi
}


% LaTeX-like @verbatim...@end verbatim and @verb{<char>...<char>}
% If we want to allow any <char> as delimiter,
% we need the curly braces so that makeinfo sees the @verb command, eg:
% `@verbx...x' would look like the '@verbx' command.  --janneke@gnu.org
%
% [Knuth]: Donald Ervin Knuth, 1996.  The TeXbook.
%
% [Knuth] p.344; only we need to do the other characters Texinfo sets
% active too.  Otherwise, they get lost as the first character on a
% verbatim line.
\def\dospecials{%
  \do\ \do\\\do\{\do\}\do\$\do\&%
  \do\#\do\^\do\^^K\do\_\do\^^A\do\%\do\~%
  \do\<\do\>\do\|\do\@\do+\do\"%
}
%
% [Knuth] p. 380
\def\uncatcodespecials{%
  \def\do##1{\catcode`##1=\other}\dospecials}
%
% [Knuth] pp. 380,381,391
% Disable Spanish ligatures ?` and !` of \tt font
\begingroup
  \catcode`\`=\active\gdef`{\relax\lq}
\endgroup
%
% Setup for the @verb command.
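% Illustrative usage (editor's example, not part of the original file):
% in a Texinfo source file one might write
%
%   @verb{|braces {} and @ appear literally here|}
%
% with `|' as the self-chosen delimiter character; \setupverb arranges
% the catcodes so that everything between the two delimiters is typeset
% verbatim in \tt.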
%
% Eight spaces for a tab
\begingroup
  \catcode`\^^I=\active
  \gdef\tabeightspaces{\catcode`\^^I=\active\def^^I{\ \ \ \ \ \ \ \ }}
\endgroup
%
\def\setupverb{%
  \tt  % easiest (and conventionally used) font for verbatim
  \def\par{\leavevmode\endgraf}%
  \catcode`\`=\active
  \tabeightspaces
  % Respect line breaks,
  % print special symbols as themselves, and
  % make each space count
  % must do in this order:
  \obeylines \uncatcodespecials \sepspaces
}

% Setup for the @verbatim environment
%
% Real tab expansion
\newdimen\tabw \setbox0=\hbox{\tt\space} \tabw=8\wd0 % tab amount
%
\def\starttabbox{\setbox0=\hbox\bgroup}
\begingroup
  \catcode`\^^I=\active
  \gdef\tabexpand{%
    \catcode`\^^I=\active
    \def^^I{\leavevmode\egroup
      \dimen0=\wd0 % the width so far, or since the previous tab
      \divide\dimen0 by\tabw
      \multiply\dimen0 by\tabw % compute previous multiple of \tabw
      \advance\dimen0 by\tabw  % advance to next multiple of \tabw
      \wd0=\dimen0 \box0 \starttabbox
    }%
  }
\endgroup
\def\setupverbatim{%
  \nonfillstart
  \advance\leftskip by -\defbodyindent
  % Easiest (and conventionally used) font for verbatim
  \tt
  \def\par{\leavevmode\egroup\box0\endgraf}%
  \catcode`\`=\active
  \tabexpand
  % Respect line breaks,
  % print special symbols as themselves, and
  % make each space count
  % must do in this order:
  \obeylines \uncatcodespecials \sepspaces
  \everypar{\starttabbox}%
}

% Do the @verb magic: verbatim text is quoted by unique
% delimiter characters.  Before first delimiter expect a
% right brace, after last delimiter expect closing brace:
%
%     \def\doverb'{'<char>#1<char>'}'{#1}
%
% [Knuth] p. 382; only eat outer {}
\begingroup
  \catcode`[=1\catcode`]=2\catcode`\{=\other\catcode`\}=\other
  \gdef\doverb{#1[\def\next##1#1}[##1\endgroup]\next]
\endgroup
%
\def\verb{\begingroup\setupverb\doverb}
%
%
% Do the @verbatim magic: define the macro \doverbatim so that
% the (first) argument ends when '@end verbatim' is reached, ie:
%
%     \def\doverbatim#1@end verbatim{#1}
%
% For Texinfo it's a lot easier than for LaTeX,
% because texinfo's \verbatim doesn't stop at '\end{verbatim}':
% we need not redefine '\', '{' and '}'.
%
% Inspired by LaTeX's verbatim command set [latex.ltx]
%
\begingroup
  \catcode`\ =\active
  \obeylines %
  % ignore everything up to the first ^^M, that's the newline at the end
  % of the @verbatim input line itself.  Otherwise we get an extra blank
  % line in the output.
  \xdef\doverbatim#1^^M#2@end verbatim{#2\noexpand\end\gobble verbatim}%
  % We really want {...\end verbatim} in the body of the macro, but
  % without the active space; thus we have to use \xdef and \gobble.
\endgroup
%
\envdef\verbatim{%
  \setupverbatim\doverbatim
}
\let\Everbatim = \afterenvbreak


% @verbatiminclude FILE - insert text of file in verbatim environment.
%
\def\verbatiminclude{\parseargusing\filenamecatcodes\doverbatiminclude}
%
\def\doverbatiminclude#1{%
  {%
    \makevalueexpandable
    \setupverbatim
    \input #1
    \afterenvbreak
  }%
}

% @copying ... @end copying.
% Save the text away for @insertcopying later.  Many commands won't be
% allowed in this context, but that's ok.
%
% We save the uninterpreted tokens, rather than creating a box.
% Saving the text in a box would be much easier, but then all the
% typesetting commands (@smallbook, font changes, etc.) have to be done
% beforehand -- and a) we want @copying to be done first in the source
% file; b) letting users define the frontmatter in as flexible order as
% possible is very desirable.
%
\def\copying{\begingroup
  % Define a command to swallow text until we reach `@end copying'.
  % \ is the escape char in this texinfo.tex file, so it is the
  % delimiter for the command; @ will be the escape char when we read
  % it, but that doesn't matter.
  \long\def\docopying##1\end copying{\gdef\copyingtext{##1}\enddocopying}%
  %
  % We must preserve ^^M's in the input file; see \insertcopying below.
  \catcode`\^^M = \active
  \docopying
}

% What we do to finish off the copying text.
%
\def\enddocopying{\endgroup\ignorespaces}

% @insertcopying.  Here we must play games with ^^M's.  On the one hand,
% we need them to delimit commands such as `@end quotation', so they
% must be active.  On the other hand, we certainly don't want every
% end-of-line to be a \par, as would happen with the normal active
% definition of ^^M.  On the third hand, two ^^M's in a row should still
% generate a \par.
%
% Our approach is to make ^^M insert a space and a penalty1 normally;
% then it can also check if \lastpenalty=1.  If it does, then manually
% do \par.
%
% This messes up the normal definitions of @c[omment], so we redefine
% it.  Similarly for @ignore.  (These commands are used in the gcc
% manual for man page generation.)
%
% Seems pretty fragile, most line-oriented commands will presumably
% fail, but for the limited use of getting the copying text (which
% should be quite simple) inserted, we can hope it's ok.
%
{\catcode`\^^M=\active %
\gdef\insertcopying{\begingroup %
  \parindent = 0pt  % looks wrong on title page
  \def^^M{%
    \ifnum \lastpenalty=1 %
      \par %
    \else %
      \space \penalty 1 %
    \fi %
  }%
  %
  % Fix @c[omment] for catcode 13 ^^M's.
  \def\c##1^^M{\ignorespaces}%
  \let\comment = \c %
  %
  % Don't bother jumping through all the hoops that \doignore does, it
  % would be very hard since the catcodes are already set.
  \long\def\ignore##1\end ignore{\ignorespaces}%
  %
  \copyingtext %
\endgroup}%
}

\message{defuns,}
% @defun etc.
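% Illustrative usage (editor's example, not part of the original file):
% in a Texinfo source file the commands implemented here are written as
%
%   @deffn Command forward-word count
%   Move point forward @var{count} words.
%   @end deffn
%
% where `Command' is the category, `forward-word' the name, and `count'
% the argument list, matching the `@deffn category name args' pattern
% noted in the comments that follow.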
\newskip\defbodyindent \defbodyindent=.4in
\newskip\defargsindent \defargsindent=50pt
\newskip\deflastargmargin \deflastargmargin=18pt

% Start the processing of @deffn:
\def\startdefun{%
  \ifnum\lastpenalty<10000
    \medbreak
  \else
    % If there are two @def commands in a row, we'll have a \nobreak,
    % which is there to keep the function description together with its
    % header.  But if there's nothing but headers, we need to allow a
    % break somewhere.  Check for penalty 10002 (inserted by
    % \defargscommonending) instead of 10000, since the sectioning
    % commands insert a \penalty10000, and we don't want to allow a break
    % between a section heading and a defun.
    \ifnum\lastpenalty=10002 \penalty2000 \fi
    %
    % Similarly, after a section heading, do not allow a break.
    % But do insert the glue.
    \medskip  % preceded by discardable penalty, so not a breakpoint
  \fi
  %
  \parindent=0in
  \advance\leftskip by \defbodyindent
  \exdentamount=\defbodyindent
}

\def\dodefunx#1{%
  % First, check whether we are in the right environment:
  \checkenv#1%
  %
  % As above, allow line break if we have multiple x headers in a row.
  % It's not a great place, though.
  \ifnum\lastpenalty=10002 \penalty3000 \fi
  %
  % And now, it's time to reuse the body of the original defun:
  \expandafter\gobbledefun#1%
}
\def\gobbledefun#1\startdefun{}

% \printdefunline \deffnheader{text}
%
\def\printdefunline#1#2{%
  \begingroup
    % call \deffnheader:
    #1#2 \endheader
    % common ending:
    \interlinepenalty = 10000
    \advance\rightskip by 0pt plus 1fil
    \endgraf
    \nobreak\vskip -\parskip
    \penalty 10002  % signal to \startdefun and \dodefunx
    % Some of the @defun-type tags do not enable magic parentheses,
    % rendering the following check redundant.  But we don't optimize.
    \checkparencounts
  \endgroup
}

\def\Edefun{\endgraf\medbreak}

% \makedefun{deffn} creates \deffn, \deffnx and \Edeffn;
% the only thing remaining is to define \deffnheader.
%
\def\makedefun#1{%
  \expandafter\let\csname E#1\endcsname = \Edefun
  \edef\temp{\noexpand\domakedefun
    \makecsname{#1}\makecsname{#1x}\makecsname{#1header}}%
  \temp
}

% \domakedefun \deffn \deffnx \deffnheader
%
% Define \deffn and \deffnx, without parameters.
% \deffnheader has to be defined explicitly.
%
\def\domakedefun#1#2#3{%
  \envdef#1{%
    \startdefun
    \parseargusing\activeparens{\printdefunline#3}%
  }%
  \def#2{\dodefunx#1}%
  \def#3%
}

%%% Untyped functions:

% @deffn category name args
\makedefun{deffn}{\deffngeneral{}}

% @defop category class name args
\makedefun{defop}#1 {\defopon{#1\ \putwordon}}

% \defopon {category on}class name args
\def\defopon#1#2 {\deffngeneral{\putwordon\ \code{#2}}{#1\ \code{#2}} }

% \deffngeneral {subind}category name args
%
\def\deffngeneral#1#2 #3 #4\endheader{%
  % Remember that \dosubind{fn}{foo}{} is equivalent to \doind{fn}{foo}.
  \dosubind{fn}{\code{#3}}{#1}%
  \defname{#2}{}{#3}\magicamp\defunargs{#4\unskip}%
}

%%% Typed functions:

% @deftypefn category type name args
\makedefun{deftypefn}{\deftypefngeneral{}}

% @deftypeop category class type name args
\makedefun{deftypeop}#1 {\deftypeopon{#1\ \putwordon}}

% \deftypeopon {category on}class type name args
\def\deftypeopon#1#2 {\deftypefngeneral{\putwordon\ \code{#2}}{#1\ \code{#2}} }

% \deftypefngeneral {subind}category type name args
%
\def\deftypefngeneral#1#2 #3 #4 #5\endheader{%
  \dosubind{fn}{\code{#4}}{#1}%
  \defname{#2}{#3}{#4}\defunargs{#5\unskip}%
}

%%% Typed variables:

% @deftypevr category type var args
\makedefun{deftypevr}{\deftypecvgeneral{}}

% @deftypecv category class type var args
\makedefun{deftypecv}#1 {\deftypecvof{#1\ \putwordof}}

% \deftypecvof {category of}class type var args
\def\deftypecvof#1#2 {\deftypecvgeneral{\putwordof\ \code{#2}}{#1\ \code{#2}} }

% \deftypecvgeneral {subind}category type var args
%
\def\deftypecvgeneral#1#2 #3 #4 #5\endheader{%
  \dosubind{vr}{\code{#4}}{#1}%
  \defname{#2}{#3}{#4}\defunargs{#5\unskip}%
}

%%% Untyped variables:

% @defvr category var args
\makedefun{defvr}#1 {\deftypevrheader{#1} {} }

% @defcv category class var args
\makedefun{defcv}#1 {\defcvof{#1\ \putwordof}}

% \defcvof {category of}class var args
\def\defcvof#1#2 {\deftypecvof{#1}#2 {} }

%%% Type:
% @deftp category name args
\makedefun{deftp}#1 #2 #3\endheader{%
  \doind{tp}{\code{#2}}%
  \defname{#1}{}{#2}\defunargs{#3\unskip}%
}

% Remaining @defun-like shortcuts:
\makedefun{defun}{\deffnheader{\putwordDeffunc} }
\makedefun{defmac}{\deffnheader{\putwordDefmac} }
\makedefun{defspec}{\deffnheader{\putwordDefspec} }
\makedefun{deftypefun}{\deftypefnheader{\putwordDeffunc} }
\makedefun{defvar}{\defvrheader{\putwordDefvar} }
\makedefun{defopt}{\defvrheader{\putwordDefopt} }
\makedefun{deftypevar}{\deftypevrheader{\putwordDefvar} }
\makedefun{defmethod}{\defopon\putwordMethodon}
\makedefun{deftypemethod}{\deftypeopon\putwordMethodon}
\makedefun{defivar}{\defcvof\putwordInstanceVariableof}
\makedefun{deftypeivar}{\deftypecvof\putwordInstanceVariableof}

% \defname, which formats the name of the @def (not the args).
% #1 is the category, such as "Function".
% #2 is the return type, if any.
% #3 is the function name.
%
% We are followed by (but not passed) the arguments, if any.
%
\def\defname#1#2#3{%
  % Get the values of \leftskip and \rightskip as they were outside the @def...
  \advance\leftskip by -\defbodyindent
  %
  % How we'll format the type name.  Putting it in brackets helps
  % distinguish it from the body text that may end up on the next line
  % just below it.
  \def\temp{#1}%
  \setbox0=\hbox{\kern\deflastargmargin \ifx\temp\empty\else [\rm\temp]\fi}
  %
  % Figure out line sizes for the paragraph shape.
  % The first line needs space for \box0; but if \rightskip is nonzero,
  % we need only space for the part of \box0 which exceeds it:
  \dimen0=\hsize  \advance\dimen0 by -\wd0  \advance\dimen0 by \rightskip
  % The continuations:
  \dimen2=\hsize  \advance\dimen2 by -\defargsindent
  % (plain.tex says that \dimen1 should be used only as global.)
\parshape 2 0in \dimen0 \defargsindent \dimen2 % % Put the type name to the right margin. \noindent \hbox to 0pt{% \hfil\box0 \kern-\hsize % \hsize has to be shortened this way: \kern\leftskip % Intentionally do not respect \rightskip, since we need the space. }% % % Allow all lines to be underfull without complaint: \tolerance=10000 \hbadness=10000 \exdentamount=\defbodyindent {% % defun fonts. We use typewriter by default (used to be bold) because: % . we're printing identifiers, they should be in tt in principle. % . in languages with many accents, such as Czech or French, it's % common to leave accents off identifiers. The result looks ok in % tt, but exceedingly strange in rm. % . we don't want -- and --- to be treated as ligatures. % . this still does not fix the ?` and !` ligatures, but so far no % one has made identifiers using them :). \df \tt \def\temp{#2}% return value type \ifx\temp\empty\else \tclose{\temp} \fi #3% output function name }% {\rm\enskip}% hskip 0.5 em of \tenrm % \boldbrax % arguments will be output next, if any. } % Print arguments in slanted roman (not ttsl), inconsistently with using % tt for the name. This is because literal text is sometimes needed in % the argument list (groff manual), and ttsl and tt are not very % distinguishable. Prevent hyphenation at `-' chars. % \def\defunargs#1{% % use sl by default (not ttsl), % tt for the names. \df \sl \hyphenchar\font=0 % % On the other hand, if an argument has two dashes (for instance), we % want a way to get ttsl. Let's try @var for that. \let\var=\ttslanted #1% \sl\hyphenchar\font=45 } % We want ()&[] to print specially on the defun line. % \def\activeparens{% \catcode`\(=\active \catcode`\)=\active \catcode`\[=\active \catcode`\]=\active \catcode`\&=\active } % Make control sequences which act like normal parenthesis chars. \let\lparen = ( \let\rparen = ) % Be sure that we always have a definition for `(', etc. 
For example, % if the fn name has parens in it, \boldbrax will not be in effect yet, % so TeX would otherwise complain about undefined control sequence. { \activeparens \global\let(=\lparen \global\let)=\rparen \global\let[=\lbrack \global\let]=\rbrack \global\let& = \& \gdef\boldbrax{\let(=\opnr\let)=\clnr\let[=\lbrb\let]=\rbrb} \gdef\magicamp{\let&=\amprm} } \newcount\parencount % If we encounter &foo, then turn on ()-hacking afterwards \newif\ifampseen \def\amprm#1 {\ampseentrue{\bf\ }} \def\parenfont{% \ifampseen % At the first level, print parens in roman, % otherwise use the default font. \ifnum \parencount=1 \rm \fi \else % The \sf parens (in \boldbrax) actually are a little bolder than % the contained text. This is especially needed for [ and ] . \sf \fi } \def\infirstlevel#1{% \ifampseen \ifnum\parencount=1 #1% \fi \fi } \def\bfafterword#1 {#1 \bf} \def\opnr{% \global\advance\parencount by 1 {\parenfont(}% \infirstlevel \bfafterword } \def\clnr{% {\parenfont)}% \infirstlevel \sl \global\advance\parencount by -1 } \newcount\brackcount \def\lbrb{% \global\advance\brackcount by 1 {\bf[}% } \def\rbrb{% {\bf]}% \global\advance\brackcount by -1 } \def\checkparencounts{% \ifnum\parencount=0 \else \badparencount \fi \ifnum\brackcount=0 \else \badbrackcount \fi } \def\badparencount{% \errmessage{Unbalanced parentheses in @def}% \global\parencount=0 } \def\badbrackcount{% \errmessage{Unbalanced square braces in @def}% \global\brackcount=0 } \message{macros,} % @macro. % To do this right we need a feature of e-TeX, \scantokens, % which we arrange to emulate with a temporary file in ordinary TeX. 
\ifx\eTeXversion\undefined \newwrite\macscribble \def\scantokens#1{% \toks0={#1\endinput}% \immediate\openout\macscribble=\jobname.tmp \immediate\write\macscribble{\the\toks0}% \immediate\closeout\macscribble \input \jobname.tmp } \fi \def\scanmacro#1{% \begingroup \newlinechar`\^^M \let\xeatspaces\eatspaces % Undo catcode changes of \startcontents and \doprintindex \catcode`\@=0 \catcode`\\=\other \escapechar=`\@ % ... and \example \spaceisspace % % Append \endinput to make sure that TeX does not see the ending newline. % % I've verified that it is necessary both for e-TeX and for ordinary TeX % --kasal, 29nov03 \scantokens{#1\endinput}% \endgroup } \newcount\paramno % Count of parameters \newtoks\macname % Macro name \newif\ifrecursive % Is it recursive? \def\macrolist{} % List of all defined macros in the form % \do\macro1\do\macro2... % Utility routines. % This does \let #1 = #2, except with \csnames. \def\cslet#1#2{% \expandafter\expandafter \expandafter\let \expandafter\expandafter \csname#1\endcsname \csname#2\endcsname} % Trim leading and trailing spaces off a string. % Concepts from aro-bend problem 15 (see CTAN). {\catcode`\@=11 \gdef\eatspaces #1{\expandafter\trim@\expandafter{#1 }} \gdef\trim@ #1{\trim@@ @#1 @ #1 @ @@} \gdef\trim@@ #1@ #2@ #3@@{\trim@@@\empty #2 @} \def\unbrace#1{#1} \unbrace{\gdef\trim@@@ #1 } #2@{#1} } % Trim a single trailing ^^M off a string. {\catcode`\^^M=\other \catcode`\Q=3% \gdef\eatcr #1{\eatcra #1Q^^MQ}% \gdef\eatcra#1^^MQ{\eatcrb#1Q}% \gdef\eatcrb#1Q#2Q{#1}% } % Macro bodies are absorbed as an argument in a context where % all characters are catcode 10, 11 or 12, except \ which is active % (as in normal texinfo). It is necessary to change the definition of \. % It's necessary to have hard CRs when the macro is executed. This is % done by making ^^M (\endlinechar) catcode 12 when reading the macro % body, and then making it the \newlinechar in \scanmacro. 
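% Usage sketch (hypothetical, not part of texinfo.tex): a document-level
% @macro whose body uses the \param\ notation that \mbodybackslash maps
% to macro parameters:
%   @macro copyrightline{year}
%   Copyright @copyright{} \year\ Free Software Foundation.
%   @end macro
% A later `@copyrightline{2004}' then expands with \year\ replaced by 2004.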
\def\macrobodyctxt{% \catcode`\~=\other \catcode`\^=\other \catcode`\_=\other \catcode`\|=\other \catcode`\<=\other \catcode`\>=\other \catcode`\+=\other \catcode`\{=\other \catcode`\}=\other \catcode`\@=\other \catcode`\^^M=\other \usembodybackslash} \def\macroargctxt{% \catcode`\~=\other \catcode`\^=\other \catcode`\_=\other \catcode`\|=\other \catcode`\<=\other \catcode`\>=\other \catcode`\+=\other \catcode`\@=\other \catcode`\\=\other} % \mbodybackslash is the definition of \ in @macro bodies. % It maps \foo\ => \csname macarg.foo\endcsname => #N % where N is the macro parameter number. % We define \csname macarg.\endcsname to be \realbackslash, so % \\ in macro replacement text gets you a backslash. {\catcode`@=0 @catcode`@\=@active @gdef@usembodybackslash{@let\=@mbodybackslash} @gdef@mbodybackslash#1\{@csname macarg.#1@endcsname} } \expandafter\def\csname macarg.\endcsname{\realbackslash} \def\macro{\recursivefalse\parsearg\macroxxx} \def\rmacro{\recursivetrue\parsearg\macroxxx} \def\macroxxx#1{% \getargs{#1}% now \macname is the macname and \argl the arglist \ifx\argl\empty % no arguments \paramno=0% \else \expandafter\parsemargdef \argl;% \fi \if1\csname ismacro.\the\macname\endcsname \message{Warning: redefining \the\macname}% \else \expandafter\ifx\csname \the\macname\endcsname \relax \else \errmessage{Macro name \the\macname\space already defined}\fi \global\cslet{macsave.\the\macname}{\the\macname}% \global\expandafter\let\csname ismacro.\the\macname\endcsname=1% % Add the macroname to \macrolist \toks0 = \expandafter{\macrolist\do}% \xdef\macrolist{\the\toks0 \expandafter\noexpand\csname\the\macname\endcsname}% \fi \begingroup \macrobodyctxt \ifrecursive \expandafter\parsermacbody \else \expandafter\parsemacbody \fi} \parseargdef\unmacro{% \if1\csname ismacro.#1\endcsname \global\cslet{#1}{macsave.#1}% \global\expandafter\let \csname ismacro.#1\endcsname=0% % Remove the macro name from \macrolist: \begingroup \expandafter\let\csname#1\endcsname \relax 
\let\do\unmacrodo
\xdef\macrolist{\macrolist}%
\endgroup
\else
\errmessage{Macro #1 not defined}%
\fi
}

% Called by \do from \unmacro on each macro. The idea is to omit any
% macro definitions that have been changed to \relax.
%
\def\unmacrodo#1{%
\ifx#1\relax
% remove this
\else
\noexpand\do \noexpand #1%
\fi
}

% This makes use of the obscure feature that if the last token of a
% <parameter list> is #, then the preceding argument is delimited by
% an opening brace, and that opening brace is not consumed.
\def\getargs#1{\getargsxxx#1{}}
\def\getargsxxx#1#{\getmacname #1 \relax\getmacargs}
\def\getmacname #1 #2\relax{\macname={#1}}
\def\getmacargs#1{\def\argl{#1}}

% Parse the optional {params} list. Set up \paramno and \paramlist
% so \defmacro knows what to do. Define \macarg.blah for each blah
% in the params list, to be ##N where N is the position in that list.
% That gets used by \mbodybackslash (above).
% We need to get `macro parameter char #' into several definitions.
% The technique used is stolen from LaTeX: let \hash be something
% unexpandable, insert that wherever you need a #, and then redefine
% it to # just before using the token list produced.
%
% The same technique is used to protect \eatspaces till just before
% the macro is used.
\def\parsemargdef#1;{\paramno=0\def\paramlist{}%
\let\hash\relax\let\xeatspaces\relax\parsemargdefxxx#1,;,}
\def\parsemargdefxxx#1,{%
\if#1;\let\next=\relax
\else \let\next=\parsemargdefxxx
\advance\paramno by 1%
\expandafter\edef\csname macarg.\eatspaces{#1}\endcsname
{\xeatspaces{\hash\the\paramno}}%
\edef\paramlist{\paramlist\hash\the\paramno,}%
\fi\next}

% These two commands read recursive and nonrecursive macro bodies.
% (They're different since rec and nonrec macros end differently.)
\long\def\parsemacbody#1@end macro%
{\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%
\long\def\parsermacbody#1@end rmacro%
{\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%

% This defines the macro itself.
There are six cases: recursive and % nonrecursive macros of zero, one, and many arguments. % Much magic with \expandafter here. % \xdef is used so that macro definitions will survive the file % they're defined in; @include reads the file inside a group. \def\defmacro{% \let\hash=##% convert placeholders to macro parameter chars \ifrecursive \ifcase\paramno % 0 \expandafter\xdef\csname\the\macname\endcsname{% \noexpand\scanmacro{\temp}}% \or % 1 \expandafter\xdef\csname\the\macname\endcsname{% \bgroup\noexpand\macroargctxt \noexpand\braceorline \expandafter\noexpand\csname\the\macname xxx\endcsname}% \expandafter\xdef\csname\the\macname xxx\endcsname##1{% \egroup\noexpand\scanmacro{\temp}}% \else % many \expandafter\xdef\csname\the\macname\endcsname{% \bgroup\noexpand\macroargctxt \noexpand\csname\the\macname xx\endcsname}% \expandafter\xdef\csname\the\macname xx\endcsname##1{% \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}% \expandafter\expandafter \expandafter\xdef \expandafter\expandafter \csname\the\macname xxx\endcsname \paramlist{\egroup\noexpand\scanmacro{\temp}}% \fi \else \ifcase\paramno % 0 \expandafter\xdef\csname\the\macname\endcsname{% \noexpand\norecurse{\the\macname}% \noexpand\scanmacro{\temp}\egroup}% \or % 1 \expandafter\xdef\csname\the\macname\endcsname{% \bgroup\noexpand\macroargctxt \noexpand\braceorline \expandafter\noexpand\csname\the\macname xxx\endcsname}% \expandafter\xdef\csname\the\macname xxx\endcsname##1{% \egroup \noexpand\norecurse{\the\macname}% \noexpand\scanmacro{\temp}\egroup}% \else % many \expandafter\xdef\csname\the\macname\endcsname{% \bgroup\noexpand\macroargctxt \expandafter\noexpand\csname\the\macname xx\endcsname}% \expandafter\xdef\csname\the\macname xx\endcsname##1{% \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}% \expandafter\expandafter \expandafter\xdef \expandafter\expandafter \csname\the\macname xxx\endcsname \paramlist{% \egroup \noexpand\norecurse{\the\macname}% 
\noexpand\scanmacro{\temp}\egroup}%
\fi
\fi}
\def\norecurse#1{\bgroup\cslet{#1}{macsave.#1}}

% \braceorline decides whether the next nonwhitespace character is a
% {. If so it reads up to the closing }, if not, it reads the whole
% line. Whatever was read is then fed to the next control sequence
% as an argument (by \parsebrace or \parsearg)
\def\braceorline#1{\let\next=#1\futurelet\nchar\braceorlinexxx}
\def\braceorlinexxx{%
\ifx\nchar\bgroup\else
\expandafter\parsearg
\fi \next}

% We want to disable all macros during \shipout so that they are not
% expanded by \write.
\def\turnoffmacros{\begingroup \def\do##1{\let\noexpand##1=\relax}%
\edef\next{\macrolist}\expandafter\endgroup\next}

% @alias.
% We need some trickery to remove the optional spaces around the equal
% sign. Just make them active and then expand them all to nothing.
\def\alias{\parseargusing\obeyspaces\aliasxxx}
\def\aliasxxx #1{\aliasyyy#1\relax}
\def\aliasyyy #1=#2\relax{%
{%
\expandafter\let\obeyedspace=\empty
\xdef\next{\global\let\makecsname{#1}=\makecsname{#2}}%
}%
\next
}

\message{cross references,}

\newwrite\auxfile
\newif\ifhavexrefs % True if xref values are known.
\newif\ifwarnedxrefs % True if we warned once that they aren't known.

% @inforef is relatively simple.
\def\inforef #1{\inforefzzz #1,,,,**}
\def\inforefzzz #1,#2,#3,#4**{\putwordSee{} \putwordInfo{} \putwordfile{}
\file{\ignorespaces #3{}}, node \samp{\ignorespaces#1{}}}

% @node's only job in TeX is to define \lastnode, which is used in
% cross-references. The @node line might or might not have commas, and
% might or might not have spaces before the first comma, like:
% @node foo , bar , ...
% We don't want such trailing spaces in the node name.
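% Usage sketch (hypothetical document fragment): a typical @node line,
%   @node Compilation, Linking, Introduction, Top
% For TeX only the first argument (the node name, here `Compilation')
% matters; it is recorded in \lastnode for cross-references. The other
% pointers are used only by makeinfo for Info output.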
% \parseargdef\node{\checkenv{}\donode #1 ,\finishnodeparse} % % also remove a trailing comma, in case of something like this: % @node Help-Cross, , , Cross-refs \def\donode#1 ,#2\finishnodeparse{\dodonode #1,\finishnodeparse} \def\dodonode#1,#2\finishnodeparse{\gdef\lastnode{#1}} \let\nwnode=\node \let\lastnode=\empty % Write a cross-reference definition for the current node. #1 is the % type (Ynumbered, Yappendix, Ynothing). % \def\donoderef#1{% \ifx\lastnode\empty\else \setref{\lastnode}{#1}% \global\let\lastnode=\empty \fi } % @anchor{NAME} -- define xref target at arbitrary point. % \newcount\savesfregister % \def\savesf{\relax \ifhmode \savesfregister=\spacefactor \fi} \def\restoresf{\relax \ifhmode \spacefactor=\savesfregister \fi} \def\anchor#1{\savesf \setref{#1}{Ynothing}\restoresf \ignorespaces} % \setref{NAME}{SNT} defines a cross-reference point NAME (a node or an % anchor), which consists of three parts: % 1) NAME-title - the current sectioning name taken from \thissection, % or the anchor name. % 2) NAME-snt - section number and type, passed as the SNT arg, or % empty for anchors. % 3) NAME-pg - the page number. % % This is called from \donoderef, \anchor, and \dofloat. In the case of % floats, there is an additional part, which is not written here: % 4) NAME-lof - the text as it should appear in a @listoffloats. % \def\setref#1#2{% \pdfmkdest{#1}% \iflinks {% \atdummies % preserve commands, but don't expand them \turnoffactive \otherbackslash \edef\writexrdef##1##2{% \write\auxfile{@xrdef{#1-% #1 of \setref, expanded by the \edef ##1}{##2}}% these are parameters of \writexrdef }% \toks0 = \expandafter{\thissection}% \immediate \writexrdef{title}{\the\toks0 }% \immediate \writexrdef{snt}{\csname #2\endcsname}% \Ynumbered etc. \writexrdef{pg}{\folio}% will be written later, during \shipout }% \fi } % @xref, @pxref, and @ref generate cross-references. 
For \xrefX, #1 is % the node name, #2 the name of the Info cross-reference, #3 the printed % node name, #4 the name of the Info file, #5 the name of the printed % manual. All but the node name can be omitted. % \def\pxref#1{\putwordsee{} \xrefX[#1,,,,,,,]} \def\xref#1{\putwordSee{} \xrefX[#1,,,,,,,]} \def\ref#1{\xrefX[#1,,,,,,,]} \def\xrefX[#1,#2,#3,#4,#5,#6]{\begingroup \unsepspaces \def\printedmanual{\ignorespaces #5}% \def\printedrefname{\ignorespaces #3}% \setbox1=\hbox{\printedmanual\unskip}% \setbox0=\hbox{\printedrefname\unskip}% \ifdim \wd0 = 0pt % No printed node name was explicitly given. \expandafter\ifx\csname SETxref-automatic-section-title\endcsname\relax % Use the node name inside the square brackets. \def\printedrefname{\ignorespaces #1}% \else % Use the actual chapter/section title appear inside % the square brackets. Use the real section title if we have it. \ifdim \wd1 > 0pt % It is in another manual, so we don't have it. \def\printedrefname{\ignorespaces #1}% \else \ifhavexrefs % We know the real title if we have the xref values. \def\printedrefname{\refx{#1-title}{}}% \else % Otherwise just copy the Info node name. \def\printedrefname{\ignorespaces #1}% \fi% \fi \fi \fi % % Make link in pdf output. \ifpdf \leavevmode \getfilename{#4}% {\turnoffactive \otherbackslash \ifnum\filenamelength>0 \startlink attr{/Border [0 0 0]}% goto file{\the\filename.pdf} name{#1}% \else \startlink attr{/Border [0 0 0]}% goto name{\pdfmkpgn{#1}}% \fi }% \linkcolor \fi % % Float references are printed completely differently: "Figure 1.2" % instead of "[somenode], p.3". We distinguish them by the % LABEL-title being set to a magic string. {% % Have to otherify everything special to allow the \csname to % include an _ in the xref name, etc. 
\indexnofonts
\turnoffactive
\otherbackslash
\expandafter\global\expandafter\let\expandafter\Xthisreftitle
\csname XR#1-title\endcsname
}%
\iffloat\Xthisreftitle
% If the user specified the print name (third arg) to the ref,
% print it instead of our usual "Figure 1.2".
\ifdim\wd0 = 0pt
\refx{#1-snt}%
\else
\printedrefname
\fi
%
% if the user also gave the printed manual name (fifth arg), append
% "in MANUALNAME".
\ifdim \wd1 > 0pt
\space \putwordin{} \cite{\printedmanual}%
\fi
\else
% node/anchor (non-float) references.
%
% If we use \unhbox0 and \unhbox1 to print the node names, TeX does not
% insert empty discretionaries after hyphens, which means that it will
% not find a line break at a hyphen in node names. Since some manuals
% are best written with fairly long node names, containing hyphens, this
% is a loss. Therefore, we give the text of the node name again, so it
% is as if TeX is seeing it for the first time.
\ifdim \wd1 > 0pt
\putwordsection{} ``\printedrefname'' \putwordin{} \cite{\printedmanual}%
\else
% _ (for example) has to be the character _ for the purposes of the
% control sequence corresponding to the node, but it has to expand
% into the usual \leavevmode...\vrule stuff for purposes of
% printing. So we \turnoffactive for the \refx-snt, back on for the
% printing, back off for the \refx-pg.
{\turnoffactive \otherbackslash
% Only output a following space if the -snt ref is nonempty; for
% @unnumbered and @anchor, it won't be.
\setbox2 = \hbox{\ignorespaces \refx{#1-snt}{}}%
\ifdim \wd2 > 0pt \refx{#1-snt}\space\fi
}%
% output the `[mynode]' via a macro so it can be overridden.
\xrefprintnodename\printedrefname
%
% But we always want a comma and a space:
,\space
%
% output the `page 3'.
\turnoffactive \otherbackslash
\putwordpage\tie\refx{#1-pg}{}%
\fi
\fi
\endlink
\endgroup}

% This macro is called from \xrefX for the `[nodename]' part of xref
% output.
It's a separate macro only so it can be changed more easily, % since square brackets don't work well in some documents. Particularly % one that Bob is working on :). % \def\xrefprintnodename#1{[#1]} % Things referred to by \setref. % \def\Ynothing{} \def\Yomitfromtoc{} \def\Ynumbered{% \ifnum\secno=0 \putwordChapter@tie \the\chapno \else \ifnum\subsecno=0 \putwordSection@tie \the\chapno.\the\secno \else \ifnum\subsubsecno=0 \putwordSection@tie \the\chapno.\the\secno.\the\subsecno \else \putwordSection@tie \the\chapno.\the\secno.\the\subsecno.\the\subsubsecno \fi\fi\fi } \def\Yappendix{% \ifnum\secno=0 \putwordAppendix@tie @char\the\appendixno{}% \else \ifnum\subsecno=0 \putwordSection@tie @char\the\appendixno.\the\secno \else \ifnum\subsubsecno=0 \putwordSection@tie @char\the\appendixno.\the\secno.\the\subsecno \else \putwordSection@tie @char\the\appendixno.\the\secno.\the\subsecno.\the\subsubsecno \fi\fi\fi } % Define \refx{NAME}{SUFFIX} to reference a cross-reference string named NAME. % If its value is nonempty, SUFFIX is output afterward. % \def\refx#1#2{% {% \indexnofonts \otherbackslash \expandafter\global\expandafter\let\expandafter\thisrefX \csname XR#1\endcsname }% \ifx\thisrefX\relax % If not defined, say something at least. \angleleft un\-de\-fined\angleright \iflinks \ifhavexrefs \message{\linenumber Undefined cross reference `#1'.}% \else \ifwarnedxrefs\else \global\warnedxrefstrue \message{Cross reference values unknown; you must run TeX again.}% \fi \fi \fi \else % It's defined, so just use it. \thisrefX \fi #2% Output the suffix in any case. } % This is the macro invoked by entries in the aux file. Usually it's % just a \def (we prepend XR to the control sequence name to avoid % collisions). But if this is a float type, we have more work to do. % \def\xrdef#1#2{% \expandafter\gdef\csname XR#1\endcsname{#2}% remember this xref value. % % Was that xref control sequence that we just defined for a float? 
\expandafter\iffloat\csname XR#1\endcsname % it was a float, and we have the (safe) float type in \iffloattype. \expandafter\let\expandafter\floatlist \csname floatlist\iffloattype\endcsname % % Is this the first time we've seen this float type? \expandafter\ifx\floatlist\relax \toks0 = {\do}% yes, so just \do \else % had it before, so preserve previous elements in list. \toks0 = \expandafter{\floatlist\do}% \fi % % Remember this xref in the control sequence \floatlistFLOATTYPE, % for later use in \listoffloats. \expandafter\xdef\csname floatlist\iffloattype\endcsname{\the\toks0{#1}}% \fi } % Read the last existing aux file, if any. No error if none exists. % \def\tryauxfile{% \openin 1 \jobname.aux \ifeof 1 \else \readauxfile \global\havexrefstrue \fi \closein 1 } \def\readauxfile{\begingroup \catcode`\^^@=\other \catcode`\^^A=\other \catcode`\^^B=\other \catcode`\^^C=\other \catcode`\^^D=\other \catcode`\^^E=\other \catcode`\^^F=\other \catcode`\^^G=\other \catcode`\^^H=\other \catcode`\^^K=\other \catcode`\^^L=\other \catcode`\^^N=\other \catcode`\^^P=\other \catcode`\^^Q=\other \catcode`\^^R=\other \catcode`\^^S=\other \catcode`\^^T=\other \catcode`\^^U=\other \catcode`\^^V=\other \catcode`\^^W=\other \catcode`\^^X=\other \catcode`\^^Z=\other \catcode`\^^[=\other \catcode`\^^\=\other \catcode`\^^]=\other \catcode`\^^^=\other \catcode`\^^_=\other % It was suggested to set the catcode of ^ to 7, which would allow ^^e4 etc. % in xref tags, i.e., node names. But since ^^e4 notation isn't % supported in the main text, it doesn't seem desirable. Furthermore, % that is not enough: for node names that actually contain a ^ % character, we would end up writing a line like this: 'xrdef {'hat % b-title}{'hat b} and \xrdef does a \csname...\endcsname on the first % argument, and \hat is not an expandable control sequence. It could % all be worked out, but why? Either we support ^^ or we don't. 
% % The other change necessary for this was to define \auxhat: % \def\auxhat{\def^{'hat }}% extra space so ok if followed by letter % and then to call \auxhat in \setq. % \catcode`\^=\other % % Special characters. Should be turned off anyway, but... \catcode`\~=\other \catcode`\[=\other \catcode`\]=\other \catcode`\"=\other \catcode`\_=\other \catcode`\|=\other \catcode`\<=\other \catcode`\>=\other \catcode`\$=\other \catcode`\#=\other \catcode`\&=\other \catcode`\%=\other \catcode`+=\other % avoid \+ for paranoia even though we've turned it off % % This is to support \ in node names and titles, since the \ % characters end up in a \csname. It's easier than % leaving it active and making its active definition an actual \ % character. What I don't understand is why it works in the *value* % of the xrdef. Seems like it should be a catcode12 \, and that % should not typeset properly. But it works, so I'm moving on for % now. --karl, 15jan04. \catcode`\\=\other % % Make the characters 128-255 be printing characters. {% \count 1=128 \def\loop{% \catcode\count 1=\other \advance\count 1 by 1 \ifnum \count 1<256 \loop \fi }% }% % % @ is our escape character in .aux files, and we need braces. \catcode`\{=1 \catcode`\}=2 \catcode`\@=0 % \input \jobname.aux \endgroup} \message{insertions,} % including footnotes. \newcount \footnoteno % The trailing space in the following definition for supereject is % vital for proper filling; pages come out unaligned when you do a % pagealignmacro call if that space before the closing brace is % removed. (Generally, numeric constants should always be followed by a % space to prevent strange expansion errors.) \def\supereject{\par\penalty -20000\footnoteno =0 } % @footnotestyle is meaningful for info output only. \let\footnotestyle=\comment {\catcode `\@=11 % % Auto-number footnotes. Otherwise like plain. 
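% Usage sketch (hypothetical document fragment): in a manual source,
%   ... as described by Knuth.@footnote{See @cite{The TeXbook}.}
% The footnote number is assigned automatically; the counter
% \footnoteno above is reset to zero at each \supereject.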
\gdef\footnote{% \let\indent=\ptexindent \let\noindent=\ptexnoindent \global\advance\footnoteno by \@ne \edef\thisfootno{$^{\the\footnoteno}$}% % % In case the footnote comes at the end of a sentence, preserve the % extra spacing after we do the footnote number. \let\@sf\empty \ifhmode\edef\@sf{\spacefactor\the\spacefactor}\ptexslash\fi % % Remove inadvertent blank space before typesetting the footnote number. \unskip \thisfootno\@sf \dofootnote }% % Don't bother with the trickery in plain.tex to not require the % footnote text as a parameter. Our footnotes don't need to be so general. % % Oh yes, they do; otherwise, @ifset (and anything else that uses % \parseargline) fails inside footnotes because the tokens are fixed when % the footnote is read. --karl, 16nov96. % \gdef\dofootnote{% \insert\footins\bgroup % We want to typeset this text as a normal paragraph, even if the % footnote reference occurs in (for example) a display environment. % So reset some parameters. \hsize=\pagewidth \interlinepenalty\interfootnotelinepenalty \splittopskip\ht\strutbox % top baseline for broken footnotes \splitmaxdepth\dp\strutbox \floatingpenalty\@MM \leftskip\z@skip \rightskip\z@skip \spaceskip\z@skip \xspaceskip\z@skip \parindent\defaultparindent % \smallfonts \rm % % Because we use hanging indentation in footnotes, a @noindent appears % to exdent this text, so make it be a no-op. makeinfo does not use % hanging indentation so @noindent can still be needed within footnote % text after an @example or the like (not that this is good style). \let\noindent = \relax % % Hang the footnote text off the number. Use \everypar in case the % footnote extends for more than one paragraph. \everypar = {\hang}% \textindent{\thisfootno}% % % Don't crash into the line above the footnote text. Since this % expands into a box, it must come within the paragraph, lest it % provide a place where TeX can split the footnote. 
\footstrut
\futurelet\next\fo@t
}
}%end \catcode `\@=11

% In case a @footnote appears in a vbox, save the footnote text and create
% the real \insert just after the vbox has finished. Otherwise, the
% insertion would be lost.
% Similarly, if a @footnote appears inside an alignment, save the footnote
% text to a box and make the \insert when a row of the table is finished.
% And the same can be done for other insert classes. --kasal, 16nov03.

% Replace the \insert primitive by a cheating macro.
% Deeper inside, just make sure that the saved insertions are not spilled
% out prematurely.
%
\def\startsavinginserts{%
\ifx \insert\ptexinsert
\let\insert\saveinsert
\else
\let\checkinserts\relax
\fi
}

% This \insert replacement works for both \insert\footins{foo} and
% \insert\footins\bgroup foo\egroup, but it doesn't work for \insert27{foo}.
%
\def\saveinsert#1{%
\edef\next{\noexpand\savetobox \makeSAVEname#1}%
\afterassignment\next
% swallow the left brace
\let\temp =
}
\def\makeSAVEname#1{\makecsname{SAVE\expandafter\gobble\string#1}}
\def\savetobox#1{\global\setbox#1 = \vbox\bgroup \unvbox#1}

\def\checksaveins#1{\ifvoid#1\else \placesaveins#1\fi}

\def\placesaveins#1{%
\ptexinsert \csname\expandafter\gobblesave\string#1\endcsname
{\box#1}%
}

% eat @SAVE -- beware, all of them have catcode \other:
{
\def\dospecials{\do S\do A\do V\do E} \uncatcodespecials % ;-)
\gdef\gobblesave @SAVE{}
}

% initialization:
\def\newsaveins #1{%
\edef\next{\noexpand\newsaveinsX \makeSAVEname#1}%
\next
}
\def\newsaveinsX #1{%
\csname newbox\endcsname #1%
\expandafter\def\expandafter\checkinserts\expandafter{\checkinserts
\checksaveins #1}%
}

% initialize:
\let\checkinserts\empty
\newsaveins\footins
\newsaveins\margin

% @image. We use the macros from epsf.tex to support this.
% If epsf.tex is not installed and @image is used, we complain.
%
% Check for and read epsf.tex up front.
If we read it only at @image % time, we might be inside a group, and then its definitions would get % undone and the next image would fail. \openin 1 = epsf.tex \ifeof 1 \else % Do not bother showing banner with epsf.tex v2.7k (available in % doc/epsf.tex and on ctan). \def\epsfannounce{\toks0 = }% \input epsf.tex \fi \closein 1 % % We will only complain once about lack of epsf.tex. \newif\ifwarnednoepsf \newhelp\noepsfhelp{epsf.tex must be installed for images to work. It is also included in the Texinfo distribution, or you can get it from ftp://tug.org/tex/epsf.tex.} % \def\image#1{% \ifx\epsfbox\undefined \ifwarnednoepsf \else \errhelp = \noepsfhelp \errmessage{epsf.tex not found, images will be ignored}% \global\warnednoepsftrue \fi \else \imagexxx #1,,,,,\finish \fi } % % Arguments to @image: % #1 is (mandatory) image filename; we tack on .eps extension. % #2 is (optional) width, #3 is (optional) height. % #4 is (ignored optional) html alt text. % #5 is (ignored optional) extension. % #6 is just the usual extra ignored arg for parsing this stuff. \newif\ifimagevmode \def\imagexxx#1,#2,#3,#4,#5,#6\finish{\begingroup \catcode`\^^M = 5 % in case we're inside an example \normalturnoffactive % allow _ et al. in names % If the image is by itself, center it. \ifvmode \imagevmodetrue \nobreak\bigskip % Usually we'll have text after the image which will insert % \parskip glue, so insert it here too to equalize the space % above and below. \nobreak\vskip\parskip \nobreak \line\bgroup\hss \fi % % Output the image. \ifpdf \dopdfimage{#1}{#2}{#3}% \else % \epsfbox itself resets \epsf?size at each figure. \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \epsfxsize=#2\relax \fi \setbox0 = \hbox{\ignorespaces #3}\ifdim\wd0 > 0pt \epsfysize=#3\relax \fi \epsfbox{#1.eps}% \fi % \ifimagevmode \hss \egroup \bigbreak \fi % space after the image \endgroup} % @float FLOATTYPE,LOC ... @end float for displayed figures, tables, etc. 
% We don't actually implement floating yet, we just plop the float "here". % But it seemed the best name for the future. % \envparseargdef\float{\dofloat #1,,,\finish} % #1 is the optional FLOATTYPE, the text label for this float, typically % "Figure", "Table", "Example", etc. Can't contain commas. If omitted, % this float will not be numbered and cannot be referred to. % % #2 is the optional xref label. Also must be present for the float to % be referable. % % #3 is the optional positioning argument; for now, it is ignored. It % will somehow specify the positions allowed to float to (here, top, bottom). % % We keep a separate counter for each FLOATTYPE, which we reset at each % chapter-level command. \let\resetallfloatnos=\empty % \def\dofloat#1,#2,#3,#4\finish{% \let\thiscaption=\empty \let\thisshortcaption=\empty % % don't lose footnotes inside @float. \startsavinginserts % % We can't be used inside a paragraph. \par % \vtop\bgroup \def\floattype{#1}% \def\floatlabel{#2}% \def\floatloc{#3}% we do nothing with this yet. % \ifx\floattype\empty \let\safefloattype=\empty \else {% % the floattype might have accents or other special characters, % but we need to use it in a control sequence name. \indexnofonts \turnoffactive \xdef\safefloattype{\floattype}% }% \fi % % If label is given but no type, we handle that as the empty type. \ifx\floatlabel\empty \else % We want each FLOATTYPE to be numbered separately (Figure 1, % Table 1, Figure 2, ...). (And if no label, no number.) % \expandafter\getfloatno\csname\safefloattype floatno\endcsname \global\advance\floatno by 1 % {% % This magic value for \thissection is output by \setref as the % XREFLABEL-title value. \xrefX uses it to distinguish float % labels (which have a completely different output format) from % node and anchor labels. And \xrdef uses it to construct the % lists of floats. % \edef\thissection{\floatmagic=\safefloattype}% \setref{\floatlabel}{Yfloat}% }% \fi % % start with \parskip glue, I guess. 
\vskip\parskip % % Don't suppress indentation if a float happens to start a section. \restorefirstparagraphindent } % we have these possibilities: % @float Foo,lbl & @caption{Cap}: Foo 1.1: Cap % @float Foo,lbl & no caption: Foo 1.1 % @float Foo & @caption{Cap}: Foo: Cap % @float Foo & no caption: Foo % @float ,lbl & Caption{Cap}: 1.1: Cap % @float ,lbl & no caption: 1.1 % @float & @caption{Cap}: Cap % @float & no caption: % \def\Efloat{% \let\floatident = \empty % % In all cases, if we have a float type, it comes first. \ifx\floattype\empty \else \def\floatident{\floattype}\fi % % If we have an xref label, the number comes next. \ifx\floatlabel\empty \else \ifx\floattype\empty \else % if also had float type, need tie first. \appendtomacro\floatident{\tie}% \fi % the number. \appendtomacro\floatident{\chaplevelprefix\the\floatno}% \fi % % Start the printed caption with what we've constructed in % \floatident, but keep it separate; we need \floatident again. \let\captionline = \floatident % \ifx\thiscaption\empty \else \ifx\floatident\empty \else \appendtomacro\captionline{: }% had ident, so need a colon between \fi % % caption text. \appendtomacro\captionline\thiscaption \fi % % If we have anything to print, print it, with space before. % Eventually this needs to become an \insert. \ifx\captionline\empty \else \vskip.5\parskip \captionline \fi % % If have an xref label, write the list of floats info. Do this % after the caption, to avoid chance of it being a breakpoint. \ifx\floatlabel\empty \else % Write the text that goes in the lof to the aux file as % \floatlabel-lof. Besides \floatident, we include the short % caption if specified, else the full caption if specified, else nothing. {% \atdummies \turnoffactive \otherbackslash \immediate\write\auxfile{@xrdef{\floatlabel-lof}{% \floatident \ifx\thisshortcaption\empty \ifx\thiscaption\empty \else : \thiscaption \fi \else : \thisshortcaption \fi }}% }% \fi % % Space below caption, if we printed anything. 
\ifx\printedsomething\empty \else \vskip\parskip \fi \egroup % end of \vtop \checkinserts } % Append the tokens #2 to the definition of macro #1, not expanding either. % \newtoks\appendtomacroAtoks \newtoks\appendtomacroBtoks \def\appendtomacro#1#2{% \appendtomacroAtoks = \expandafter{#1}% \appendtomacroBtoks = {#2}% \edef#1{\the\appendtomacroAtoks \the\appendtomacroBtoks}% } % @caption, @shortcaption are easy. % \long\def\caption#1{\checkenv\float \def\thiscaption{#1}} \def\shortcaption#1{\checkenv\float \def\thisshortcaption{#1}} % The parameter is the control sequence identifying the counter we are % going to use. Create it if it doesn't exist and assign it to \floatno. \def\getfloatno#1{% \ifx#1\relax % Haven't seen this figure type before. \csname newcount\endcsname #1% % % Remember to reset this floatno at the next chap. \expandafter\gdef\expandafter\resetallfloatnos \expandafter{\resetallfloatnos #1=0 }% \fi \let\floatno#1% } % \setref calls this to get the XREFLABEL-snt value. We want an @xref % to the FLOATLABEL to expand to "Figure 3.1". We call \setref when we % first read the @float command. % \def\Yfloat{\floattype@tie \chaplevelprefix\the\floatno}% % Magic string used for the XREFLABEL-title value, so \xrefX can % distinguish floats from other xref types. \def\floatmagic{!!float!!} % #1 is the control sequence we are passed; we expand into a conditional % which is true if #1 represents a float ref. That is, the magic % \thissection value which we \setref above. % \def\iffloat#1{\expandafter\doiffloat#1==\finish} % % #1 is (maybe) the \floatmagic string. If so, #2 will be the % (safe) float type for this float. We set \iffloattype to #2. % \def\doiffloat#1=#2=#3\finish{% \def\temp{#1}% \def\iffloattype{#2}% \ifx\temp\floatmagic } % @listoffloats FLOATTYPE - print a list of floats like a table of contents. 
% \parseargdef\listoffloats{% \def\floattype{#1}% floattype {% % the floattype might have accents or other special characters, % but we need to use it in a control sequence name. \indexnofonts \turnoffactive \xdef\safefloattype{\floattype}% }% % % \xrdef saves the floats as a \do-list in \floatlistSAFEFLOATTYPE. \expandafter\ifx\csname floatlist\safefloattype\endcsname \relax \ifhavexrefs % if the user said @listoffloats foo but never @float foo. \message{\linenumber No `\safefloattype' floats to list.}% \fi \else \begingroup \leftskip=\tocindent % indent these entries like a toc \let\do=\listoffloatsdo \csname floatlist\safefloattype\endcsname \endgroup \fi } % This is called on each entry in a list of floats. We're passed the % xref label, in the form LABEL-title, which is how we save it in the % aux file. We strip off the -title and look up \XRLABEL-lof, which % has the text we're supposed to typeset here. % % Figures without xref labels will not be included in the list (since % they won't appear in the aux file). % \def\listoffloatsdo#1{\listoffloatsdoentry#1\finish} \def\listoffloatsdoentry#1-title\finish{{% % Can't fully expand XR#1-lof because it can contain anything. Just % pass the control sequence. On the other hand, XR#1-pg is just the % page number, and we want to fully expand that so we can get a link % in pdf output. \toksA = \expandafter{\csname XR#1-lof\endcsname}% % % use the same \entry macro we use to generate the TOC and index. \edef\writeentry{\noexpand\entry{\the\toksA}{\csname XR#1-pg\endcsname}}% \writeentry }} \message{localization,} % and i18n. % @documentlanguage is usually given very early, just after % @setfilename. If done too late, it may not override everything % properly. Single argument is the language abbreviation. % It would be nice if we could set up a hyphenation file here. % \parseargdef\documentlanguage{% \tex % read txi-??.tex file in plain TeX. % Read the file if it exists. 
\openin 1 txi-#1.tex \ifeof 1 \errhelp = \nolanghelp \errmessage{Cannot read language file txi-#1.tex}% \else \input txi-#1.tex \fi \closein 1 \endgroup } \newhelp\nolanghelp{The given language definition file cannot be found or is empty. Maybe you need to install it? In the current directory should work if nowhere else does.} % @documentencoding should change something in TeX eventually, most % likely, but for now just recognize it. \let\documentencoding = \comment % Page size parameters. % \newdimen\defaultparindent \defaultparindent = 15pt \chapheadingskip = 15pt plus 4pt minus 2pt \secheadingskip = 12pt plus 3pt minus 2pt \subsecheadingskip = 9pt plus 2pt minus 2pt % Prevent underfull vbox error messages. \vbadness = 10000 % Don't be so finicky about underfull hboxes, either. \hbadness = 2000 % Following George Bush, just get rid of widows and orphans. \widowpenalty=10000 \clubpenalty=10000 % Use TeX 3.0's \emergencystretch to help line breaking, but if we're % using an old version of TeX, don't do anything. We want the amount of % stretch added to depend on the line length, hence the dependence on % \hsize. We call this whenever the paper size is set. % \def\setemergencystretch{% \ifx\emergencystretch\thisisundefined % Allow us to assign to \emergencystretch anyway. \def\emergencystretch{\dimen0}% \else \emergencystretch = .15\hsize \fi } % Parameters in order: 1) textheight; 2) textwidth; 3) voffset; % 4) hoffset; 5) binding offset; 6) topskip; 7) physical page height; 8) % physical page width. % % We also call \setleading{\textleading}, so the caller should define % \textleading. The caller should also set \parskip. 
% \def\internalpagesizes#1#2#3#4#5#6#7#8{% \voffset = #3\relax \topskip = #6\relax \splittopskip = \topskip % \vsize = #1\relax \advance\vsize by \topskip \outervsize = \vsize \advance\outervsize by 2\topandbottommargin \pageheight = \vsize % \hsize = #2\relax \outerhsize = \hsize \advance\outerhsize by 0.5in \pagewidth = \hsize % \normaloffset = #4\relax \bindingoffset = #5\relax % \ifpdf \pdfpageheight #7\relax \pdfpagewidth #8\relax \fi % \setleading{\textleading} % \parindent = \defaultparindent \setemergencystretch } % @letterpaper (the default). \def\letterpaper{{\globaldefs = 1 \parskip = 3pt plus 2pt minus 1pt \textleading = 13.2pt % % If page is nothing but text, make it come out even. \internalpagesizes{46\baselineskip}{6in}% {\voffset}{.25in}% {\bindingoffset}{36pt}% {11in}{8.5in}% }} % Use @smallbook to reset parameters for 7x9.5 (or so) format. \def\smallbook{{\globaldefs = 1 \parskip = 2pt plus 1pt \textleading = 12pt % \internalpagesizes{7.5in}{5in}% {\voffset}{.25in}% {\bindingoffset}{16pt}% {9.25in}{7in}% % \lispnarrowing = 0.3in \tolerance = 700 \hfuzz = 1pt \contentsrightmargin = 0pt \defbodyindent = .5cm }} % Use @afourpaper to print on European A4 paper. \def\afourpaper{{\globaldefs = 1 \parskip = 3pt plus 2pt minus 1pt \textleading = 13.2pt % % Double-side printing via postscript on Laserjet 4050 % prints double-sided nicely when \bindingoffset=10mm and \hoffset=-6mm. % To change the settings for a different printer or situation, adjust % \normaloffset until the front-side and back-side texts align. Then % do the same for \bindingoffset. You can set these for testing in % your texinfo source file like this: % @tex % \global\normaloffset = -6mm % \global\bindingoffset = 10mm % @end tex \internalpagesizes{51\baselineskip}{160mm} {\voffset}{\hoffset}% {\bindingoffset}{44pt}% {297mm}{210mm}% % \tolerance = 700 \hfuzz = 1pt \contentsrightmargin = 0pt \defbodyindent = 5mm }} % Use @afivepaper to print on European A5 paper. 
% From romildo@urano.iceb.ufop.br, 2 July 2000. % He also recommends making @example and @lisp be small. \def\afivepaper{{\globaldefs = 1 \parskip = 2pt plus 1pt minus 0.1pt \textleading = 12.5pt % \internalpagesizes{160mm}{120mm}% {\voffset}{\hoffset}% {\bindingoffset}{8pt}% {210mm}{148mm}% % \lispnarrowing = 0.2in \tolerance = 800 \hfuzz = 1.2pt \contentsrightmargin = 0pt \defbodyindent = 2mm \tableindent = 12mm }} % A specific text layout, 24x15cm overall, intended for A4 paper. \def\afourlatex{{\globaldefs = 1 \afourpaper \internalpagesizes{237mm}{150mm}% {\voffset}{4.6mm}% {\bindingoffset}{7mm}% {297mm}{210mm}% % % Must explicitly reset to 0 because we call \afourpaper. \globaldefs = 0 }} % Use @afourwide to print on A4 paper in landscape format. \def\afourwide{{\globaldefs = 1 \afourpaper \internalpagesizes{241mm}{165mm}% {\voffset}{-2.95mm}% {\bindingoffset}{7mm}% {297mm}{210mm}% \globaldefs = 0 }} % @pagesizes TEXTHEIGHT[,TEXTWIDTH] % Perhaps we should allow setting the margins, \topskip, \parskip, % and/or leading, also. Or perhaps we should compute them somehow. % \parseargdef\pagesizes{\pagesizesyyy #1,,\finish} \def\pagesizesyyy#1,#2,#3\finish{{% \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \hsize=#2\relax \fi \globaldefs = 1 % \parskip = 3pt plus 2pt minus 1pt \setleading{\textleading}% % \dimen0 = #1 \advance\dimen0 by \voffset % \dimen2 = \hsize \advance\dimen2 by \normaloffset % \internalpagesizes{#1}{\hsize}% {\voffset}{\normaloffset}% {\bindingoffset}{44pt}% {\dimen0}{\dimen2}% }} % Set default to letter. % \letterpaper \message{and turning on texinfo input format.} % Define macros to output various characters with catcode for normal text. 
\catcode`\"=\other \catcode`\~=\other \catcode`\^=\other \catcode`\_=\other \catcode`\|=\other \catcode`\<=\other \catcode`\>=\other \catcode`\+=\other \catcode`\$=\other \def\normaldoublequote{"} \def\normaltilde{~} \def\normalcaret{^} \def\normalunderscore{_} \def\normalverticalbar{|} \def\normalless{<} \def\normalgreater{>} \def\normalplus{+} \def\normaldollar{$}%$ font-lock fix % This macro is used to make a character print one way in \tt % (where it can probably be output as-is), and another way in other fonts, % where something hairier probably needs to be done. % % #1 is what to print if we are indeed using \tt; #2 is what to print % otherwise. Since all the Computer Modern typewriter fonts have zero % interword stretch (and shrink), and it is reasonable to expect all % typewriter fonts to have this, we can check that font parameter. % \def\ifusingtt#1#2{\ifdim \fontdimen3\font=0pt #1\else #2\fi} % Same as above, but check for italic font. Actually this also catches % non-italic slanted fonts since it is impossible to distinguish them from % italic fonts. But since this is only used by $ and it uses \sl anyway % this is not a problem. \def\ifusingit#1#2{\ifdim \fontdimen1\font>0pt #1\else #2\fi} % Turn off all special characters except @ % (and those which the user can use as if they were ordinary). % Most of these we simply print from the \tt font, but for some, we can % use math or other variants that look better in normal text. \catcode`\"=\active \def\activedoublequote{{\tt\char34}} \let"=\activedoublequote \catcode`\~=\active \def~{{\tt\char126}} \chardef\hat=`\^ \catcode`\^=\active \def^{{\tt \hat}} \catcode`\_=\active \def_{\ifusingtt\normalunderscore\_} % Subroutine for the previous macro. 
\def\_{\leavevmode \kern.07em \vbox{\hrule width.3em height.1ex}\kern .07em } \catcode`\|=\active \def|{{\tt\char124}} \chardef \less=`\< \catcode`\<=\active \def<{{\tt \less}} \chardef \gtr=`\> \catcode`\>=\active \def>{{\tt \gtr}} \catcode`\+=\active \def+{{\tt \char 43}} \catcode`\$=\active \def${\ifusingit{{\sl\$}}\normaldollar}%$ font-lock fix % If a .fmt file is being used, characters that might appear in a file % name cannot be active until we have parsed the command line. % So turn them off again, and have \everyjob (or @setfilename) turn them on. % \otherifyactive is called near the end of this file. \def\otherifyactive{\catcode`+=\other \catcode`\_=\other} \catcode`\@=0 % \backslashcurfont outputs one backslash character in current font, % as in \char`\\. \global\chardef\backslashcurfont=`\\ \global\let\rawbackslashxx=\backslashcurfont % let existing .??s files work % \rawbackslash defines an active \ to do \backslashcurfont. % \otherbackslash defines an active \ to be a literal `\' character with % catcode other. {\catcode`\\=\active @gdef@rawbackslash{@let\=@backslashcurfont} @gdef@otherbackslash{@let\=@realbackslash} } % \realbackslash is an actual character `\' with catcode other. {\catcode`\\=\other @gdef@realbackslash{\}} % \normalbackslash outputs one backslash in fixed width font. \def\normalbackslash{{\tt\backslashcurfont}} \catcode`\\=\active % Used sometimes to turn off (effectively) the active characters % even after parsing them. @def@turnoffactive{% @let"=@normaldoublequote @let\=@realbackslash @let~=@normaltilde @let^=@normalcaret @let_=@normalunderscore @let|=@normalverticalbar @let<=@normalless @let>=@normalgreater @let+=@normalplus @let$=@normaldollar %$ font-lock fix @unsepspaces } % Same as @turnoffactive except outputs \ as {\tt\char`\\} instead of % the literal character `\'. (Thus, \ is not expandable when this is in % effect.) 
% @def@normalturnoffactive{@turnoffactive @let\=@normalbackslash} % Make _ and + \other characters, temporarily. % This is canceled by @fixbackslash. @otherifyactive % If a .fmt file is being used, we don't want the `\input texinfo' to show up. % That is what \eatinput is for; after that, the `\' should revert to printing % a backslash. % @gdef@eatinput input texinfo{@fixbackslash} @global@let\ = @eatinput % On the other hand, perhaps the file did not have a `\input texinfo'. Then % the first `\{ in the file would cause an error. This macro tries to fix % that, assuming it is called before the first `\' could plausibly occur. % Also back turn on active characters that might appear in the input % file name, in case not using a pre-dumped format. % @gdef@fixbackslash{% @ifx\@eatinput @let\ = @normalbackslash @fi @catcode`+=@active @catcode`@_=@active } % Say @foo, not \foo, in error messages. @escapechar = `@@ % These look ok in all fonts, so just make them not special. @catcode`@& = @other @catcode`@# = @other @catcode`@% = @other @c Local variables: @c eval: (add-hook 'write-file-hooks 'time-stamp) @c page-delimiter: "^\\\\message" @c time-stamp-start: "def\\\\texinfoversion{" @c time-stamp-format: "%:y-%02m-%02d.%02H" @c time-stamp-end: "}" @c End: @c vim:sw=2: @ignore arch-tag: e1b36e32-c96e-4135-a41a-0b2efa2ea115 @end ignore \input texinfo.tex @c -*-texinfo-*- @c This is the texinfo source for the manual "An Introduction to GCC". @c Copyright (C) 2003, 2004 Network Theory Ltd @c See the file COPYING.FDL for copying conditions.
@c @include config-local.texi @c %**start of header @setfilename gccintro.info @settitle An Introduction to GCC @c %**end of header @iftex @finalout @end iftex @setchapternewpage odd @titlepage @title An Introduction to GCC @subtitle for the GNU Compilers @code{gcc} and @code{g++} @author Brian Gough @author Foreword by Richard M.@: Stallman @page @vskip 0pt plus 1filll @ifset publish @flushleft A catalogue record for this book is available from the British Library. First printing, March 2004 (7/3/2004). Published by Network Theory Limited. 15 Royal Park Bristol BS8 3AL United Kingdom Email: info@@network-theory.co.uk ISBN 0-9541617-9-3 Further information about this book is available from @uref{http://www.network-theory.co.uk/gcc/intro/} @end flushleft @vskip 1ex Cover Image: From a layout of a fast, energy-efficient hardware stack.@footnote{``A Fast and Energy-Efficient Stack'' by J.@: Ebergen, D.@: Finchelstein, R.@: Kao, J.@: Lexau and R.@: Hopkins.} Image created with the free Electric VLSI design system by Steven Rubin of Static Free Software (@uref{http://www.staticfreesoft.com/,,www.staticfreesoft.com}). Static Free Software provides support for Electric to the electronics design industry. @vskip 1ex @end ifset Copyright @copyright{} 2004 Network Theory Ltd. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, with the Front-Cover Texts being ``A Network Theory Manual'', and with the Back-Cover Texts as in (a) below. A copy of the license is included in the section entitled ``GNU Free Documentation License''. (a) The Back-Cover Text is: ``The development of this manual was funded entirely by Network Theory Ltd. 
Copies published by Network Theory Ltd raise money for more free documentation.'' The Texinfo source for this manual may be obtained from: @* @code{http://www.network-theory.co.uk/gcc/intro/src/} @end titlepage @ifnottex @node Top @top An Introduction to GCC @end ifnottex @contents @ifinfo This manual provides an introduction to the GNU C and C++ Compilers, @code{gcc} and @code{g++}, which are part of the GNU Compiler Collection (GCC). The development of this manual was funded entirely by Network Theory Ltd. Copies published by Network Theory Ltd raise money for more free documentation. @end ifinfo @menu * Introduction:: * Compiling a C program:: * Compilation options:: * Using the preprocessor:: * Compiling for debugging:: * Compiling with optimization:: * Compiling a C++ program:: * Platform-specific options:: * Troubleshooting:: * Compiler-related tools:: * How the compiler works:: * Examining compiled files:: * Getting help:: * Further reading:: * Acknowledgements:: * Index:: @end menu @ifnotinfo @include rms.texi @end ifnotinfo @node Introduction @chapter Introduction The purpose of this book is to explain the use of the GNU C and C++ compilers, @code{gcc} and @code{g++}. After reading this book you should understand how to compile a program, and how to use basic compiler options for optimization and debugging. This book does not attempt to teach the C or C++ languages themselves, since this material can be found in many other places (@pxref{Further reading}). Experienced programmers who are familiar with other systems, but new to the GNU compilers, can skip the early sections of the chapters ``@cite{Compiling a C program}'', ``@cite{Using the preprocessor}'' and ``@cite{Compiling a C++ program}''. The remaining sections and chapters should provide a good overview of the features of GCC for those who already know how to use other compilers.
@menu * A brief history of GCC:: * Major features of GCC:: * Programming in C and C++:: * Conventions used in this manual:: @end menu @node A brief history of GCC @section A brief history of GCC @cindex @code{gcc}, GNU C Compiler @cindex C, @code{gcc} compiler @cindex history, of GCC @cindex Richard Stallman, principal author of GCC @cindex GNU Project, history of @cindex Free Software Foundation (FSF) The original author of the GNU C Compiler (GCC) is Richard Stallman, the founder of the GNU Project. The GNU project was started in 1984 to create a complete Unix-like operating system as free software, in order to promote freedom and cooperation among computer users and programmers. Every Unix-like operating system needs a C compiler, and as there were no free compilers in existence at that time, the GNU Project had to develop one from scratch. The work was funded by donations from individuals and companies to the Free Software Foundation, a non-profit organization set up to support the work of the GNU Project. The first release of GCC was made in 1987. This was a significant breakthrough, being the first portable ANSI C optimizing compiler released as free software. Since that time GCC has become one of the most important tools in the development of free software. @cindex C++, @code{g++} compiler @cindex @code{g++}, GNU C++ Compiler @cindex EGCS (Experimental GNU Compiler Suite) A major revision of the compiler came with the 2.0 series in 1992, which added the ability to compile C++. In 1997 an experimental branch of the compiler (EGCS) was created, to improve optimization and C++ support. Following this work, EGCS was adopted as the new main-line of GCC development, and these features became widely available in the 3.0 release of GCC in 2001. 
@cindex Fortran, @code{g77} compiler @cindex Objective-C @cindex ADA, @code{gnat} compiler @cindex Java, @code{gcj} compiler @cindex @code{g77}, Fortran compiler @cindex @code{gnat}, GNU ADA compiler @cindex @code{gcj}, GNU Compiler for Java Over time GCC has been extended to support many additional languages, including Fortran, ADA, Java and Objective-C. The acronym GCC is now used to refer to the ``GNU Compiler Collection''. Its development is guided by the @dfn{GCC Steering Committee}, a group composed of representatives from GCC user communities in industry, research and academia. @node Major features of GCC @section Major features of GCC @cindex features, of GCC @cindex major features, of GCC @cindex GNU Compilers, major features This section describes some of the most important features of GCC. First of all, GCC is a portable compiler---it runs on most platforms available today, and can produce output for many types of processors. In addition to the processors used in personal computers, it also supports microcontrollers, DSPs and 64-bit CPUs. @cindex embedded systems, cross-compilation for GCC is not only a native compiler---it can also @dfn{cross-compile} any program, producing executable files for a different system from the one used by GCC itself. This allows software to be compiled for embedded systems which are not capable of running a compiler. GCC is written in C with a strong focus on portability, and can compile itself, so it can be adapted to new systems easily. GCC has multiple language @dfn{frontends}, for parsing different languages. Programs in each language can be compiled, or cross-compiled, for any architecture. For example, an ADA program can be compiled for a microcontroller, or a C program for a supercomputer. GCC has a modular design, allowing support for new languages and architectures to be added. 
Adding a new language front-end to GCC enables the use of that language on any architecture, provided that the necessary run-time facilities (such as libraries) are available. Similarly, adding support for a new architecture makes it available to all languages. Finally, and most importantly, GCC is free software, distributed under the GNU General Public License (GNU GPL).@footnote{For details see the license file @file{COPYING} distributed with GCC.} This means you have the freedom to use and to modify GCC, as with all GNU software. If you need support for a new type of CPU, a new language, or a new feature you can add it yourself, or hire someone to enhance GCC for you. You can hire someone to fix a bug if it is important for your work. Furthermore, you have the freedom to share any enhancements you make to GCC. As a result of this freedom you can also make use of enhancements to GCC developed by others. The many features offered by GCC today show how this freedom to cooperate works to benefit you, and everyone else who uses GCC. @node Programming in C and C++ @section Programming in C and C++ @cindex Lisp, compared with C/C++ @cindex Smalltalk, compared with C/C++ @cindex Scheme, compared with C/C++ @cindex Java, compared with C/C++ @cindex C/C++, risks of using @cindex risks, when using C/C++ C and C++ are languages that allow direct access to the computer's memory. Historically, they have been used for writing low-level systems software, and applications where high-performance or control over resource usage are critical. However, great care is required to ensure that memory is accessed correctly, to avoid corrupting other data-structures. This book describes techniques that will help in detecting potential errors during compilation, but the risk in using languages like C or C++ can never be eliminated. 
In addition to C and C++ the GNU Project also provides other high-level languages, such as GNU Common Lisp (@code{gcl}), GNU Smalltalk (@code{gst}), the GNU Scheme extension language (@code{guile}) and the GNU Compiler for Java (@code{gcj}). These languages do not allow the user to access memory directly, eliminating the possibility of memory access errors. They are a safer alternative to C and C++ for many applications. @node Conventions used in this manual @section Conventions used in this manual @cindex conventions, used in manual @cindex examples, conventions used in @cindex shell prompt @cindex @code{$}, shell prompt This manual contains many examples which can be typed at the keyboard. A command entered at the terminal is shown like this, @example $ @i{command} @end example @noindent followed by its output. For example: @example $ echo "hello world" hello world @end example @noindent @cindex dollar sign @code{$}, shell prompt The first character on the line is the terminal prompt, and should not be typed. The dollar sign @samp{$} is used as the standard prompt in this manual, although some systems may use a different character. When a command in an example is too long to fit in a single line it is wrapped and then indented on subsequent lines, like this: @example $ echo "an example of a line which is too long to fit in this manual" @end example @noindent When entered at the keyboard, the entire command should be typed on a single line. The example source files used in this manual can be downloaded from the publisher's website,@footnote{See @uref{http://www.network-theory.co.uk/gcc/intro/}} or entered by hand using any text editor, such as the standard GNU editor, @code{emacs}. The example compilation commands use @code{gcc} and @code{g++} as the names of the GNU C and C++ compilers, and @code{cc} to refer to other compilers. The example programs should work with any version of GCC. 
Any command-line options which are only available in recent versions of GCC are noted in the text. @cindex shell variables @cindex environment variables The examples assume the use of a GNU operating system---there may be minor differences in the output on other systems. Some non-essential and verbose system-dependent output messages (such as very long system paths) have been edited in the examples for brevity. The commands for setting environment variables use the syntax of the standard GNU shell (@code{bash}), and should work with any version of the Bourne shell. @node Compiling a C program @chapter Compiling a C program @cindex compiling C programs with @code{gcc} This chapter describes how to compile C programs using @code{gcc}. Programs can be compiled from a single source file or from multiple source files, and may use system libraries and header files. @cindex source code @cindex machine code @cindex executable file @cindex binary file, also called executable file Compilation refers to the process of converting a program from the textual @dfn{source code}, in a programming language such as C or C++, into @dfn{machine code}, the sequence of 1's and 0's used to control the central processing unit (CPU) of the computer. This machine code is then stored in a file known as an @dfn{executable file}, sometimes referred to as a @dfn{binary file}. @menu * Compiling a simple C program:: * Finding errors in a simple program:: * Compiling multiple source files:: * Compiling files independently:: * Recompiling and relinking:: * Linking with external libraries:: * Using library header files:: @end menu @node Compiling a simple C program @section Compiling a simple C program @cindex Hello World program, in C @cindex C, compiling with @code{gcc} @cindex simple C program, compiling The classic example program for the C language is @dfn{Hello World}. 
Here is the source code for our version of the program: @example @verbatiminclude hello.c @end example @noindent @cindex @code{.c}, C source file extension @cindex @code{c}, C source file extension @cindex C source file, @code{.c} extension @cindex file extension, @code{.c} source file @cindex extension, @code{.c} source file We will assume that the source code is stored in a file called @file{hello.c}. To compile the file @file{hello.c} with @code{gcc}, use the following command: @example $ gcc -Wall hello.c -o hello @end example @noindent @cindex @code{gcc}, simple example @cindex @option{-o} option, set output filename @cindex @option{o} option, set output filename @cindex output file option, @option{-o} @cindex @code{a.out}, default executable filename @cindex executable, default filename @code{a.out} @cindex default executable filename, @code{a.out} This compiles the source code in @file{hello.c} to machine code and stores it in an executable file @file{hello}. The output file for the machine code is specified using the @option{-o} option. This option is usually given as the last argument on the command line. If it is omitted, the output is written to a default file called @file{a.out}. Note that if a file with the same name as the executable file already exists in the current directory it will be overwritten. @cindex @option{-Wall} option, enable common warnings @cindex @option{Wall} option, enable common warnings @cindex warning options, @option{-Wall} The option @option{-Wall} turns on all the most commonly-used compiler warnings---@strong{it is recommended that you always use this option!} There are many other warning options which will be discussed in later chapters, but @option{-Wall} is the most important. GCC will not produce any warnings unless they are enabled. Compiler warnings are an essential aid in detecting problems when programming in C and C++. 
In this case, the compiler does not produce any warnings with the @option{-Wall} option, since the program is completely valid. Source code which does not produce any warnings is said to @dfn{compile cleanly}. @cindex executable, running @cindex running an executable file, C To run the program, type the path name of the executable like this: @example $ ./hello Hello, world! @end example @noindent This loads the executable file into memory and causes the CPU to begin executing the instructions contained within it. The path @code{./} refers to the current directory, so @code{./hello} loads and runs the executable file @file{hello} located in the current directory. @node Finding errors in a simple program @section Finding errors in a simple program @cindex @code{printf}, example of error in format As mentioned above, compiler warnings are an essential aid when programming in C and C++. To demonstrate this, the program below contains a subtle error: it uses the function @code{printf} incorrectly, by specifying a floating-point format @samp{%f} for an integer value: @example @verbatiminclude bad.c @end example @noindent This error is not obvious at first sight, but can be detected by the compiler if the warning option @option{-Wall} has been enabled. Compiling the program above, @file{bad.c}, with the warning option @option{-Wall} produces the following message: @cindex warning, format with different type arg @cindex format, @samp{different type arg} warning @cindex @code{different type arg}, format warning @example $ gcc -Wall bad.c -o bad bad.c: In function `main': bad.c:6: warning: double format, different type arg (arg 2) @end example @noindent This indicates that a format string has been used incorrectly in the file @file{bad.c} at line 6. The messages produced by GCC always have the form @i{file:line-number:message}. 
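The mismatch reported by the warning can be verified directly. The fragment below is an illustrative sketch rather than part of the book's @file{bad.c} (the helper name @code{format_sum} is invented here); it uses the @samp{%d} conversion, which matches an @code{int} argument:

```c
#include <stdio.h>

/* Format "Two plus two is N" using %d, the conversion that
   matches an int argument.  Writing %f instead, as bad.c does,
   makes printf read the int as a double, which is undefined
   behavior and is exactly what `gcc -Wall' warns about. */
void
format_sum (char *buf, size_t len)
{
  snprintf (buf, len, "Two plus two is %d", 2 + 2);
}
```

With the matching conversion the output is the intended @samp{Two plus two is 4} rather than a spurious floating-point value.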
The compiler distinguishes between @dfn{error messages}, which prevent successful compilation, and @dfn{warning messages} which indicate possible problems (but do not stop the program from compiling). In this case, the correct format specifier would have been @samp{%d} (the allowed format specifiers for @code{printf} can be found in any general book on C, such as the @cite{GNU C Library Reference Manual}, @pxref{Further reading}). Without the warning option @option{-Wall} the program appears to compile cleanly, but produces incorrect results: @cindex bug, example of @example $ gcc bad.c -o bad $ ./bad Two plus two is 2.585495 @r{(incorrect output)} @end example @noindent @cindex C/C++, risks of using, example @cindex risks, example of corrupted output The incorrect format specifier causes the output to be corrupted, because the function @code{printf} is passed an integer instead of a floating-point number. Integers and floating-point numbers are stored in different formats in memory, and generally occupy different numbers of bytes, leading to a spurious result. The actual output shown above may differ, depending on the specific platform and environment. Clearly, it is very dangerous to develop a program without checking for compiler warnings. If there are any functions which are not used correctly they can cause the program to crash, or to produce incorrect results. Turning on the compiler warning option @option{-Wall} will catch many of the commonest errors which occur in C programming. @node Compiling multiple source files @section Compiling multiple source files @cindex multiple files, compiling @cindex compiling multiple files A program can be split up into multiple files. This makes it easier to edit and understand, especially in the case of large programs---it also allows the individual parts to be compiled independently. 
In the following example we will split up the program @dfn{Hello World} into three files: @file{main.c}, @file{hello_fn.c} and the header file @file{hello.h}. Here is the main program @file{main.c}: @example @verbatiminclude main.c @end example @noindent The original call to the @code{printf} system function in the previous program @file{hello.c} has been replaced by a call to a new external function @code{hello}, which we will define in a separate file @file{hello_fn.c}. @cindex declaration, in header file @cindex header file, declarations in @cindex @code{.h}, header file extension @cindex @code{h}, header file extension @cindex header file, @code{.h} extension @cindex file extension, @code{.h} header file @cindex extension, @code{.h} header file The main program also includes the header file @file{hello.h} which will contain the declaration of the function @code{hello}. The declaration is used to ensure that the types of the arguments and return value match up correctly between the function call and the function definition. We no longer need to include the system header file @file{stdio.h} in @file{main.c} to declare the function @code{printf}, since the file @file{main.c} does not call @code{printf} directly. The declaration in @file{hello.h} is a single line specifying the prototype of the function @code{hello}: @example @verbatiminclude hello1.h @end example @noindent The definition of the function @code{hello} itself is contained in the file @file{hello_fn.c}: @example @verbatiminclude hello_fn.c @end example @noindent This function prints the message ``@code{Hello, }@var{name}@code{!}'' using its argument as the value of @var{name}. @cindex @code{#include}, preprocessor directive Incidentally, the difference between the two forms of the include statement @code{#include "@var{FILE}.h"} and @code{#include <@var{FILE}.h>} is that the former searches for @file{@var{FILE}.h} in the current directory before looking in the system header file directories. 
The include statement @code{#include <@var{FILE}.h>} searches the system header files, but does not look in the current directory by default. To compile these source files with @code{gcc}, use the following command: @example $ gcc -Wall main.c hello_fn.c -o newhello @end example @noindent In this case, we use the @option{-o} option to specify a different output file for the executable, @file{newhello}. Note that the header file @file{hello.h} is not specified in the list of files on the command line. The directive @code{#include "hello.h"} in the source files instructs the compiler to include it automatically at the appropriate points. To run the program, type the path name of the executable: @example $ ./newhello Hello, world! @end example @noindent All the parts of the program have been combined into a single executable file, which produces the same result as the executable created from the single source file used earlier. @node Compiling files independently @section Compiling files independently @cindex compiling files independently @cindex independent compilation of files If a program is stored in a single file then any change to an individual function requires the whole program to be recompiled to produce a new executable. The recompilation of large source files can be very time-consuming. @cindex linking, explanation of @cindex object file, explanation of @cindex @code{.o}, object file extension @cindex @code{o}, object file extension @cindex file extension, @code{.o} object file @cindex extension, @code{.o} object file @cindex object file, @code{.o} extension When programs are stored in independent source files, only the files which have changed need to be recompiled after the source code has been modified. In this approach, the source files are compiled separately and then @dfn{linked} together---a two stage process. In the first stage, a file is compiled without creating an executable. 
The result is referred to as an @dfn{object file}, and has the extension @file{.o} when using GCC. In the second stage, the object files are merged together by a separate program called the @dfn{linker}. The linker combines all the object files together to create a single executable. An object file contains machine code where any references to the memory addresses of functions (or variables) in other files are left undefined. This allows source files to be compiled without direct reference to each other. The linker fills in these missing addresses when it produces the executable. @menu * Creating object files from source files:: * Creating executables from object files:: * Link order of object files:: @end menu @node Creating object files from source files @subsection Creating object files from source files @cindex creating object files from source files @cindex @option{-c} option, compile to object file @cindex @option{c} option, compile to object file @cindex compile to object file, @option{-c} option @cindex object file, creating from source using option @option{-c} The command-line option @option{-c} is used to compile a source file to an object file. For example, the following command will compile the source file @file{main.c} to an object file: @example $ gcc -Wall -c main.c @end example @noindent This produces an object file @file{main.o} containing the machine code for the @code{main} function. It contains a reference to the external function @code{hello}, but the corresponding memory address is left undefined in the object file at this stage (it will be filled in later by linking). The corresponding command for compiling the @code{hello} function in the source file @file{hello_fn.c} is: @example $ gcc -Wall -c hello_fn.c @end example @noindent This produces the object file @file{hello_fn.o}. Note that there is no need to use the option @option{-o} to specify the name of the output file in this case. 
When compiling with @option{-c} the compiler automatically creates an object file whose name is the same as the source file, with @file{.o} instead of the original extension. @cindex header file, not compiled There is no need to put the header file @file{hello.h} on the command line, since it is automatically included by the @code{#include} statements in @file{main.c} and @file{hello_fn.c}. @node Creating executables from object files @subsection Creating executables from object files @cindex creating executable files from object files @cindex linking, creating executable files from object files @cindex executable, creating from object files by linking @cindex object files, linking to create executable file The final step in creating an executable file is to use @code{gcc} to link the object files together and fill in the missing addresses of external functions. To link object files together, they are simply listed on the command line: @example $ gcc main.o hello_fn.o -o hello @end example @noindent This is one of the few occasions where there is no need to use the @option{-Wall} warning option, since the individual source files have already been successfully compiled to object code. Once the source files have been compiled, linking is an unambiguous process which either succeeds or fails (it fails only if there are references which cannot be resolved). @cindex linker, initial description To perform the linking step @code{gcc} uses the linker @code{ld}, which is a separate program. On GNU systems the GNU linker, GNU @code{ld}, is used. Other systems may use the GNU linker with GCC, or may have their own linkers. The linker itself will be discussed later (@pxref{How the compiler works}). By running the linker, @code{gcc} creates an executable file from the object files. @c @example @c ld main.o hello.o -o hello @c @end example The resulting executable file can now be run: @example $ ./hello Hello, world! 
@end example @noindent It produces the same output as the version of the program using a single source file in the previous section. @node Link order of object files @subsection Link order of object files @cindex link order, of object files @cindex link order, from left to right @cindex object files, link order @cindex order, of object files in linking On Unix-like systems, the traditional behavior of compilers and linkers is to search for external functions from left to right in the object files specified on the command line. This means that the object file which contains the definition of a function should appear after any files which call that function. In this case, the file @file{hello_fn.o} containing the function @code{hello} should be specified after @file{main.o} itself, since @code{main} calls @code{hello}: @example $ gcc main.o hello_fn.o -o hello @r{(correct order)} @end example @noindent With some compilers or linkers the opposite ordering would result in an error, @cindex error, undefined reference due to order of object files @cindex undefined reference error, due to order of object files @example $ cc hello_fn.o main.o -o hello @r{(incorrect order)} main.o: In function `main': main.o(.text+0xf): undefined reference to `hello' @end example @noindent because there is no object file containing @code{hello} after @file{main.o}. Most current compilers and linkers will search all object files, regardless of order, but since not all compilers do this it is best to follow the convention of ordering object files from left to right. This is worth keeping in mind if you ever encounter unexpected problems with undefined references, and all the necessary object files appear to be present on the command line. 
@node Recompiling and relinking @section Recompiling and relinking @cindex recompiling modified source files @cindex relinking, updated object files @cindex modified source files, recompiling @cindex updated object files, relinking @cindex source files, recompiling @cindex object files, relinking @cindex C programs, recompiling after modification To show how source files can be compiled independently we will edit the main program @file{main.c} and modify it to print a greeting to @code{everyone} instead of @code{world}: @example @verbatiminclude main2.c @end example @noindent The updated file @file{main.c} can now be recompiled with the following command: @example $ gcc -Wall -c main.c @end example @noindent @cindex recompiling @cindex modified source files, recompiling @cindex updated source files, recompiling This produces a new object file @file{main.o}. There is no need to create a new object file for @file{hello_fn.c}, since that file and the related files that it depends on, such as header files, have not changed. @cindex relinking @cindex linking, updated object files The new object file can be relinked with the @code{hello} function to create a new executable file: @example $ gcc main.o hello_fn.o -o hello @end example @noindent The resulting executable @file{hello} now uses the new @code{main} function to produce the following output: @example $ ./hello Hello, everyone! @end example @noindent Note that only the file @file{main.c} has been recompiled, and then relinked with the existing object file for the @code{hello} function. 
If the file @file{hello_fn.c} had been modified instead, we could have recompiled @file{hello_fn.c} to create a new object file @file{hello_fn.o} and relinked this with the existing file @file{main.o}.@footnote{If the prototype of a function has changed, it is necessary to modify and recompile all of the other source files which use it.} In general, linking is faster than compilation---in a large project with many source files, recompiling only those that have been modified can make a significant saving. The process of recompiling only the modified files in a project can be automated using @cite{GNU Make} (@pxref{Further reading}). @node Linking with external libraries @section Linking with external libraries @cindex linking, with external libraries @cindex libraries, linking with @cindex @code{sqrt}, example of linking with @cindex C math library @cindex library, C math library @cindex math library @cindex system libraries @cindex external libraries, linking with A library is a collection of precompiled object files which can be linked into programs. The most common use of libraries is to provide system functions, such as the square root function @code{sqrt} found in the C math library. @cindex libraries, stored in archive files @cindex archive file, explanation of @cindex @code{.a}, archive file extension @cindex @code{a}, archive file extension @cindex file extension, @code{.a} archive file @cindex extension, @code{.a} archive file @cindex archive file, @code{.a} extension @cindex GNU archiver, @code{ar} @cindex @code{ar}, GNU archiver Libraries are typically stored in special @dfn{archive files} with the extension @file{.a}, referred to as @dfn{static libraries}. They are created from object files with a separate tool, the GNU archiver @code{ar}, and used by the linker to resolve references to functions at compile-time. We will see later how to create libraries using the @command{ar} command (@pxref{Compiler-related tools}). 
For simplicity, only static libraries are covered in this section---dynamic linking at runtime using @dfn{shared libraries} will be described in the next chapter. @cindex system libraries, location of @cindex C standard library @cindex C library, standard @cindex standard library, C @cindex library, C standard library The standard system libraries are usually found in the directories @file{/usr/lib} and @file{/lib}.@footnote{On systems supporting both 64 and 32-bit executables the 64-bit versions of the libraries will often be stored in @file{/usr/lib64} and @file{/lib64}, with the 32-bit versions in @file{/usr/lib} and @file{/lib}.} For example, the C math library is typically stored in the file @file{/usr/lib/libm.a} on Unix-like systems. The corresponding prototype declarations for the functions in this library are given in the header file @file{/usr/include/math.h}. The C standard library itself is stored in @file{/usr/lib/libc.a} and contains functions specified in the ANSI/ISO C standard, such as @samp{printf}---this library is linked by default for every C program. Here is an example program which makes a call to the external function @code{sqrt} in the math library @file{libm.a}: @example @verbatiminclude calc.c @end example @noindent Trying to create an executable from this source file alone causes the compiler to give an error at the link stage: @cindex undefined reference, due to missing library @cindex reference, undefined due to missing library @cindex libraries, link error due to undefined reference @example @group $ gcc -Wall calc.c -o calc /tmp/ccbR6Ojm.o: In function `main': /tmp/ccbR6Ojm.o(.text+0x19): undefined reference to `sqrt' @end group @end example @noindent The problem is that the reference to the @code{sqrt} function cannot be resolved without the external math library @file{libm.a}. 
The function @code{sqrt} is not defined in the program or the default library @file{libc.a}, and the compiler does not link to the file @file{libm.a} unless it is explicitly selected. @cindex @file{/tmp} directory, temporary files @cindex temporary files, written to @file{/tmp} @cindex object files, temporary Incidentally, the file mentioned in the error message @file{/tmp/ccbR6Ojm.o} is a temporary object file created by the compiler from @file{calc.c}, in order to carry out the linking process. To enable the compiler to link the @code{sqrt} function to the main program @file{calc.c} we need to supply the library @file{libm.a}. One obvious but cumbersome way to do this is to specify it explicitly on the command line: @example $ gcc -Wall calc.c /usr/lib/libm.a -o calc @end example @noindent The library @file{libm.a} contains object files for all the mathematical functions, such as @code{sin}, @code{cos}, @code{exp}, @code{log} and @code{sqrt}. The linker searches through these to find the object file containing the @code{sqrt} function. Once the object file for the @code{sqrt} function has been found, the main program can be linked and a complete executable produced: @example $ ./calc The square root of 2.0 is 1.414214 @end example @noindent The executable file includes the machine code for the main function and the machine code for the @code{sqrt} function, copied from the corresponding object file in the library @file{libm.a}. @cindex linking, with library using @option{-l} @cindex libraries, linking with using @option{-l} @cindex @option{-l} option, linking with libraries @cindex @option{-lm} option, link with math library @cindex @option{l} option, linking with libraries @cindex math library, linking with @option{-lm} To avoid the need to specify long paths on the command line, the compiler provides a short-cut option @samp{-l} for linking against libraries. 
For example, the following command, @example $ gcc -Wall calc.c -lm -o calc @end example @noindent is equivalent to the original command above using the full library name @file{/usr/lib/libm.a}. In general, the compiler option @option{-l@var{NAME}} will attempt to link object files with a library file @file{lib@var{NAME}.a} in the standard library directories. Additional directories can be specified with command-line options and environment variables, to be discussed shortly. A large program will typically use many @option{-l} options to link libraries such as the math library, graphics libraries and networking libraries. @menu * Link order of libraries:: @end menu @node Link order of libraries @subsection Link order of libraries @cindex libraries, link order @cindex link order, of libraries @cindex link order, from left to right The ordering of libraries on the command line follows the same convention as for object files: they are searched from left to right---a library containing the definition of a function should appear after any source files or object files which use it. This includes libraries specified with the short-cut @option{-l} option, as shown in the following command: @example $ gcc -Wall calc.c -lm -o calc @r{(correct order)} @end example @noindent With some compilers the opposite ordering (placing the @option{-lm} option before the file which uses it) would result in an error, @example $ cc -Wall -lm calc.c -o calc @r{(incorrect order)} main.o: In function `main': main.o(.text+0xf): undefined reference to `sqrt' @end example @noindent @cindex undefined reference error, due to library link order @cindex error, undefined reference due to library link order @cindex linking, undefined reference error due to library link order because there is no library or object file containing @code{sqrt} after @file{calc.c}. The option @option{-lm} should appear after the file @file{calc.c}. 
When several libraries are being used, the same convention should be followed for the libraries themselves. A library which calls an external function defined in another library should appear before the library containing the function. @cindex ordering of libraries @cindex libraries, link order For example, a program @file{data.c} using the GNU Linear Programming library @file{libglpk.a}, which in turn uses the math library @file{libm.a}, should be compiled as, @example $ gcc -Wall data.c -lglpk -lm @end example @noindent since the object files in @file{libglpk.a} use functions defined in @file{libm.a}. @c With some compilers the opposite ordering would result in @c an error, @c @c @example @c $ cc -Wall data.c -lm -lglpk @r{(incorrect order)} @c main.o: In function `main': @c main.o(.text+0xf): undefined reference to `exp' @c @end example @c @noindent @c because there is no library containing mathematical functions used by @c @file{libglpk.a} (such as @code{exp}) after the @option{-lglpk} option. As for object files, most current compilers will search all libraries, regardless of order. However, since not all compilers do this it is best to follow the convention of ordering libraries from left to right. @node Using library header files @section Using library header files @cindex header file, missing @cindex declaration, missing @cindex missing header files @cindex library header files, using When using a library it is essential to include the appropriate header files, in order to declare the function arguments and return values with the correct types. Without declarations, the arguments of a function can be passed with the wrong type, causing corrupted results. The following example shows another program which makes a function call to the C math library. 
In this case, the function @code{pow} is used to compute the cube of two (2 raised to the power of 3): @example @verbatiminclude badpow.c @end example @noindent However, the program contains an error---the @code{#include} statement for @file{math.h} is missing, so the prototype @code{double pow (double x, double y)} given there will not be seen by the compiler. @c Note that in this case the format specifier @c @code{%f} is correct, since @code{x} is a floating point variable. @c @cindex @code{math.h}, header file for mathematical functions Compiling the program without any warning options will produce an executable file which gives incorrect results: @cindex bug, example of @example $ gcc badpow.c -lm $ ./a.out Two cubed is 2.851120 @r{(incorrect result, should be 8)} @end example @noindent @cindex C/C++, risks of using, example The results are corrupted because the arguments and return value of the call to @code{pow} are passed with incorrect types.@footnote{The actual output shown above may differ, depending on the specific platform and environment.} This can be detected by turning on the warning option @option{-Wall}: @example $ gcc -Wall badpow.c -lm badpow.c: In function `main': badpow.c:6: warning: implicit declaration of function `pow' @end example @noindent @cindex @code{implicit declaration of function} warning, due to missing header file @cindex warnings, implicit declaration of function @cindex header file, missing header causes implicit declaration @cindex missing header file, causes implicit declaration @c The error is now detected and can be fixed by adding the line @c @code{#include } to the beginning of the source file. This example shows again the importance of using the warning option @option{-Wall} to detect serious problems that could otherwise easily be overlooked. 
@node Compilation options @chapter Compilation options @cindex compilation, options @cindex options, compilation This chapter describes other commonly-used compiler options available in GCC. These options control features such as the search paths used for locating libraries and include files, the use of additional warnings and diagnostics, preprocessor macros and C language dialects. @menu * Setting search paths:: * Shared libraries and static libraries:: * C language standards:: * Warning options in -Wall:: * Additional warning options:: @end menu @node Setting search paths @section Setting search paths @cindex search paths @cindex paths, search In the last chapter, we saw how to link to a program with functions in the C math library @file{libm.a}, using the short-cut option @option{-lm} and the header file @file{math.h}. A common problem when compiling a program using library header files is the error: @cindex @code{No such file or directory}, header file not found @cindex header file, not found---compilation error @code{no such file or directory} @example @var{FILE.h}: No such file or directory @end example @noindent This occurs if a header file is not present in the standard include file directories used by @code{gcc}. A similar problem can occur for libraries: @cindex @code{cannot find @var{library}}, linker error @cindex linker error, @code{cannot find @var{library}} @example /usr/bin/ld: cannot find @var{library} @end example @noindent This happens if a library used for linking is not present in the standard library directories used by @code{gcc}. 
By default, @code{gcc} searches the following directories for header files: @cindex ld: cannot find library error @cindex link error, cannot find library @cindex default directories, linking and header files @cindex linking, default directories @cindex header file, default directories @example /usr/local/include/ /usr/include/ @end example @noindent and the following directories for libraries: @example /usr/local/lib/ /usr/lib/ @end example @noindent The list of directories for header files is often referred to as the @dfn{include path}, and the list of directories for libraries as the @dfn{library search path} or @dfn{link path}. @cindex libraries, on 64-bit platforms @cindex 64-bit platforms, additional library directories @cindex system libraries, location of The directories on these paths are searched in order, from first to last in the two lists above.@footnote{The default search paths may also include additional system-dependent or site-specific directories, and directories in the GCC installation itself. For example, on 64-bit platforms additional @file{lib64} directories may also be searched by default.} For example, a header file found in @file{/usr/local/include} takes precedence over a file with the same name in @file{/usr/include}. Similarly, a library found in @file{/usr/local/lib} takes precedence over a library with the same name in @file{/usr/lib}. @cindex @option{-L} option, library search path @cindex @option{L} option, library search path @cindex @option{-I} option, include path @cindex @option{I} option, include path @cindex libraries, extending search path with @option{-L} @cindex include path, extending with @option{-I} @cindex header file, include path---extending with @option{-I} When additional libraries are installed in other directories it is necessary to extend the search paths, in order for the libraries to be found. 
The compiler options @option{-I} and @option{-L} add new directories to the beginning of the include path and library search path respectively. @menu * Search path example:: * Environment variables:: * Extended search paths:: @end menu @node Search path example @subsection Search path example @cindex search paths, example @cindex @code{gdbm}, GNU DBM library @cindex key-value pairs, stored with GDBM @cindex DBM file, created with @code{gdbm} The following example program uses a library that might be installed as an additional package on a system---the GNU Database Management Library (GDBM). The GDBM Library stores key-value pairs in a DBM file, a type of data file which allows values to be stored and indexed by a @dfn{key} (an arbitrary sequence of characters). Here is the example program @file{dbmain.c}, which creates a DBM file containing a key @samp{testkey} with the value @samp{testvalue}: @example @verbatiminclude dbmain.c @end example @noindent The program uses the header file @file{gdbm.h} and the library @file{libgdbm.a}. If the library has been installed in the default location of @file{/usr/local/lib}, with the header file in @file{/usr/local/include}, then the program can be compiled with the following simple command: @example $ gcc -Wall dbmain.c -lgdbm @end example @noindent Both these directories are part of the default @code{gcc} include and link paths. However, if GDBM has been installed in a different location, trying to compile the program will give the following error: @cindex @code{No such file or directory}, header file not found @example $ gcc -Wall dbmain.c -lgdbm dbmain.c:1: gdbm.h: No such file or directory @end example @noindent For example, if version 1.8.3 of the GDBM package is installed under the directory @file{/opt/gdbm-1.8.3} the location of the header file would be, @example /opt/gdbm-1.8.3/include/gdbm.h @end example @noindent which is not part of the default @code{gcc} include path. 
Adding the appropriate directory to the include path with the command-line option @option{-I} allows the program to be compiled, but not linked: @cindex @code{cannot find -l@var{library}} error, example of @example $ gcc -Wall -I/opt/gdbm-1.8.3/include dbmain.c -lgdbm /usr/bin/ld: cannot find -lgdbm collect2: ld returned 1 exit status @end example @noindent The directory containing the library is still missing from the link path. @c The location of the library itself would be, @c @example @c /opt/gdbm-1.8.3/lib/libgdbm.a @c @end example @c @noindent It can be added to the link path using the following option: @example -L/opt/gdbm-1.8.3/lib/ @end example @noindent The following command line allows the program to be compiled and linked: @example $ gcc -Wall -I/opt/gdbm-1.8.3/include -L/opt/gdbm-1.8.3/lib dbmain.c -lgdbm @end example @noindent This produces the final executable linked to the GDBM library. Before seeing how to run this executable we will take a brief look at the environment variables that affect the @option{-I} and @option{-L} options. Note that you should never place the absolute paths of header files in @code{#include} statements in your source code, as this will prevent the program from compiling on other systems. The @option{-I} option or the @env{C_INCLUDE_PATH} variable described below should always be used to set the include path for header files. @node Environment variables @subsection Environment variables @cindex environment variables, for default search paths @cindex shell variables The search paths for header files and libraries can also be controlled through environment variables in the shell. These may be set automatically for each session using the appropriate login file, such as @file{.bash_profile}. 
@cindex @code{bash} profile file, login settings @cindex include path, setting with environment variables @cindex @env{C_INCLUDE_PATH} @cindex @env{CPLUS_INCLUDE_PATH} Additional directories can be added to the include path using the environment variable @env{C_INCLUDE_PATH} (for C header files) or @env{CPLUS_INCLUDE_PATH} (for C++ header files). For example, the following commands will add @file{/opt/gdbm-1.8.3/include} to the include path when compiling C programs: @example $ C_INCLUDE_PATH=/opt/gdbm-1.8.3/include $ export C_INCLUDE_PATH @end example @noindent This directory will be searched after any directories specified on the command line with the option @option{-I}, and before the standard default directories @file{/usr/local/include} and @file{/usr/include}. The shell command @code{export} is needed to make the environment variable available to programs outside the shell itself, such as the compiler---it is only needed once for each variable in each shell session, and can also be set in the appropriate login file. Similarly, additional directories can be added to the link path using the environment variable @env{LIBRARY_PATH}. For example, the following commands will add @file{/opt/gdbm-1.8.3/lib} to the link path: @cindex link path, setting with environment variable @example $ LIBRARY_PATH=/opt/gdbm-1.8.3/lib $ export LIBRARY_PATH @end example @noindent This directory will be searched after any directories specified on the command line with the option @option{-L}, and before the standard default directories @file{/usr/local/lib} and @file{/usr/lib}. With the environment variable settings given above the program @file{dbmain.c} can be compiled without the @option{-I} and @option{-L} options, @example $ gcc -Wall dbmain.c -lgdbm @end example @noindent because the default paths now use the directories specified in the environment variables @env{C_INCLUDE_PATH} and @env{LIBRARY_PATH}. 
@c Note @c that these defaults are in addition to the system directories @c @file{/usr/include}, @file{/usr/lib}, @file{/usr/local/include} and @c @file{/usr/local/lib} which are always searched. @node Extended search paths @subsection Extended search paths @cindex multiple directories, on include and link paths @cindex extended search paths, for include and link directories @cindex search paths, extended Following the standard Unix convention for search paths, several directories can be specified together in an environment variable as a colon separated list: @example @var{DIR1}:@var{DIR2}:@var{DIR3}:... @end example @noindent The directories are then searched in order from left to right. A single dot @samp{.} can be used to specify the current directory.@footnote{The current directory can also be specified using an empty path element. For example, @code{:@var{DIR1}:@var{DIR2}} is equivalent to @code{.:@var{DIR1}:@var{DIR2}}.} For example, the following settings create default include and link paths for packages installed in the current directory @file{.} and the @file{include} and @file{lib} directories under @file{/opt/gdbm-1.8.3} and @file{/net} respectively: @example $ C_INCLUDE_PATH=.:/opt/gdbm-1.8.3/include:/net/include $ LIBRARY_PATH=.:/opt/gdbm-1.8.3/lib:/net/lib @end example @noindent To specify multiple search path directories on the command line, the options @option{-I} and @option{-L} can be repeated. For example, the following command, @example $ gcc -I. -I/opt/gdbm-1.8.3/include -I/net/include -L. -L/opt/gdbm-1.8.3/lib -L/net/lib ..... @end example @noindent is equivalent to the environment variable settings given above. 
When environment variables and command-line options are used together the compiler searches the directories in the following order: @enumerate @item command-line options @option{-I} and @option{-L}, from left to right @item directories specified by environment variables, such as @env{C_INCLUDE_PATH} and @env{LIBRARY_PATH} @item default system directories @end enumerate @noindent In day-to-day usage, directories are usually added to the search paths with the options @option{-I} and @option{-L}. @node Shared libraries and static libraries @section Shared libraries and static libraries @cindex shared libraries @cindex static libraries Although the example program above has been successfully compiled and linked, a final step is needed before being able to load and run the executable file. If an attempt is made to start the executable directly, the following error will occur on most systems: @cindex error, while loading shared libraries @cindex @code{cannot open shared object file} error @cindex shared libraries, error while loading @cindex libraries, error while loading shared library @example $ ./a.out ./a.out: error while loading shared libraries: libgdbm.so.3: cannot open shared object file: No such file or directory @end example @noindent This is because the GDBM package provides a @dfn{shared library}. This type of library requires special treatment---it must be loaded from disk before the executable will run. External libraries are usually provided in two forms: @dfn{static libraries} and @dfn{shared libraries}. Static libraries are the @file{.a} files seen earlier. When a program is linked against a static library, the machine code from the object files for any external functions used by the program is copied from the library into the final executable. 
@cindex @code{.so}, shared object file extension @cindex @code{so}, shared object file extension @cindex extension, @code{.so} shared object file @cindex file extension, @code{.so} shared object file @cindex shared object file, @code{.so} extension @cindex dynamically linked library, see shared libraries @cindex DLL (dynamically linked library), see shared libraries Shared libraries are handled with a more advanced form of linking, which makes the executable file smaller. They use the extension @file{.so}, which stands for @dfn{shared object}. @cindex loader function @cindex dynamic loader @cindex linking, dynamic (shared libraries) An executable file linked against a shared library contains only a small table of the functions it requires, instead of the complete machine code from the object files for the external functions. Before the executable file starts running, the machine code for the external functions is copied into memory from the shared library file on disk by the operating system---a process referred to as @dfn{dynamic linking}. @cindex shared libraries, advantages of @cindex disk space, reduced usage by shared libraries Dynamic linking makes executable files smaller and saves disk space, because one copy of a library can be shared between multiple programs. Most operating systems also provide a virtual memory mechanism which allows one copy of a shared library in physical memory to be used by all running programs, saving memory as well as disk space. Furthermore, shared libraries make it possible to update a library without recompiling the programs which use it (provided the interface to the library does not change). Because of these advantages @code{gcc} compiles programs to use shared libraries by default on most systems, if they are available. 
Whenever a static library @file{lib@var{NAME}.a} would be used for linking with the option @option{-l@var{NAME}} the compiler first checks for an alternative shared library with the same name and a @file{.so} extension. In this case, when the compiler searches for the @file{libgdbm} library in the link path, it finds the following two files in the directory @file{/opt/gdbm-1.8.3/lib}: @example $ cd /opt/gdbm-1.8.3/lib $ ls libgdbm.* libgdbm.a libgdbm.so @end example @noindent Consequently, the @file{libgdbm.so} shared object file is used in preference to the @file{libgdbm.a} static library. @cindex @option{-rpath} option, set run-time shared library search path @cindex @option{rpath} option, set run-time shared library search path However, when the executable file is started its loader function must find the shared library in order to load it into memory. By default the loader searches for shared libraries only in a predefined set of system directories, such as @file{/usr/local/lib} and @file{/usr/lib}. If the library is not located in one of these directories it must be added to the load path.@footnote{Note that the directory containing the shared library can, in principle, be stored (``hard-coded'') in the executable itself using the linker option @option{-rpath}, but this is not usually done since it creates problems if the library is moved or the executable is copied to another system.} @cindex @env{LD_LIBRARY_PATH}, shared library load path @cindex shared libraries, setting load path @cindex environment variables @cindex shell variables The simplest way to set the load path is through the environment variable @env{LD_LIBRARY_PATH}. For example, the following commands set the load path to @file{/opt/gdbm-1.8.3/lib} so that @file{libgdbm.so} can be found: @example $ LD_LIBRARY_PATH=/opt/gdbm-1.8.3/lib $ export LD_LIBRARY_PATH $ ./a.out Storing key-value pair... done. 
@end example @noindent The executable now runs successfully, prints its message and creates a DBM file called @file{test} containing the key-value pair @samp{testkey} and @samp{testvalue}. @cindex environment variables, setting permanently @cindex shell variables, setting permanently @cindex login file, setting environment variables in @cindex profile file, setting environment variables in To save typing, the @env{LD_LIBRARY_PATH} environment variable can be set once for each session in the appropriate login file, such as @file{.bash_profile} for the GNU Bash shell. @cindex @code{bash} profile file, login settings Several shared library directories can be placed in the load path, as a colon separated list @code{@var{DIR1}:@var{DIR2}:@var{DIR3}:...:@var{DIRN}}. For example, the following command sets the load path to use the @file{lib} directories under @file{/opt/gdbm-1.8.3} and @file{/opt/gtk-1.4}: @cindex environment variables, extending an existing path @cindex paths, extending an existing path in an environment variable @example $ LD_LIBRARY_PATH=/opt/gdbm-1.8.3/lib:/opt/gtk-1.4/lib $ export LD_LIBRARY_PATH @end example @noindent If the load path contains existing entries, it can be extended using the syntax @code{LD_LIBRARY_PATH=@var{NEWDIRS}:$LD_LIBRARY_PATH}. For example, the following command adds the directory @file{/opt/gsl-1.5/lib} to the load path shown above: @example $ LD_LIBRARY_PATH=/opt/gsl-1.5/lib:$LD_LIBRARY_PATH $ echo $LD_LIBRARY_PATH /opt/gsl-1.5/lib:/opt/gdbm-1.8.3/lib:/opt/gtk-1.4/lib @end example @noindent It is possible for the system administrator to set the @env{LD_LIBRARY_PATH} variable for all users, by adding it to a default login script, such as @file{/etc/profile}. On GNU systems, a system-wide path can also be defined in the loader configuration file @file{/etc/ld.so.conf}. 
@cindex loader configuration file, @code{ld.so.conf} @cindex @code{ld.so.conf}, loader configuration file @cindex @option{-static} option, force static linking @cindex @option{static} option, force static linking @cindex static linking, forcing with @option{-static} Alternatively, static linking can be forced with the @option{-static} option to @code{gcc} to avoid the use of shared libraries: @example $ gcc -Wall -static -I/opt/gdbm-1.8.3/include/ -L/opt/gdbm-1.8.3/lib/ dbmain.c -lgdbm @end example @noindent This creates an executable linked with the static library @file{libgdbm.a} which can be run without setting the environment variable @env{LD_LIBRARY_PATH} or putting shared libraries in the default directories: @example $ ./a.out Storing key-value pair... done. @end example @noindent As noted earlier, it is also possible to link directly with individual library files by specifying the full path to the library on the command line. For example, the following command will link directly with the static library @file{libgdbm.a}, @example $ gcc -Wall -I/opt/gdbm-1.8.3/include dbmain.c /opt/gdbm-1.8.3/lib/libgdbm.a @end example @noindent and the command below will link with the shared library file @file{libgdbm.so}: @example $ gcc -Wall -I/opt/gdbm-1.8.3/include dbmain.c /opt/gdbm-1.8.3/lib/libgdbm.so @end example @noindent In the latter case it is still necessary to set the library load path when running the executable. 
@node C language standards @section C language standards @cindex C language, dialects of @cindex dialects of C language @cindex ANSI/ISO C, compared with GNU C extensions @cindex ISO C, compared with GNU C extensions @cindex GNU C extensions, compared with ANSI/ISO C @cindex @option{-ansi} option, disable language extensions @cindex @option{ansi} option, disable language extensions @cindex @option{-pedantic} option, conform to the ANSI standard (with @option{-ansi}) @cindex @option{pedantic} option, conform to the ANSI standard (with @option{-ansi}) @cindex @option{-std} option, select specific language standard @cindex @option{std} option, select specific language standard By default, @code{gcc} compiles programs using the GNU dialect of the C language, referred to as @dfn{GNU C}. This dialect incorporates the official ANSI/ISO standard for the C language with several useful GNU extensions, such as nested functions and variable-size arrays. Most ANSI/ISO programs will compile under GNU C without changes. There are several options which control the dialect of C used by @code{gcc}. The most commonly-used options are @option{-ansi} and @option{-pedantic}. The specific dialects of the C language for each standard can also be selected with the @option{-std} option. @menu * ANSI/ISO:: * Strict ANSI/ISO:: * Selecting specific standards:: @end menu @node ANSI/ISO @subsection ANSI/ISO @cindex ANSI/ISO C, controlled with @option{-ansi} option @cindex ISO C, controlled with @option{-ansi} option Occasionally a valid ANSI/ISO program may be incompatible with the extensions in GNU C. To deal with this situation, the compiler option @option{-ansi} disables those GNU extensions which conflict with the ANSI/ISO standard. On systems using the GNU C Library (@code{glibc}) it also disables extensions to the C standard library. This allows programs written for ANSI/ISO C to be compiled without any unwanted effects from GNU extensions. 
For example, here is a valid ANSI/ISO C program which uses a variable called @code{asm}: @example @verbatiminclude ansi.c @end example @noindent The variable name @code{asm} is valid under the ANSI/ISO standard, but this program will not compile in GNU C because @code{asm} is a GNU C keyword extension (it allows native assembly instructions to be used in C functions). Consequently, it cannot be used as a variable name without giving a compilation error: @cindex keywords, additional in GNU C @cindex @code{parse error}, due to language extensions @example $ gcc -Wall ansi.c ansi.c: In function `main': ansi.c:6: parse error before `asm' ansi.c:7: parse error before `asm' @end example @noindent In contrast, using the @option{-ansi} option disables the @code{asm} keyword extension, and allows the program above to be compiled correctly: @example $ gcc -Wall -ansi ansi.c $ ./a.out the string asm is '6502' @end example @noindent For reference, the non-standard keywords and macros defined by the GNU C extensions are @code{asm}, @code{inline}, @code{typeof}, @code{unix} and @code{vax}. More details can be found in the GCC Reference Manual ``@cite{Using GCC}'' (@pxref{Further reading}). @cindex @code{asm}, GNU C extension keyword @cindex @code{typeof}, GNU C extension keyword @cindex @code{unix}, GNU C extension keyword @cindex @code{vax}, GNU C extension keyword The next example shows the effect of the @option{-ansi} option on systems using the GNU C Library, such as GNU/Linux systems. The program below prints the value of pi, @math{\pi=3.14159...}, from the preprocessor definition @code{M_PI} in the header file @file{math.h}: @example @verbatiminclude pi.c @end example @noindent The constant @code{M_PI} is not part of the ANSI/ISO C standard library (it comes from the BSD version of Unix). 
In this case, the program will not compile with the @option{-ansi} option: @cindex @code{undeclared identifier} error for C library, when using @option{-ansi} option @example $ gcc -Wall -ansi pi.c pi.c: In function `main': pi.c:7: `M_PI' undeclared (first use in this function) pi.c:7: (Each undeclared identifier is reported only once pi.c:7: for each function it appears in.) @end example @noindent The program can be compiled without the @option{-ansi} option. In this case both the language and library extensions are enabled by default: @example $ gcc -Wall pi.c $ ./a.out the value of pi is 3.141593 @end example @noindent It is also possible to compile the program using ANSI/ISO C, by enabling only the extensions in the GNU C Library itself. This can be achieved by defining special macros, such as @code{_GNU_SOURCE}, which enable extensions in the GNU C Library:@footnote{The @option{-D} option for defining macros will be explained in detail in the next chapter.} @cindex @code{_GNU_SOURCE} macro, enables extensions to GNU C Library @cindex @code{GNU_SOURCE} macro (@code{_GNU_SOURCE}), enables extensions to GNU C Library @example $ gcc -Wall -ansi -D_GNU_SOURCE pi.c $ ./a.out the value of pi is 3.141593 @end example @noindent @cindex feature test macros, GNU C Library @cindex GNU C Library, feature test macros @cindex POSIX extensions, GNU C Library @cindex BSD extensions, GNU C Library @cindex XOPEN extensions, GNU C Library @cindex SVID extensions, GNU C Library The GNU C Library provides a number of these macros (referred to as @dfn{feature test macros}) which allow control over the support for POSIX extensions (@w{@code{_POSIX_C_SOURCE}}), BSD extensions (@w{@code{_BSD_SOURCE}}), SVID extensions (@w{@code{_SVID_SOURCE}}), XOPEN extensions (@w{@code{_XOPEN_SOURCE}}) and GNU extensions (@w{@code{_GNU_SOURCE}}). 
The @w{@code{_GNU_SOURCE}} macro enables all the extensions together, with the POSIX extensions taking precedence over the others in cases where they conflict. Further information about feature test macros can be found in the @cite{GNU C Library Reference Manual}, @pxref{Further reading}. @node Strict ANSI/ISO @subsection Strict ANSI/ISO @cindex @option{pedantic} option, ANSI/ISO C @cindex ANSI/ISO C, pedantic diagnostics option @cindex strict ANSI/ISO C, @option{-pedantic} option The command-line option @option{-pedantic} in combination with @option{-ansi} will cause @code{gcc} to reject all GNU C extensions, not just those that are incompatible with the ANSI/ISO standard. This helps you to write portable programs which follow the ANSI/ISO standard. Here is a program which uses variable-size arrays, a GNU C extension. The array @code{x[n]} is declared with a length specified by the integer variable @code{n}. @cindex variable-size arrays in GNU C @cindex arrays, variable-size in GNU C @example @verbatiminclude gnuarray.c @end example @noindent This program will compile with @option{-ansi}, because support for variable length arrays does not interfere with the compilation of valid ANSI/ISO programs---it is a backwards-compatible extension: @example $ gcc -Wall -ansi gnuarray.c @end example @noindent However, compiling with @option{-ansi -pedantic} reports warnings about violations of the ANSI/ISO standard: @cindex variable-size array, forbidden in ANSI/ISO C @example $ gcc -Wall -ansi -pedantic gnuarray.c gnuarray.c: In function `main': gnuarray.c:5: warning: ISO C90 forbids variable-size array `x' @end example @noindent Note that an absence of warnings from @option{-ansi -pedantic} does not guarantee that a program strictly conforms to the ANSI/ISO standard. The standard itself specifies only a limited set of circumstances that should generate diagnostics, and these are what @option{-ansi -pedantic} reports. 
@node Selecting specific standards @subsection Selecting specific standards @cindex @option{-std} option, select specific language standard @cindex @option{std} option, select specific language standard @cindex @code{c89}/@code{c99}, selected with @option{-std} @cindex @code{gnu89}/@code{gnu99}, selected with @option{-std} @cindex @code{iso9899:1990}/@code{iso9899:1999}, selected with @option{-std} @cindex selecting specific language standards, with @option{-std} @cindex language standards, selecting with @option{-std} The specific language standard used by GCC can be controlled with the @option{-std} option. The following C language standards are supported: @table @asis @item @option{-std=c89} or @option{-std=iso9899:1990} The original ANSI/ISO C language standard (ANSI X3.159-1989, ISO/IEC 9899:1990). GCC incorporates the corrections in the two ISO Technical Corrigenda to the original standard. @item @option{-std=iso9899:199409} The ISO C language standard with ISO Amendment 1, published in 1994. This amendment was mainly concerned with internationalization, such as adding support for multibyte characters to the C library. @item @option{-std=c99} or @option{-std=iso9899:1999} The revised ISO C language standard, published in 1999 (ISO/IEC 9899:1999). @end table @noindent The C language standards with GNU extensions can be selected with the options @option{-std=gnu89} and @option{-std=gnu99}. @node Warning options in -Wall @section Warning options in @code{-Wall} @cindex warning options, in detail As described earlier (@pxref{Compiling a simple C program}), the warning option @option{-Wall} enables warnings for many common errors, and should always be used. It combines a large number of other, more specific, warning options which can also be selected individually. 
Here is a summary of these options:

@table @asis
@item @option{-Wcomment} @r{(included in @option{-Wall})}
@cindex comments, nested
@cindex nested comments, warning of
@cindex @option{-Wcomment} option, warn about nested comments
@cindex @option{Wcomment} option, warn about nested comments
@cindex @option{comment} warning option, warn about nested comments
This option warns about nested comments. Nested comments typically arise when a section of code containing comments is later @dfn{commented out}:

@example
/* commented out
double x = 1.23 ; /* x-position */
*/
@end example

@noindent
Nested comments can be a source of confusion---the safe way to ``comment out'' a section of code containing comments is to surround it with the preprocessor directive @code{#if 0 ... #endif}:
@cindex @code{#if}, preprocessor directive

@example
/* commented out */
#if 0
double x = 1.23 ; /* x-position */
#endif
@end example

@item @option{-Wformat} @r{(included in @option{-Wall})}
@cindex @option{-Wformat} option, warn about incorrect format strings
@cindex format strings, incorrect usage warning
@cindex @code{printf}, incorrect usage warning
@cindex @code{scanf}, incorrect usage warning
This option warns about the incorrect use of format strings in functions such as @code{printf} and @code{scanf}, where the format specifier does not agree with the type of the corresponding function argument.

@item @option{-Wunused} @r{(included in @option{-Wall})}
@cindex unused variable warning, @option{-Wunused}
@cindex @option{-Wunused} option, unused variable warning
@cindex @option{Wunused} option, unused variable warning
This option warns about unused variables. When a variable is declared but not used this can be the result of another variable being accidentally substituted in its place. If the variable is genuinely not needed it can be removed from the source code.
@item @option{-Wimplicit} @r{(included in @option{-Wall})} @cindex @option{-Wimplicit} option, warn about missing declarations @cindex @option{Wimplicit} option, warn about missing declarations @cindex implicit declaration warning @cindex missing prototypes warning @cindex prototypes, missing This option warns about any functions that are used without being declared. The most common reason for a function to be used without being declared is forgetting to include a header file. @item @option{-Wreturn-type} @r{(included in @option{-Wall})} @cindex return type, invalid @cindex empty @code{return}, incorrect use of @cindex void @code{return}, incorrect use of @cindex @option{-Wreturn-type} option, warn about incorrect return types @cindex @option{Wreturn-type} option, warn about incorrect return types This option warns about functions that are defined without a return type but not declared @code{void}. It also catches empty @code{return} statements in functions that are not declared @code{void}. For example, the following program does not use an explicit return value: @example @verbatiminclude main4.c @end example @noindent The lack of a return value in the code above could be the result of an accidental omission by the programmer---the value returned by the main function is actually the return value of the @code{printf} function (the number of characters printed). To avoid ambiguity, it is preferable to use an explicit value in the return statement, either as a variable or a constant, such as @code{return 0}. @end table The complete set of warning options included in @option{-Wall} can be found in the GCC Reference Manual ``@cite{Using GCC}'' (@pxref{Further reading}). The options included in @option{-Wall} have the common characteristic that they report constructions which are always wrong, or can easily be rewritten in an unambiguously correct way. 
This is why they are so useful---any warning produced by @option{-Wall} can be taken as an indication of a potentially serious problem. @node Additional warning options @section Additional warning options @cindex additional warning options @cindex warning options, additional GCC provides many other warning options that are not included in @option{-Wall}, but are often useful. Typically these produce warnings for source code which may be technically valid but is very likely to cause problems. The criteria for these options are based on experience of common errors---they are not included in @option{-Wall} because they only indicate possibly problematic or ``suspicious'' code. Since these warnings can be issued for valid code it is not necessary to compile with them all the time. It is more appropriate to use them periodically and review the results, checking for anything unexpected, or to enable them for some programs or files. @table @asis @item @option{-W} @cindex common errors, not included with @option{-Wall} @cindex @option{-W} option, enable additional warnings @cindex @option{W} option, enable additional warnings @cindex warnings, additional with @option{-W} @cindex warning option, @option{-W} additional warnings This is a general option similar to @option{-Wall} which warns about a selection of common programming errors, such as functions which can return without a value (also known as ``falling off the end of the function body''), and comparisons between signed and unsigned values. For example, the following function tests whether an unsigned integer is negative (which is impossible, of course): @example @verbatiminclude w.c @end example @noindent Compiling this function with @option{-Wall} does not produce a warning, @example $ gcc -Wall -c w.c @end example @noindent but does give a warning with @option{-W}: @cindex @code{comparison of ... 
expression always true/false} warning, example of @example $ gcc -W -c w.c w.c: In function `foo': w.c:4: warning: comparison of unsigned expression < 0 is always false @end example @noindent In practice, the options @option{-W} and @option{-Wall} are normally used together. @item @option{-Wconversion} @cindex type conversions, warning of @cindex conversions between types, warning of @cindex unsigned variable converted to signed, warning of @cindex signed variable converted to unsigned, warning of @cindex @option{-Wconversion} option, warn about type conversions @cindex @option{Wconversion} option, warn about type conversions This option warns about implicit type conversions that could cause unexpected results. For example, the assignment of a negative value to an unsigned variable, as in the following code, @example unsigned int x = -1; @end example @noindent @cindex casts, used to avoid conversion warnings @cindex unsigned integer, casting @cindex signed integer, casting is technically allowed by the ANSI/ISO C standard (with the negative integer being converted to a positive integer, according to the machine representation) but could be a simple programming error. If you need to perform such a conversion you can use an explicit cast, such as @code{((unsigned int) -1)}, to avoid any warnings from this option. On two's-complement machines the result of the cast gives the maximum number that can be represented by an unsigned integer. @item @option{-Wshadow} @cindex shadowing of variables @cindex variable shadowing @cindex @option{-Wshadow} option, warn about shadowed variables @cindex @option{Wshadow} option, warn about shadowed variables This option warns about the redeclaration of a variable name in a scope where it has already been declared. This is referred to as variable @dfn{shadowing}, and causes confusion about which occurrence of the variable corresponds to which value. 
The following function declares a local variable @code{y} that shadows the declaration in the body of the function: @example @verbatiminclude shadow.c @end example @noindent This is valid ANSI/ISO C, where the return value is 1. The shadowing of the variable @code{y} might make it seem (incorrectly) that the return value is @code{x}, when looking at the line @code{y = x} (especially in a large and complicated function). Shadowing can also occur for function names. For example, the following program attempts to define a variable @code{sin} which shadows the standard function @code{sin(x)}. @example @verbatiminclude shadow2.c @end example @noindent This error will be detected by the @option{-Wshadow} option. @item @option{-Wcast-qual} @cindex qualifiers, warning about overriding by casts @cindex @code{const}, warning about overriding by casts @cindex @option{-Wcast-qual} option, warn about casts removing qualifiers @cindex @option{Wcast-qual} option, warn about casts removing qualifiers This option warns about pointers that are cast to remove a type qualifier, such as @code{const}. For example, the following function discards the @code{const} qualifier from its input argument, allowing it to be overwritten: @example @verbatiminclude castqual.c @end example @noindent The modification of the original contents of @code{str} is a violation of its @code{const} property. This option will warn about the improper cast of the variable @code{str} which allows the string to be modified. @item @option{-Wwrite-strings} @cindex writable string constants, disabling @cindex constant strings, compile-time warnings @cindex @option{-Wwrite-strings} option, warning for modified string constants @cindex @option{Wwrite-strings} option, warning for modified string constants This option implicitly gives all string constants defined in the program a @code{const} qualifier, causing a compile-time warning if there is an attempt to overwrite them. 
The result of modifying a string constant is not defined by the ANSI/ISO standard, and the use of writable string constants is deprecated in GCC. @cindex K&R dialect of C, warnings of different behavior @cindex Traditional C (K&R), warnings of different behavior @cindex @option{-Wtraditional} option, warn about traditional C @cindex @option{Wtraditional} option, warn about traditional C @item @option{-Wtraditional} This option warns about parts of the code which would be interpreted differently by an ANSI/ISO compiler and a ``traditional'' pre-ANSI compiler.@footnote{The traditional form of the C language was described in the original C reference manual ``@cite{The C Programming Language (First Edition)}'' by Kernighan and Ritchie.} When maintaining legacy software it may be necessary to investigate whether the traditional or ANSI/ISO interpretation was intended in the original code for warnings generated by this option. @end table @noindent @cindex warnings, promoting to errors @cindex compilation, stopping on warning @cindex @option{-Werror} option, convert warnings to errors @cindex @option{Werror} option, convert warnings to errors The options above produce diagnostic warning messages, but allow the compilation to continue and produce an object file or executable. For large programs it can be desirable to catch all the warnings by stopping the compilation whenever a warning is generated. The @option{-Werror} option changes the default behavior by converting warnings into errors, stopping the compilation whenever a warning occurs. @node Using the preprocessor @chapter Using the preprocessor @cindex @code{cpp}, C preprocessor @cindex preprocessor, using This chapter describes the use of the GNU C preprocessor @code{cpp}, which is part of the GCC package. The preprocessor expands macros in source files before they are compiled. 
It is automatically called whenever GCC processes a C or C++ program.@footnote{In recent versions of GCC the preprocessor is integrated into the compiler, although a separate @command{cpp} command is also provided.} @menu * Defining macros:: * Macros with values:: * Preprocessing source files:: @end menu @node Defining macros @section Defining macros @cindex defining macros @cindex macros, defining in preprocessor @cindex @code{#define}, preprocessor directive @cindex @code{#ifdef}, preprocessor directive The following program demonstrates the most common use of the C preprocessor. It uses the preprocessor conditional @code{#ifdef} to check whether a macro is defined. When the macro is defined, the preprocessor includes the corresponding code up to the closing @code{#endif} command. In this example, the macro which is tested is called @code{TEST}, and the conditional part of the source code is a @code{printf} statement which prints the message ``@code{Test mode}'': @example @verbatiminclude dtest.c @end example @noindent The @code{gcc} option @option{-D@var{NAME}} defines a preprocessor macro @code{NAME} from the command line. If the program above is compiled with the command-line option @option{-DTEST}, the macro @code{TEST} will be defined and the resulting executable will print both messages: @cindex @option{-D} option, define macro @cindex @option{D} option, define macro @example $ gcc -Wall -DTEST dtest.c $ ./a.out Test mode Running... @end example @noindent If the same program is compiled without the @option{-D} option then the ``@code{Test mode}'' message is omitted from the source code after preprocessing, and the final executable does not include the code for it: @example $ gcc -Wall dtest.c $ ./a.out Running... @end example @noindent @cindex namespace, reserved prefix for preprocessor Macros are generally undefined, unless specified on the command line with the option @option{-D}, or in a source file (or library header file) with @code{#define}. 
Some macros are automatically defined by the compiler---these typically use a reserved namespace beginning with a double-underscore prefix @samp{__}. The complete set of predefined macros can be listed by running the GNU preprocessor @code{cpp} with the option @option{-dM} on an empty file: @cindex predefined macros @cindex macros, predefined @cindex @option{-dM} option, list predefined macros @cindex @option{dM} option, list predefined macros @example $ cpp -dM /dev/null #define __i386__ 1 #define __i386 1 #define i386 1 #define __unix 1 #define __unix__ 1 #define __ELF__ 1 #define unix 1 ....... @end example @noindent Note that this list includes a small number of system-specific macros defined by @code{gcc} which do not use the double-underscore prefix. These non-standard macros can be disabled with the @option{-ansi} option of @code{gcc}. @cindex system-specific predefined macros @node Macros with values @section Macros with values @cindex value, of macro @cindex macros, defined with value In addition to being defined, a macro can also be given a concrete value. This value is inserted into the source code at each point where the macro occurs. The following program uses a macro @code{NUM} to represent a number which will be printed: @example @verbatiminclude dtestval.c @end example @noindent Note that macros are not expanded inside strings---only the occurrence of @code{NUM} outside the string is substituted by the preprocessor. To define a macro with a value, the @option{-D} command-line option can be used in the form @option{-D@var{NAME}=@var{VALUE}}. For example, the following command line defines @code{NUM} to be 100 when compiling the program above: @example $ gcc -Wall -DNUM=100 dtestval.c $ ./a.out Value of NUM is 100 @end example @noindent This example uses a number, but a macro can take values of any form. Whatever the value of the macro is, it is inserted directly into the source code at the point where the macro name occurs.
For example, the following definition expands the occurrences of @code{NUM} to @code{2+2} during preprocessing: @example $ gcc -Wall -DNUM="2+2" dtestval.c $ ./a.out Value of NUM is 4 @end example @noindent After the preprocessor has made the substitution @code{NUM @expansion{} 2+2} this is equivalent to compiling the following program: @example @verbatiminclude dtestval2.c @end example @noindent Note that it is a good idea to surround macros by parentheses whenever they are part of an expression. For example, the following program uses parentheses to ensure the correct precedence for the multiplication @code{10*NUM}: @cindex precedence, when using preprocessor @example @verbatiminclude dtestval3.c @end example @noindent With these parentheses, it produces the expected result when compiled with the same command line as above: @example $ gcc -Wall -DNUM="2+2" dtestmul10.c $ ./a.out Ten times NUM is 40 @end example @noindent Without parentheses, the program would produce the value @code{22} from the literal form of the expression @code{10*2+2 = 22}, instead of the desired value @code{10*(2+2) = 40}. When a macro is defined with @option{-D} alone, @code{gcc} uses a default value of @code{1}. For example, compiling the original test program with the option @option{-DNUM} generates an executable which produces the following output: @cindex default value, of macro defined with @option{-D} @cindex macros, default value of @cindex preprocessor macros, default value of @example $ gcc -Wall -DNUM dtestval.c $ ./a.out Value of NUM is 1 @end example @noindent @cindex quotes, for defining empty macro A macro can be defined to an empty value using quotes on the command line, @code{-D@var{NAME}=""}. Such a macro is still treated as @i{defined} by conditionals such as @code{#ifdef}, but expands to nothing.
@cindex empty macro, compared with undefined macro @cindex undefined macro, compared with empty macro A macro containing quotes can be defined using shell-escaped quote characters. For example, the command-line option @code{-DMESSAGE="\"Hello, World!\""} defines a macro @code{MESSAGE} which expands to the sequence of characters @code{"Hello, World!"}. For an explanation of the different types of quoting and escaping used in the shell see the ``@cite{GNU Bash Reference Manual}'', @ref{Further reading}. @cindex shell quoting @node Preprocessing source files @section Preprocessing source files @cindex @option{-E} option, preprocess source files @cindex @option{E} option, preprocess source files @cindex preprocessing, source files It is possible to see the effect of the preprocessor on source files directly, using the @option{-E} option of @code{gcc}. For example, the file below defines and uses a macro @code{TEST}: @example @verbatiminclude test.c @end example @noindent If this file is called @file{test.c} the effect of the preprocessor can be seen with the following command line: @example $ gcc -E test.c # 1 "test.c" const char str[] = "Hello, World!" ; @end example @noindent The @option{-E} option causes @code{gcc} to run the preprocessor, display the expanded output, and then exit without compiling the resulting source code. The value of the macro @code{TEST} is substituted directly into the output, producing the sequence of characters @code{const char str[] = "Hello, World!" ;}. @cindex line numbers, recorded in preprocessed files The preprocessor also inserts lines recording the source file and line numbers in the form @code{# @var{line-number} "@var{source-file}"}, to aid in debugging and allow the compiler to issue error messages referring to this information. These lines do not affect the program itself. The ability to see the preprocessed source files can be useful for examining the effect of system header files, and finding declarations of system functions. 
The following program includes the header file @file{stdio.h} to obtain the declaration of the function @code{printf}: @example @verbatiminclude hello.c @end example @noindent It is possible to see the declarations from the included header file by preprocessing the file with @code{gcc -E}: @example $ gcc -E hello.c @end example @noindent On a GNU system, this produces output similar to the following: @example # 1 "hello.c" # 1 "/usr/include/stdio.h" 1 3 extern FILE *stdin; extern FILE *stdout; extern FILE *stderr; extern int fprintf (FILE * __stream, const char * __format, ...) ; extern int printf (const char * __format, ...) ; @r{@i{[ ... additional declarations ... ]}} # 1 "hello.c" 2 int main (void) @{ printf ("Hello, world!\n"); return 0; @} @end example @noindent The preprocessed system header files usually generate a lot of output. This can be redirected to a file, or saved more conveniently using the @code{gcc} @option{-save-temps} option: @cindex @option{-save-temps} option, keeps intermediate files @cindex @option{save-temps} option, keeps intermediate files @cindex preprocessed files, keeping @cindex intermediate files, keeping @cindex temporary files, keeping @example $ gcc -c -save-temps hello.c @end example @noindent After running this command, the preprocessed output will be available in the file @file{hello.i}. The @option{-save-temps} option also saves @file{.s} assembly files and @file{.o} object files in addition to preprocessed @file{.i} files. @node Compiling for debugging @chapter Compiling for debugging @cindex debugging, compilation flags @cindex @option{-g} option, enable debugging @cindex @option{g} option, enable debugging @cindex compilation, for debugging Normally, an executable file does not contain any references to the original program source code, such as variable names or line-numbers---the executable file is simply the sequence of machine code instructions produced by the compiler. 
This is insufficient for debugging, since there is no easy way to find the cause of an error if the program crashes. @cindex @code{gdb}, GNU debugger @cindex debugging, with @code{gdb} @cindex symbol table @cindex executable, symbol table stored in GCC provides the @option{-g} @dfn{debug option} to store additional debugging information in object files and executables. This debugging information allows errors to be traced back from a specific machine instruction to the corresponding line in the original source file. It also allows the execution of a program to be traced in a debugger, such as the GNU Debugger @code{gdb} (for more information, see ``@cite{Debugging with GDB: The GNU Source-Level Debugger}'', @ref{Further reading}). Using a debugger also allows the values of variables to be examined while the program is running. The debug option works by storing the names of functions and variables (and all the references to them), with their corresponding source code line-numbers, in a @dfn{symbol table} in object files and executables. @menu * Examining core files:: * Displaying a backtrace:: @end menu @node Examining core files @section Examining core files @cindex core file, examining from program crash @cindex crashes, saved in core file @cindex program crashes, saved in core file @cindex examining core files In addition to allowing a program to be run under the debugger, another helpful application of the @option{-g} option is to find the circumstances of a program crash. @cindex termination, abnormal (@code{core dumped}) When a program exits abnormally the operating system can write out a @dfn{core file}, usually named @file{core}, which contains the in-memory state of the program at the time it crashed. Combined with information from the symbol table produced by @option{-g}, the core file can be used to find the line where the program stopped, and the values of its variables at that point. 
@cindex deployment, options for This is useful both during the development of software, and after deployment---it allows problems to be investigated when a program has crashed ``in the field''. Here is a simple program containing an invalid memory access bug, which we will use to produce a core file: @example @verbatiminclude null.c @end example @noindent @cindex dereferencing, null pointer @cindex null pointer, attempt to dereference @cindex bug, example of The program attempts to dereference a null pointer @code{p}, which is an invalid operation. On most systems, this will cause a crash. @footnote{Historically, a null pointer has typically corresponded to memory location 0, which is usually restricted to the operating system kernel and not accessible to user programs.} In order to be able to find the cause of the crash later, we need to compile the program with the @option{-g} option: @example $ gcc -Wall -g null.c @end example @noindent Note that a null pointer will only cause a problem at run-time, so the option @option{-Wall} does not produce any warnings. Running the executable file on an x86 GNU/Linux system will cause the operating system to terminate the program abnormally: @cindex segmentation fault, error message @example $ ./a.out Segmentation fault (core dumped) @end example @noindent Whenever the error message @samp{core dumped} is displayed, the operating system should produce a file called @file{core} in the current directory.@footnote{Some systems, such as FreeBSD and Solaris, can also be configured to write core files in specific directories, e.g. @file{/var/coredumps/}, using the @code{sysctl} or @code{coreadm} commands.} This core file contains a complete copy of the pages of memory used by the program at the time it was terminated. Incidentally, the term @dfn{segmentation fault} refers to the fact that the program tried to access a restricted memory ``segment'' outside the area of memory which had been allocated to it. 
@cindex core file, not produced @cindex @code{ulimit} command Some systems are configured not to write core files by default, since the files can be large and rapidly fill up the available disk space on a system. In the @cite{GNU Bash} shell the command @code{ulimit -c} controls the maximum size of core files. If the size limit is zero, no core files are produced. The current size limit can be shown by typing the following command: @cindex @code{tcsh}, limit command @example $ ulimit -c 0 @end example @noindent If the result is zero, as shown above, then it can be increased with the following command to allow core files of any size to be written:@footnote{This example uses the @code{ulimit} command in the GNU Bash shell. On other systems the usage of the @code{ulimit} command may vary, or have a different name (the @code{tcsh} shell uses the @code{limit} command instead). The size limit for core files can also be set to a specific value in kilobytes.} @example $ ulimit -c unlimited @end example @noindent Note that this setting only applies to the current shell. To set the limit for future sessions the command should be placed in an appropriate login file, such as @file{.bash_profile} for the GNU Bash shell. @cindex @code{bash} profile file, login settings @cindex core file, debugging with @code{gdb} @cindex @code{gdb}, debugging core file with Core files can be loaded into the GNU Debugger @code{gdb} with the following command: @example $ gdb @var{EXECUTABLE-FILE} @var{CORE-FILE} @end example @noindent Note that both the original executable file and the core file are required for debugging---it is not possible to debug a core file without the corresponding executable. 
In this example, we can load the executable and core file with the command: @example $ gdb a.out core @end example @noindent The debugger immediately begins printing diagnostic information, and shows a listing of the line where the program crashed (line 13): @example $ gdb a.out core Core was generated by `./a.out'. Program terminated with signal 11, Segmentation fault. Reading symbols from /lib/libc.so.6...done. Loaded symbols for /lib/libc.so.6 Reading symbols from /lib/ld-linux.so.2...done. Loaded symbols for /lib/ld-linux.so.2 #0 0x080483ed in a (p=0x0) at null.c:13 13 int y = *p; (gdb) @end example @noindent The final line @code{(gdb)} is the GNU Debugger prompt---it indicates that further commands can be entered at this point. To investigate the cause of the crash, we display the value of the pointer @code{p} using the debugger @code{print} command: @cindex @code{print} debugger command @example (gdb) print p $1 = (int *) 0x0 @end example @noindent This shows that @code{p} is a null pointer (@code{0x0}) of type @samp{int *}, so we know that dereferencing it with the expression @code{*p} in this line has caused the crash. @node Displaying a backtrace @section Displaying a backtrace @cindex displaying a backtrace @cindex backtrace, displaying @cindex stack backtrace, displaying @cindex @code{backtrace}, debugger command The debugger can also show the function calls and arguments up to the current point of execution---this is called a @dfn{stack backtrace} and is displayed with the command @code{backtrace}: @example (gdb) backtrace #0 0x080483ed in a (p=0x0) at null.c:13 #1 0x080483d9 in main () at null.c:7 @end example @noindent In this case, the backtrace shows that the crash at line 13 occurred when the function @code{a()} was called with an argument of @code{p=0x0}, from line 7 in @code{main()}. It is possible to move to different levels in the stack trace, and examine their variables, using the debugger commands @code{up} and @code{down}. 
A complete description of all the commands available in @code{gdb} can be found in the manual ``@cite{Debugging with GDB: The GNU Source-Level Debugger}'' (@pxref{Further reading}). @node Compiling with optimization @chapter Compiling with optimization @cindex optimization, explanation of @cindex compiling with optimization GCC is an @dfn{optimizing} compiler. It provides a wide range of options which aim to increase the speed, or reduce the size, of the executable files it generates. Optimization is a complex process. For each high-level command in the source code there are usually many possible combinations of machine instructions that can be used to achieve the appropriate final result. The compiler must consider these possibilities and choose among them. In general, different code must be generated for different processors, as they use incompatible assembly and machine languages. Each type of processor also has its own characteristics---some CPUs provide a large number of @dfn{registers} for holding intermediate results of calculations, while others must store and fetch intermediate results from memory. Appropriate code must be generated in each case. Furthermore, different amounts of time are needed for different instructions, depending on how they are ordered. GCC takes all these factors into account and tries to produce the fastest executable for a given system when compiling with optimization. @menu * Source-level optimization:: * Speed-space tradeoffs:: * Scheduling:: * Optimization levels:: * Optimization examples:: * Optimization and debugging:: * Optimization and compiler warnings:: @end menu @node Source-level optimization @section Source-level optimization @cindex source-level optimization The first form of optimization used by GCC occurs at the source-code level, and does not require any knowledge of the machine instructions. 
There are many source-level optimization techniques---this section describes two common types: @dfn{common subexpression elimination} and @dfn{function inlining}. @subsection Common subexpression elimination @cindex common subexpression elimination, optimization @cindex optimization, common subexpression elimination @cindex subexpression elimination, optimization @cindex elimination, of common subexpressions One method of source-level optimization which is easy to understand involves computing an expression in the source code with fewer instructions, by reusing already-computed results. For example, the following assignment: @example x = cos(v)*(1+sin(u/2)) + sin(w)*(1-sin(u/2)) @end example @noindent can be rewritten with a temporary variable @code{t} to eliminate an unnecessary extra evaluation of the term @code{sin(u/2)}: @example t = sin(u/2) x = cos(v)*(1+t) + sin(w)*(1-t) @end example @noindent This rewriting is called @dfn{common subexpression elimination} (CSE), and is performed automatically when optimization is turned on.@footnote{Temporary values introduced by the compiler during common subexpression elimination are only used internally, and do not affect real variables. The name of the temporary variable @samp{t} shown above is only used as an illustration.} Common subexpression elimination is powerful, because it simultaneously increases the speed and reduces the size of the code. @subsection Function inlining @cindex function inlining, example of optimization @cindex inlining, example of optimization @comment An example of speed-space tradeoffs occurs with an optimization called @comment @dfn{function inlining}. Another type of source-level optimization, called @dfn{function inlining}, increases the efficiency of frequently-called functions. 
Whenever a function is used, a certain amount of extra time is required for the CPU to carry out the call: it must store the function arguments in the appropriate registers and memory locations, jump to the start of the function (bringing the appropriate virtual memory pages into physical memory or the CPU cache if necessary), begin executing the code, and then return to the original point of execution when the function call is complete. This additional work is referred to as @dfn{function-call overhead}. Function inlining eliminates this overhead by replacing calls to a function by the code of the function itself (known as placing the code @dfn{in-line}). @cindex function-call overhead @cindex overhead, from function call In most cases, function-call overhead is a negligible fraction of the total run-time of a program. It can become significant only when there are functions which contain relatively few instructions, and these functions account for a substantial fraction of the run-time---in this case the overhead then becomes a large proportion of the total run-time. Inlining is always favorable if there is only one point of invocation of a function. It is also unconditionally better if the invocation of a function requires more instructions (memory) than moving the body of the function in-line. This is a common situation for simple accessor functions in C++, which can benefit greatly from inlining. Moreover, inlining may facilitate further optimizations, such as common subexpression elimination, by merging several separate functions into a single large function. The following function @code{sq(x)} is a typical example of a function that would benefit from being inlined. It computes @math{x^2}, the square of its argument @math{x}: @example double sq (double x) @{ return x * x; @} @end example @noindent This function is small, so the overhead of calling it is comparable to the time taken to execute the single multiplication carried out by the function itself. 
If this function is used inside a loop, such as the one below, then the function-call overhead would become substantial: @example for (i = 0; i < 1000000; i++) @{ sum += sq (i + 0.5); @} @end example @noindent Optimization with inlining replaces the inner loop of the program with the body of the function, giving the following code: @example for (i = 0; i < 1000000; i++) @{ double t = (i + 0.5); /* temporary variable */ sum += t * t; @} @end example @noindent Eliminating the function call and performing the multiplication @dfn{in-line} allows the loop to run with maximum efficiency. GCC selects functions for inlining using a number of heuristics, such as the function being suitably small. As an optimization, inlining is carried out only within each object file. The @code{inline} keyword can be used to request explicitly that a specific function should be inlined wherever possible, including its use in other files.@footnote{In this case, the definition of the inline function must be made available to the other files (in a header file, for example).} The GCC Reference Manual ``@cite{Using GCC}'' provides full details of the @code{inline} keyword, and its use with the @code{static} and @code{extern} qualifiers to control the linkage of explicitly inlined functions (@pxref{Further reading}). @node Speed-space tradeoffs @section Speed-space tradeoffs @cindex speed-space tradeoffs, in optimization @cindex optimization, speed-space tradeoffs @cindex tradeoffs, between speed and space in optimization @cindex space vs speed, tradeoff in optimization While some forms of optimization, such as common subexpression elimination, are able to increase the speed and reduce the size of a program simultaneously, other types of optimization produce faster code at the expense of increasing the size of the executable. This choice between speed and memory is referred to as a @dfn{speed-space tradeoff}. 
Optimizations with a speed-space tradeoff can also be used to make an executable smaller, at the expense of making it run slower. @subsection Loop unrolling @cindex loop unrolling, optimization @cindex optimization, loop unrolling @cindex unrolling, of loops (optimization) A prime example of an optimization with a speed-space tradeoff is @dfn{loop unrolling}. This form of optimization increases the speed of loops by eliminating the ``end of loop'' condition on each iteration. For example, the following loop from 0 to 7 tests the condition @code{i < 8} on each iteration: @example for (i = 0; i < 8; i++) @{ y[i] = i; @} @end example @noindent At the end of the loop, this test will have been performed 9 times, and a large fraction of the run time will have been spent checking it. A more efficient way to write the same code is simply to @dfn{unroll the loop} and execute the assignments directly: @example y[0] = 0; y[1] = 1; y[2] = 2; y[3] = 3; y[4] = 4; y[5] = 5; y[6] = 6; y[7] = 7; @end example @noindent This form of the code does not require any tests, and executes at maximum speed. Since each assignment is independent, it also allows the compiler to use parallelism on processors that support it. Loop unrolling is an optimization that increases the speed of the resulting executable but also generally increases its size (unless the loop is very short, with only one or two iterations, for example). Loop unrolling is also possible when the upper bound of the loop is unknown, provided the start and end conditions are handled correctly. 
For example, the same loop with an arbitrary upper bound, @example for (i = 0; i < n; i++) @{ y[i] = i; @} @end example @noindent can be rewritten by the compiler as follows: @example for (i = 0; i < (n % 2); i++) @{ y[i] = i; @} for ( ; i + 1 < n; i += 2) /* no initializer */ @{ y[i] = i; y[i+1] = i+1; @} @end example @noindent The first loop handles the case @code{i = 0} when @code{n} is odd, and the second loop handles all the remaining iterations. Note that the second loop does not use an initializer in the first argument of the @code{for} statement, since it continues where the first loop finishes. The assignments in the second loop can be parallelized, and the overall number of tests is reduced by a factor of 2 (approximately). Higher factors can be achieved by unrolling more assignments inside the loop, at the cost of greater code size. @node Scheduling @section Scheduling @cindex scheduling, stage of optimization @cindex instruction scheduling, optimization @cindex pipelining, explanation of The lowest level of optimization is @dfn{scheduling}, in which the compiler determines the best ordering of individual instructions. Most CPUs allow one or more new instructions to start executing before others have finished. Many CPUs also support @dfn{pipelining}, where multiple instructions execute in parallel on the same CPU. When scheduling is enabled, instructions must be arranged so that their results become available to later instructions at the right time, and to allow for maximum parallel execution. Scheduling improves the speed of an executable without increasing its size, but requires additional memory and time in the compilation process itself (due to its complexity). 
@node Optimization levels @section Optimization levels @cindex optimization, compiling with @option{-O} @cindex optimization, levels of @cindex levels of optimization @cindex @option{O} option, optimization level In order to control compilation-time and compiler memory usage, and the trade-offs between speed and space for the resulting executable, GCC provides a range of general optimization levels, numbered from 0--3, as well as individual options for specific types of optimization. An optimization level is chosen with the command line option @option{-O@var{LEVEL}}, where @code{@var{LEVEL}} is a number from 0 to 3. The effects of the different optimization levels are described below: @table @asis @item @option{-O0} or @r{no @option{-O} option} (default) @cindex @option{-O0} option, optimization level zero @cindex unoptimized code (@option{-O0}) At this optimization level GCC does not perform any optimization and compiles the source code in the most straightforward way possible. Each command in the source code is converted directly to the corresponding instructions in the executable file, without rearrangement. This is the best option to use when debugging a program. The option @option{-O0} is equivalent to not specifying a @option{-O} option. @item @option{-O1} @r{or} @option{-O} @cindex @option{-O1} option, optimization level one This level turns on the most common forms of optimization that do not require any speed-space tradeoffs. With this option the resulting executables should be smaller and faster than with @option{-O0}. The more expensive optimizations, such as instruction scheduling, are not used at this level. Compiling with the option @option{-O1} can often take less time than compiling with @option{-O0}, due to the reduced amounts of data that need to be processed after simple optimizations. 
@item @option{-O2} @cindex @option{-O2} option, optimization level two @cindex deployment, options for This option turns on further optimizations, in addition to those used by @option{-O1}. These additional optimizations include instruction scheduling. Only optimizations that do not require any speed-space tradeoffs are used, so the executable should not increase in size. The compiler will take longer to compile programs and require more memory than with @option{-O1}. This option is generally the best choice for deployment of a program, because it provides maximum optimization without increasing the executable size. It is the default optimization level for releases of GNU packages. @item @option{-O3} @cindex @option{-O3} option, optimization level three This option turns on more expensive optimizations, such as function inlining, in addition to all the optimizations of the lower levels @option{-O2} and @option{-O1}. The @option{-O3} optimization level may increase the speed of the resulting executable, but can also increase its size. Under some circumstances where these optimizations are not favorable, this option might actually make a program slower. @item @option{-funroll-loops} @cindex @option{-funroll-loops} option, optimization by loop unrolling @cindex @option{funroll-loops} option, optimization by loop unrolling @cindex loop unrolling, optimization @cindex optimization, loop unrolling @cindex unrolling, of loops (optimization) This option turns on loop-unrolling, and is independent of the other optimization options. It will increase the size of an executable. Whether or not this option produces a beneficial result has to be examined on a case-by-case basis. @item @option{-Os} @cindex @option{-Os} option, optimization for size @cindex optimization for size, @option{-Os} @cindex size, optimization for, @option{-Os} This option selects optimizations which reduce the size of an executable. 
The aim of this option is to produce the smallest possible executable, for systems constrained by memory or disk space. In some cases a smaller executable will also run faster, due to better cache usage. @end table It is important to remember that the benefit of optimization at the highest levels must be weighed against the cost. The cost of optimization includes greater complexity in debugging, and increased time and memory requirements during compilation. For most purposes it is satisfactory to use @option{-O0} for debugging, and @option{-O2} for development and deployment. @node Optimization examples @section Examples @cindex optimization, example of @cindex examples, of optimization The following program will be used to demonstrate the effects of different optimization levels: @example @verbatiminclude optim.c @end example @noindent @cindex benchmarking, with @code{time} command @cindex @code{time} command, measuring run-time @cindex run-time, measuring with @code{time} command The main program contains a loop calling the @code{powern} function. This function computes the @i{n}-th power of a floating point number by repeated multiplication---it has been chosen because it is suitable for both inlining and loop-unrolling. The run-time of the program can be measured using the @code{time} command in the GNU Bash shell. 
Here are some results for the program above, compiled on a 566@dmn{MHz} Intel Celeron with 16@dmn{KB} L1-cache and 128@dmn{KB} L2-cache, using GCC 3.3.1 on a GNU/Linux system: @example $ gcc -Wall -O0 test.c -lm $ time ./a.out real 0m13.388s user 0m13.370s sys 0m0.010s $ gcc -Wall -O1 test.c -lm $ time ./a.out real 0m10.030s user 0m10.030s sys 0m0.000s $ gcc -Wall -O2 test.c -lm $ time ./a.out real 0m8.388s user 0m8.380s sys 0m0.000s $ gcc -Wall -O3 test.c -lm $ time ./a.out real 0m6.742s user 0m6.730s sys 0m0.000s $ gcc -Wall -O3 -funroll-loops test.c -lm $ time ./a.out real 0m5.412s user 0m5.390s sys 0m0.000s @end example @noindent The relevant entry in the output for comparing the speed of the resulting executables is the @samp{user} time, which gives the actual CPU time spent running the process. The other rows, @samp{real} and @samp{sys}, record the total real time for the process to run (including times where other processes were using the CPU) and the time spent waiting for operating system calls. Although only one run is shown for each case above, the benchmarks were executed several times to confirm the results. From the results it can be seen in this case that increasing the optimization level with @option{-O1}, @option{-O2} and @option{-O3} produces an increasing speedup, relative to the unoptimized code compiled with @option{-O0}. The additional option @option{-funroll-loops} produces a further speedup. The speed of the program is more than doubled overall, when going from unoptimized code to the highest level of optimization. Note that for a small program such as this there can be considerable variation between systems and compiler versions. For example, on a Mobile 2.0@dmn{GHz} Intel Pentium 4M system the trend of the results using the same version of GCC is similar except that the performance with @option{-O2} is slightly worse than with @option{-O1}. 
This illustrates an important point: optimizations may not necessarily make a program faster in every case. @node Optimization and debugging @section Optimization and debugging @cindex debugging, with optimization @cindex optimization, with debugging With GCC it is possible to use optimization in combination with the debugging option @option{-g}. Many other compilers do not allow this. When using debugging and optimization together, the internal rearrangements carried out by the optimizer can make it difficult to see what is going on when examining an optimized program in the debugger. For example, temporary variables are often eliminated, and the ordering of statements may be changed. @cindex deployment, options for However, when a program crashes unexpectedly, any debugging information is better than none---so the use of @option{-g} is recommended for optimized programs, both for development and deployment. The debugging option @option{-g} is enabled by default for releases of GNU packages, together with the optimization option @option{-O2}. @node Optimization and compiler warnings @section Optimization and compiler warnings @cindex optimization, and compiler warnings @cindex warnings, and optimization When optimization is turned on, GCC can produce additional warnings that do not appear when compiling without optimization. @cindex data-flow analysis As part of the optimization process, the compiler examines the use of all variables and their initial values---this is referred to as @dfn{data-flow analysis}. It forms the basis for other optimization strategies, such as instruction scheduling. A side-effect of data-flow analysis is that the compiler can detect the use of uninitialized variables. 
@cindex @option{-Wuninitialized} option, warn about uninitialized variables @cindex @option{Wuninitialized} option, warn about uninitialized variables The @option{-Wuninitialized} option (which is included in @option{-Wall}) warns about variables that are read without being initialized. It only works when the program is compiled with optimization to enable data-flow analysis. The following function contains an example of such a variable: @example @verbatiminclude uninit.c @end example @noindent @cindex C/C++, risks of using, example The function works correctly for most arguments, but has a bug when @code{x} is zero---in this case the return value of the variable @code{s} will be undefined. Compiling the program with the @option{-Wall} option alone does not produce any warnings, because data-flow analysis is not carried out without optimization: @example $ gcc -Wall -c uninit.c @end example @noindent To produce a warning, the program must be compiled with @option{-Wall} and optimization simultaneously. In practice, the optimization level @option{-O2} is needed to give good warnings: @cindex uninitialized variable, warning of @cindex variable, warning of uninitialized use @example $ gcc -Wall -O2 -c uninit.c uninit.c: In function `sign': uninit.c:4: warning: `s' might be used uninitialized in this function @end example @noindent This correctly detects the possibility of the variable @code{s} being used without being defined. Note that while GCC will usually find most uninitialized variables, it does so using heuristics which will occasionally miss some complicated cases or falsely warn about others. In the latter situation, it is often possible to rewrite the relevant lines in a simpler way that removes the warning and improves the readability of the source code. 
@node Compiling a C++ program @chapter Compiling a C++ program @cindex C++, @code{g++} as a true compiler @cindex translators, from C++ to C, compared with @code{g++} This chapter describes how to use GCC to compile programs written in C++, and the command-line options specific to that language. The GNU C++ compiler provided by GCC is a true C++ compiler---it compiles C++ source code directly into assembly language. Some other C++ ``compilers'' are translators which convert C++ programs into C, and then compile the resulting C program using an existing C compiler. A true C++ compiler, such as GCC, is able to provide better support for error reporting, debugging and optimization. @menu * Compiling a simple C++ program:: * Using the C++ standard library:: * Templates:: @end menu @node Compiling a simple C++ program @section Compiling a simple C++ program @cindex C++, compiling a simple program with @code{g++} @cindex @code{g++}, compiling C++ programs @cindex compiling C++ programs with @code{g++} @cindex simple C++ program, compiling with @code{g++} The procedure for compiling a C++ program is the same as for a C program, but uses the command @code{g++} instead of @code{gcc}. Both compilers are part of the GNU Compiler Collection. 
@cindex Hello World program, in C++
To demonstrate the use of @code{g++}, here is a version of the
@dfn{Hello World} program written in C++:

@example
@verbatiminclude hello.cc
@end example

@noindent
The program can be compiled with the following command line:

@example
$ g++ -Wall hello.cc -o hello
@end example

@noindent
@cindex @code{.cc}, C++ file extension
@cindex @code{cc}, C++ file extension
@cindex @code{.cpp}, C++ file extension
@cindex @code{cpp}, C++ file extension
@cindex @code{.cxx}, C++ file extension
@cindex @code{cxx}, C++ file extension
@cindex extension, @code{.C}, C++ file
@cindex extension, @code{.cc}, C++ file
@cindex extension, @code{.cpp}, C++ file
@cindex extension, @code{.cxx}, C++ file
@cindex file extension, @code{.C}, C++ file
@cindex file extension, @code{.cc}, C++ file
@cindex file extension, @code{.cpp}, C++ file
@cindex file extension, @code{.cxx}, C++ file
@cindex C++, file extensions
The C++ frontend of GCC uses many of the same options as the C
compiler @code{gcc}.  It also supports some additional options for
controlling C++ language features, which will be described in this
chapter.  Note that C++ source code should be given one of the valid
C++ file extensions @file{.cc}, @file{.cpp}, @file{.cxx} or @file{.C}
rather than the @file{.c} extension used for C programs.

@cindex @option{-ansi} option, used with @code{g++}
@cindex @option{ansi} option, used with @code{g++}
@cindex ISO C++, controlled with @option{-ansi} option
@cindex running an executable file, C++
The resulting executable can be run in exactly the same way as the C
version, simply by typing its filename:

@example
$ ./hello
Hello, world!
@end example

@noindent
The executable produces the same output as the C version of the
program, using @code{std::cout} instead of the C @code{printf}
function.
All the options used in the @code{gcc} commands in previous chapters apply to @code{g++} without change, as do the procedures for compiling and linking files and libraries (using @code{g++} instead of @code{gcc}, of course). One natural difference is that the @option{-ansi} option requests compliance with the C++ standard, instead of the C standard, when used with @code{g++}. Note that programs using C++ object files must always be linked with @code{g++}, in order to supply the appropriate C++ libraries. Attempting to link a C++ object file with the C compiler @code{gcc} will cause ``undefined reference'' errors for C++ standard library functions: @cindex @code{undefined reference} to C++ function, due to linking with @code{gcc} @example $ g++ -Wall -c hello.cc $ gcc hello.o @r{(should use @code{g++})} hello.o: In function `main': hello.o(.text+0x1b): undefined reference to `std::cout' ..... hello.o(.eh_frame+0x11): undefined reference to `__gxx_personality_v0' @end example @noindent Linking the same object file with @code{g++} supplies all the necessary C++ libraries and will produce a working executable: @example $ g++ hello.o $ ./a.out Hello, world! @end example @noindent @cindex @code{__gxx_personality_v0}, undefined reference error @cindex @code{gxx_personality_v0}, undefined reference error @cindex @code{gcc}, used inconsistently with @code{g++} @cindex undefined reference error, @code{__gxx_personality_v0} A point that sometimes causes confusion is that @code{gcc} will actually compile C++ source code when it detects a C++ file extension, but cannot then link the resulting object files. @example $ gcc -Wall -c hello.cc @r{(succeeds, even for C++)} $ gcc hello.o hello.o: In function `main': hello.o(.text+0x1b): undefined reference to `std::cout' @end example @noindent In order to avoid this problem it is best to use @code{g++} consistently for C++ programs, and @code{gcc} for C programs. 
@node Using the C++ standard library @section Using the C++ standard library @cindex C++, standard library @cindex standard library, C++ @cindex library, C++ standard library An implementation of the C++ standard library is provided as a part of GCC. The following program uses the standard library @code{string} class to reimplement the @dfn{Hello World} program: @example @verbatiminclude hellostr.cc @end example @noindent The program can be compiled and run using the same commands as above: @example $ g++ -Wall hellostr.cc $ ./a.out Hello, World! @end example @noindent @cindex C++, namespace @code{std} @cindex Namespace @code{std} in C++ @cindex @code{std} namespace in C++ @cindex header file, without @code{.h} extension for C++ Note that in accordance with the C++ standard, the header files for the C++ library itself do not use a file extension. The classes in the library are also defined in the @code{std} namespace, so the directive @code{using namespace std} is needed to access them, unless the prefix @code{std::} is used throughout (as in the previous section). @node Templates @section Templates @cindex templates, in C++ @cindex generic programming, in C++ @cindex C++, templates Templates provide the ability to define C++ classes which support @dfn{generic programming} techniques. Templates can be considered as a powerful kind of macro facility. When a templated class or function is used with a specific class or type, such as @code{float} or @code{int}, the corresponding template code is compiled with that type substituted in the appropriate places. 
@menu * Using C++ standard library templates:: * Providing your own templates:: * Explicit template instantiation:: * The export keyword:: @end menu @node Using C++ standard library templates @subsection Using C++ standard library templates @cindex templates, in C++ standard library @cindex Standard Template Library (STL) @cindex C++, standard library templates The C++ standard library @file{libstdc++} supplied with GCC provides a wide range of generic container classes such as lists and queues, in addition to generic algorithms such as sorting. These classes were originally part of the Standard Template Library (STL), which was a separate package, but are now included in the C++ standard library itself. The following program demonstrates the use of the template library by creating a list of strings with the template @code{list}: @example @verbatiminclude string.cc @end example @noindent No special options are needed to use the template classes in the standard library; the command-line options for compiling this program are the same as before: @example $ g++ -Wall string.cc $ ./a.out List size = 2 @end example @noindent @cindex C++, standard library @code{libstdc++} @cindex @code{libstdc++}, C++ standard library Note that the executables created by @code{g++} using the C++ standard library will be linked to the shared library @file{libstdc++}, which is supplied as part of the default GCC installation. There are several versions of this library---if you distribute executables using the C++ standard library you need to ensure that the recipient has a compatible version of @file{libstdc++}, or link your program statically using the command-line option @option{-static}. 
@node Providing your own templates @subsection Providing your own templates @cindex instantiation, of templates in C++ @cindex C++, instantiation of templates @cindex inclusion compilation model, in C++ @cindex compilation, model for templates @cindex templates, inclusion compilation model In addition to the template classes provided by the C++ standard library you can define your own templates. The recommended way to use templates with @code{g++} is to follow the @dfn{inclusion compilation model}, where template definitions are placed in header files. This is the method used by the C++ standard library supplied with GCC itself. The header files can then be included with @samp{#include} in each source file where they are needed. @cindex circular buffer, template example @cindex buffer, template example For example, the following template file creates a simple @code{Buffer} class which represents a circular buffer holding objects of type @code{T}. @example @verbatiminclude buffer.h @end example @noindent @cindex include guards, in header file @cindex header file, with include guards The file contains both the declaration of the class and the definitions of the member functions. This class is only given for demonstration purposes and should not be considered an example of good programming. Note the use of @dfn{include guards}, which test for the presence of the macro @w{@code{BUFFER_H}}, ensuring that the definitions in the header file are only parsed once, if the file is included multiple times in the same context. The program below uses the templated @code{Buffer} class to create a buffer of size 10, storing the floating point values @math{0.25} and @math{1.0} in the buffer: @example @verbatiminclude tprog.cc @end example @noindent The definitions for the template class and its functions are included in the source file for the program with @samp{#include "buffer.h"} before they are used. 
The program can then be compiled using the following command line: @example $ g++ -Wall tprog.cc $ ./a.out stored value = 1.25 @end example @noindent At the points where the template functions are used in the source file, @code{g++} compiles the appropriate definition from the header file and places the compiled function in the corresponding object file. @cindex @code{multiply defined symbol} error, with C++ @cindex linker, GNU compared with other linkers If a template function is used several times in a program it will be stored in more than one object file. The GNU Linker ensures that only one copy is placed in the final executable. Other linkers may report ``@dfn{multiply defined symbol}'' errors when they encounter more than one copy of a template function---a method of working with these linkers is described below. @node Explicit template instantiation @subsection Explicit template instantiation @cindex templates, explicit instantiation @cindex instantiation, explicit vs implicit in C++ @cindex explicit instantiation of templates @cindex @option{-fno-implicit-templates} option, disable implicit instantiation @cindex @code{fno-implicit-templates} option, disable implicit instantiation To achieve complete control over the compilation of templates with @code{g++} it is possible to require explicit instantiation of each occurrence of a template, using the option @option{-fno-implicit-templates}. This method is not needed when using the GNU Linker---it is an alternative to the inclusion compilation model for systems with linkers which cannot eliminate duplicate definitions of template functions in object files. In this approach, template functions are no longer compiled at the point where they are used, as a result of the @option{-fno-implicit-templates} option. Instead, the compiler looks for an explicit instantiation of the template using the @code{template} keyword with a specific type to force its compilation (this is a GNU extension to the standard behavior). 
These instantiations are typically placed in a separate source file, which is then compiled to make an object file containing all the template functions required by a program. This ensures that each template appears in only one object file, and is compatible with linkers which cannot eliminate duplicate definitions in object files. For example, the following file @file{templates.cc} contains an explicit instantiation of the @code{Buffer} class used by the program @file{tprog.cc} given above: @example @verbatiminclude templates.cc @end example @noindent The whole program can be compiled and linked using explicit instantiation with the following commands: @example $ g++ -Wall -fno-implicit-templates -c tprog.cc $ g++ -Wall -fno-implicit-templates -c templates.cc $ g++ tprog.o templates.o $ ./a.out stored value = 1.25 @end example @noindent The object code for all the template functions is contained in the file @file{templates.o}. There is no object code for template functions in @file{tprog.o} when it is compiled with the @option{-fno-implicit-templates} option. If the program is modified to use additional types, then further explicit instantiations can be added to the file @file{templates.cc}. For example, the following code adds instantiations for Buffer objects containing @code{double} and @code{int} values: @example @verbatiminclude templates2.cc @end example @noindent @cindex C++, creating libraries with explicit instantiation @cindex libraries, creating with explicit instantiation in C++ The disadvantage of explicit instantiation is that it is necessary to know which template types are needed by the program. For a complicated program this may be difficult to determine in advance. Any missing template instantiations can be determined at link time, however, and added to the list of explicit instantiations, by noting which functions are undefined. 
Explicit instantiation can also be used to make libraries of precompiled template functions, by creating an object file containing all the required instantiations of a template function (as in the file @file{templates.cc} above). For example, the object file created from the template instantiations above contains the machine code needed for Buffer classes with @samp{float}, @samp{double} and @samp{int} types, and could be distributed in a library. @node The export keyword @subsection The @code{export} keyword @cindex @code{export} keyword, not supported in GCC @cindex templates, @code{export} keyword At the time of writing, GCC does not support the new C++ @code{export} keyword (GCC 3.3.2). This keyword was proposed as a way of separating the interface of templates from their implementation. However it adds its own complexity to the linking process, which can detract from any advantages in practice. The @code{export} keyword is not widely used, and most other compilers do not support it either. The inclusion compilation model described earlier is recommended as the simplest and most portable way to use templates. @node Platform-specific options @chapter Platform-specific options @cindex platform-specific options @cindex machine-specific options @cindex options, platform-specific @cindex @option{-m} option, platform-specific settings @cindex @option{m} option, platform-specific settings GCC provides a range of platform-specific options for different types of CPUs. These options control features such as hardware floating-point modes, and the use of special instructions for different CPUs. They can be selected with the @option{-m} option on the command line, and work with all the GCC language frontends, such as @code{gcc} and @code{g++}. The following sections describe some of the options available for common platforms. A complete list of all platform-specific options can be found in the GCC Reference Manual, ``@cite{Using GCC}'' (@pxref{Further reading}). 
Support for new processors is added to GCC as they become available, therefore some of the options described in this chapter may not be found in older versions of GCC. @menu * Intel and AMD x86 options:: * DEC Alpha options:: * SPARC options:: * POWER/PowerPC options:: * Multi-architecture support:: @end menu @node Intel and AMD x86 options @section Intel and AMD x86 options @cindex Intel x86, platform-specific options @cindex AMD x86, platform-specific options @cindex x86, platform-specific options The features of the widely used Intel and AMD x86 families of processors (386, 486, Pentium, etc) can be controlled with GCC platform-specific options. On these platforms, GCC produces executable code which is compatible with all the processors in the x86 family by default---going all the way back to the 386. However, it is also possible to compile for a specific processor to obtain better performance.@footnote{Also referred to as ``targeting'' a specific processor.} @cindex Pentium, platform-specific options @cindex Athlon, platform-specific options For example, recent versions of GCC have specific support for newer processors such as the Pentium 4 and AMD Athlon. These can be selected with the following option for the Pentium 4, @example $ gcc -Wall -march=pentium4 hello.c @end example @noindent @cindex @option{-march} option, compile for specific CPU @cindex @option{march} option, compile for specific CPU and for the Athlon: @example $ gcc -Wall -march=athlon hello.c @end example @noindent A complete list of supported CPU types can be found in the GCC Reference Manual. Code produced with a specific @option{-march=@var{CPU}} option will be faster but will not run on other processors in the x86 family. If you plan to distribute executable files for general use on Intel and AMD processors they should be compiled without any @option{-march} options. 
As an alternative, the @option{-mcpu=@var{CPU}} option provides a compromise between speed and portability---it generates code that is tuned for a specific processor, in terms of instruction scheduling, but does not use any instructions which are not available on other CPUs in the x86 family. The resulting code will be compatible with all the CPUs, and have a speed advantage on the CPU specified by @option{-mcpu}. The executables generated by @option{-mcpu} cannot achieve the same performance as @option{-march}, but may be more convenient in practice. @cindex 64-bit processor specific options, AMD64 and Intel @cindex AMD64, 64-bit processor specific options AMD has enhanced the 32-bit x86 instruction set to a 64-bit instruction set called x86-64, which is implemented in their AMD64 processors.@footnote{Intel has added support for this instruction set as the ``Intel 64-bit enhancements'' on their Xeon CPUs.} On AMD64 systems GCC generates 64-bit code by default. The option @option{-m32} allows 32-bit code to be generated instead. @cindex @option{-mcmodel} option, for AMD64 @cindex @option{mcmodel} option, for AMD64 The AMD64 processor has several different memory models for programs running in 64-bit mode. The default model is the small code model, which allows code and data up to 2@dmn{GB} in size. The medium code model allows unlimited data sizes and can be selected with @option{-mcmodel=medium}. There is also a large code model, which supports an unlimited code size in addition to unlimited data size. It is not currently implemented in GCC since the medium code model is sufficient for all practical purposes---executables with sizes greater than 2@dmn{GB} are not encountered in practice. @cindex red-zone, on AMD64 @cindex kernel mode, on AMD64 A special kernel code model @option{-mcmodel=kernel} is provided for system-level code, such as the Linux kernel. 
An important point to note is that by default on the AMD64 there is a 128-byte area of memory allocated below the stack pointer for temporary data, referred to as the ``red-zone'', which is not supported by the Linux kernel. Compilation of the Linux kernel on the AMD64 requires the options @option{-mcmodel=kernel -mno-red-zone}. @node DEC Alpha options @section DEC Alpha options @cindex Alpha, platform-specific options @cindex DEC Alpha, platform-specific options The DEC Alpha processor has default settings which maximize floating-point performance, at the expense of full support for IEEE arithmetic features. @cindex IEEE options, on DEC Alpha @cindex @option{-mieee} option, floating-point support on DEC Alpha @cindex @option{mieee} option, floating-point support on DEC Alpha @cindex NaN, not a number, on DEC Alpha @cindex Inf, infinity, on DEC Alpha @cindex denormalized numbers, on DEC Alpha @cindex underflow, on DEC Alpha @cindex gradual underflow, on DEC Alpha @cindex soft underflow, on DEC Alpha @cindex zero, rounding to by underflow, on DEC Alpha Support for infinity arithmetic and gradual underflow (denormalized numbers) is not enabled in the default configuration on the DEC Alpha processor. Operations which produce infinities or underflows will generate floating-point exceptions (also known as @dfn{traps}), and cause the program to terminate, unless the operating system catches and handles the exceptions (which is, in general, inefficient). The IEEE standard specifies that these operations should produce special results to represent the quantities in the IEEE numeric format. In most cases the DEC Alpha default behavior is acceptable, since the majority of programs do not produce infinities or underflows. For applications which require these features, GCC provides the option @option{-mieee} to enable full support for IEEE arithmetic. 
To demonstrate the difference between the two cases the following
program divides 1 by 0:

@cindex division by zero
@cindex zero, division by
@example
@verbatiminclude alpha.c
@end example

@noindent
In IEEE arithmetic the result of 1/0 is @code{inf} (@dfn{Infinity}).
If the program is compiled for the Alpha processor with the default
settings it generates an exception, which terminates the program:

@example
$ gcc -Wall alpha.c
$ ./a.out
Floating point exception @r{(on an Alpha processor)}
@end example

@noindent
@cindex @code{Floating point exception}, on DEC Alpha
Using the @option{-mieee} option ensures full IEEE compliance---the
division 1/0 correctly produces the result @code{inf} and the program
continues executing successfully:

@example
$ gcc -Wall -mieee alpha.c
$ ./a.out
x/y = inf
@end example

@noindent
Note that programs which generate floating-point exceptions run more
slowly when compiled with @option{-mieee}, because the exceptions are
handled in software rather than hardware.

@node SPARC options
@section SPARC options
@cindex Sun SPARC, platform-specific options
@cindex SPARC, platform-specific options
@cindex @option{-mcpu} option, compile for specific CPU
@cindex @option{mcpu} option, compile for specific CPU
On the SPARC range of processors the @option{-mcpu=@var{CPU}} option
generates processor-specific code.  The valid options for
@code{@var{CPU}} are @code{v7}, @code{v8} (SuperSPARC),
@code{Sparclite}, @code{Sparclet} and @code{v9} (UltraSPARC).  Code
produced with a specific @option{-mcpu} option will not run on other
processors in the SPARC family, except where supported by the
backwards-compatibility of the processor itself.
@cindex UltraSPARC, 32-bit mode vs 64-bit mode, @cindex word-size, on UltraSPARC @cindex bits, 32 vs 64 on UltraSPARC @cindex @option{-m32} and @option{-m64} options, compile for 32 or 64-bit environment @cindex @option{m32} and @option{m64} options, compile for 32 or 64-bit environment On 64-bit UltraSPARC systems the options @option{-m32} and @option{-m64} control code generation for 32-bit or 64-bit environments. The 32-bit environment selected by @option{-m32} uses @code{int}, @code{long} and pointer types with a size of 32 bits. The 64-bit environment selected by @option{-m64} uses a 32-bit @code{int} type and 64-bit @code{long} and pointer types. @node POWER/PowerPC options @section POWER/PowerPC options @cindex PowerPC and POWER, platform-specific options @cindex AIX, platform-specific options On systems using the POWER/PowerPC family of processors the option @option{-mcpu=@var{CPU}} selects code generation for specific CPU models. The possible values of @code{@var{CPU}} include @samp{power}, @samp{power2}, @samp{powerpc}, @samp{powerpc64} and @samp{common}, in addition to other more specific model numbers. Code generated with the option @option{-mcpu=common} will run on any of the processors. @cindex Altivec, on PowerPC @cindex @option{-maltivec} option, enables use of Altivec processor on PowerPC @cindex @option{maltivec} option, enables use of Altivec processor on PowerPC The option @option{-maltivec} enables use of the Altivec vector processing instructions, if the appropriate hardware support is available. 
@cindex multiply and add instruction @cindex fused multiply and add instruction @cindex combined multiply and add instruction @cindex @option{-mno-fused-madd} option, on PowerPC @cindex @option{mno-fused-madd} option, on PowerPC The POWER/PowerPC processors include a combined ``multiply and add'' instruction @math{a * x + b}, which performs the two operations simultaneously for speed---this is referred to as a @dfn{fused} multiply and add, and is used by GCC by default. Due to differences in the way intermediate values are rounded, the result of a fused instruction may not be exactly the same as performing the two operations separately. In cases where strict IEEE arithmetic is required, the use of the combined instructions can be disabled with the option @option{-mno-fused-madd}. @cindex TOC overflow error, on AIX @cindex table of contents, overflow error on AIX @cindex overflow error, for TOC on AIX @cindex AIX, TOC overflow error @cindex @option{-mminimal-toc} option, on AIX @cindex @option{mminimal-toc} option, on AIX On AIX systems, the option @option{-mminimal-toc} decreases the number of entries GCC puts in the global @dfn{table of contents} (TOC) in executables, to avoid ``TOC overflow'' errors at link time. @cindex @option{-mxl-call} option, compatibility with IBM XL compilers on AIX @cindex @option{mxl-call} option, compatibility with IBM XL compilers on AIX @cindex IBM XL compilers, compatibility on AIX @cindex XL compilers, compatibility on AIX @cindex AIX, compatibility with IBM XL compilers The option @option{-mxl-call} makes the linking of object files from GCC compatible with those from IBM's XL compilers. @cindex @option{-pthread} option, on AIX @cindex @option{pthread} option, on AIX @cindex threads, on AIX For applications using POSIX threads, AIX always requires the option @option{-pthread} when compiling, even when the program will only run in single-threaded mode. 
@node Multi-architecture support @section Multi-architecture support @cindex multi-architecture support, discussion of @cindex MIPS64, multi-architecture support @cindex Sparc64, multi-architecture support @cindex PowerPC64, multi-architecture support @cindex ARM, multi-architecture support @cindex Thumb, alternative code format on ARM A number of platforms can execute code for more than one architecture. For example, 64-bit platforms such as AMD64, MIPS64, Sparc64, and PowerPC64 support the execution of both 32-bit and 64-bit code. Similarly, ARM processors support both ARM code and a more compact code called ``Thumb''. GCC can be built to support multiple architectures on these platforms. By default, the compiler will generate 64-bit object files, but giving the @option{-m32} option will generate a 32-bit object file for the corresponding architecture.@footnote{The options @option{-maix64} and @option{-maix32} are used on AIX.} @cindex Itanium, multi-architecture support @cindex system libraries, location of Note that support for multiple architectures depends on the corresponding libraries being available. On 64-bit platforms supporting both 64 and 32-bit executables, the 64-bit libraries are often placed in @file{lib64} directories instead of @file{lib} directories, e.g. in @file{/usr/lib64} and @file{/lib64}. The 32-bit libraries are then found in the default @file{lib} directories as on other platforms. This allows both a 32-bit and a 64-bit library with the same name to exist on the same system. Other systems, such as the IA64/Itanium, use the directories @file{/usr/lib} and @file{/lib} for 64-bit libraries. GCC knows about these paths and uses the appropriate path when compiling 64-bit or 32-bit code. @node Troubleshooting @chapter Troubleshooting @cindex troubleshooting options @cindex help options @cindex command-line help option GCC provides several help and diagnostic options to assist in troubleshooting problems with the compilation process. 
All the options described in this chapter work with both @code{gcc} and @code{g++}. @menu * Help for command-line options:: * Version numbers:: * Verbose compilation:: @end menu @node Help for command-line options @section Help for command-line options @cindex @option{--help} option, display command-line options @cindex @option{help} option, display command-line options To obtain a brief reminder of various command-line options, GCC provides a help option which displays a summary of the top-level GCC command-line options: @example $ gcc --help @end example @noindent @cindex verbose help option @cindex @option{-v} option, verbose compilation @cindex @option{v} option, verbose compilation To display a complete list of options for @code{gcc} and its associated programs, such as the GNU Linker and GNU Assembler, use the help option above with the verbose (@option{-v}) option: @example $ gcc -v --help @end example @noindent The complete list of options produced by this command is extremely long---you may wish to page through it using the @code{more} command, or redirect the output to a file for reference: @example $ gcc -v --help 2>&1 | more @end example @node Version numbers @section Version numbers You can find the version number of @code{gcc} using the version option: @cindex version number of GCC, displaying @cindex @option{--version} option, display version number @cindex @option{version} option, display version number @example $ gcc --version gcc (GCC) 3.3.1 @end example @noindent The version number is important when investigating compilation problems, since older versions of GCC may be missing some features that a program uses. The version number has the form @var{major-version.minor-version} or @var{major-version.minor-version.micro-version}, where the additional third ``micro'' version number (as shown above) is used for subsequent bug-fix releases in a release series. 
@cindex major version number, of GCC @cindex minor version number, of GCC @cindex patch level, of GCC More details about the version can be found using @option{-v}: @example $ gcc -v Reading specs from /usr/lib/gcc-lib/i686/3.3.1/specs Configured with: ../configure --prefix=/usr Thread model: posix gcc version 3.3.1 @end example @noindent @cindex configuration files for GCC @cindex @code{specs} directory, compiler configuration files This includes information on the build flags of the compiler itself and the installed configuration file, @file{specs}. @node Verbose compilation @section Verbose compilation @cindex verbose compilation, @option{-v} option The @option{-v} option can also be used to display detailed information about the exact sequence of commands used to compile and link a program. Here is an example which shows the verbose compilation of the @cite{Hello World} program: @example $ gcc -v -Wall hello.c Reading specs from /usr/lib/gcc-lib/i686/3.3.1/specs Configured with: ../configure --prefix=/usr Thread model: posix gcc version 3.3.1 /usr/lib/gcc-lib/i686/3.3.1/cc1 -quiet -v -D__GNUC__=3 -D__GNUC_MINOR__=3 -D__GNUC_PATCHLEVEL__=1 hello.c -quiet -dumpbase hello.c -auxbase hello -Wall -version -o /tmp/cceCee26.s GNU C version 3.3.1 (i686-pc-linux-gnu) compiled by GNU C version 3.3.1 (i686-pc-linux-gnu) GGC heuristics: --param ggc-min-expand=51 --param ggc-min-heapsize=40036 ignoring nonexistent directory "/usr/i686/include" #include "..." search starts here: #include <...> search starts here: /usr/local/include /usr/include /usr/lib/gcc-lib/i686/3.3.1/include /usr/include End of search list. 
as -V -Qy -o /tmp/ccQynbTm.o /tmp/cceCee26.s GNU assembler version 2.12.90.0.1 (i386-linux) using BFD version 2.12.90.0.1 20020307 Debian/GNU Linux /usr/lib/gcc-lib/i686/3.3.1/collect2 --eh-frame-hdr -m elf_i386 -dynamic-linker /lib/ld-linux.so.2 /usr/lib/crt1.o /usr/lib/crti.o /usr/lib/gcc-lib/i686/3.3.1/crtbegin.o -L/usr/lib/gcc-lib/i686/3.3.1 -L/usr/lib/gcc-lib/i686/3.3.1/../../.. /tmp/ccQynbTm.o -lgcc -lgcc_eh -lc -lgcc -lgcc_eh /usr/lib/gcc-lib/i686/3.3.1/crtend.o /usr/lib/crtn.o @end example @noindent The output produced by @option{-v} can be useful whenever there is a problem with the compilation process itself. It displays the full directory paths used to search for header files and libraries, the predefined preprocessor symbols, and the object files and libraries used for linking. @node Compiler-related tools @chapter Compiler-related tools @cindex compiler-related tools @cindex tools, compiler-related This chapter describes a number of tools which are useful in combination with GCC. These include the GNU archiver @code{ar}, for creating libraries, and the GNU profiling and coverage testing programs, @code{gprof} and @code{gcov}. @menu * Creating a library with the GNU archiver:: * Using the profiler gprof:: * Coverage testing with gcov:: @end menu @node Creating a library with the GNU archiver @section Creating a library with the GNU archiver @cindex @code{ar}, GNU archiver @cindex libraries, creating with @code{ar} The GNU archiver @code{ar} combines a collection of object files into a single archive file, also known as a @dfn{library}. An archive file is simply a convenient way of distributing a large number of related object files together (as described earlier in @ref{Linking with external libraries}). To demonstrate the use of the GNU archiver we will create a small library @file{libhello.a} containing two functions @code{hello} and @code{bye}. 
The first object file will be generated from the source code for the @code{hello} function, in the file @file{hello_fn.c} seen earlier: @example @verbatiminclude hello_fn.c @end example @noindent The second object file will be generated from the source file @file{bye_fn.c}, which contains the new function @code{bye}: @example @verbatiminclude bye_fn.c @end example @noindent Both functions use the header file @file{hello.h}, now with a prototype for the function @code{bye()}: @example @verbatiminclude hello.h @end example @noindent The source code can be compiled to the object files @file{hello_fn.o} and @file{bye_fn.o} using the commands: @example $ gcc -Wall -c hello_fn.c $ gcc -Wall -c bye_fn.c @end example @noindent @cindex @option{cr} option, create/replace archive files These object files can be combined into a static library using the following command line: @example $ ar cr libhello.a hello_fn.o bye_fn.o @end example @noindent The option @option{cr} stands for ``create and replace''.@footnote{Note that @code{ar} does not require a prefix @samp{-} for its options.} If the library does not exist, it is first created. If the library already exists, any original files in it with the same names are replaced by the new files specified on the command line. The first argument @file{libhello.a} is the name of the library. The remaining arguments are the names of the object files to be copied into the library. @cindex @option{t} option, archive table of contents @cindex table of contents, in @code{ar} archive The archiver @code{ar} also provides a ``table of contents'' option @option{t} to list the object files in an existing library: @example $ ar t libhello.a hello_fn.o bye_fn.o @end example @noindent Note that when a library is distributed, the header files for the public functions and variables it provides should also be made available, so that the end-user can include them and obtain the correct prototypes. 
We can now write a program using the functions in the newly created library: @example @verbatiminclude main3.c @end example @noindent This file can be compiled with the following command line, as described in @ref{Linking with external libraries}, assuming the library @file{libhello.a} is stored in the current directory: @example $ gcc -Wall main.c libhello.a -o hello @end example @noindent The main program is linked against the object files found in the library file @file{libhello.a} to produce the final executable. The short-cut library linking option @option{-l} can also be used to link the program, without needing to specify the full filename of the library explicitly: @example $ gcc -Wall -L. main.c -lhello -o hello @end example @noindent The option @option{-L.} is needed to add the current directory to the library search path. The resulting executable can be run as usual: @example $ ./hello Hello, everyone! Goodbye! @end example @noindent It displays the output from both the @code{hello} and @code{bye} functions defined in the library. @node Using the profiler gprof @section Using the profiler @code{gprof} @cindex profiling, with @code{gprof} @cindex @code{gprof}, GNU Profiler The GNU profiler @code{gprof} is a useful tool for measuring the performance of a program---it records the number of calls to each function and the amount of time spent there, on a per-function basis. Functions which consume a large fraction of the run-time can be identified easily from the output of @code{gprof}. Efforts to speed up a program should concentrate first on those functions which dominate the total run-time. 
@cindex Collatz sequence We will use @code{gprof} to examine the performance of a small numerical program which computes the lengths of sequences occurring in the unsolved @cite{Collatz conjecture} in mathematics.@footnote{American Mathematical Monthly, Volume 92 (1985), 3--23} The Collatz conjecture involves sequences defined by the rule: @tex $$ x_{n+1} \leftarrow \cases{ x_n / 2 & if $x_n$ is even\cr 3 x_n + 1 & if $x_n$ is odd\cr} $$ @end tex @ifinfo @example x_@{n+1@} <= x_@{n@} / 2 if x_@{n@} is even 3 x_@{n@} + 1 if x_@{n@} is odd @end example @end ifinfo @noindent The sequence is iterated from an initial value @math{x_0} until it terminates with the value 1. According to the conjecture, all sequences do terminate eventually---the program below displays the longest sequences as @math{x_0} increases. The source file @file{collatz.c} contains three functions: @code{main}, @code{nseq} and @code{step}: @smallexample @verbatiminclude collatz.c @end smallexample @noindent @cindex @option{-pg} option, enable profiling @cindex @option{pg} option, enable profiling @cindex enable profiling, @option{-pg} option To use profiling, the program must be compiled and linked with the @option{-pg} profiling option: @example $ gcc -Wall -c -pg collatz.c $ gcc -Wall -pg collatz.o @end example @noindent @cindex instrumented executable, for profiling This creates an @dfn{instrumented} executable which contains additional instructions that record the time spent in each function. If the program consists of more than one source file then the @option{-pg} option should be used when compiling each source file, and used again when linking the object files to create the final executable (as shown above). Forgetting to link with the option @option{-pg} is a common error, which prevents profiling from recording any useful information. 
The executable must be run to create the profiling data: @example $ ./a.out @r{(normal program output is displayed)} @end example @noindent @cindex @code{gmon.out}, data file for @code{gprof} While running the instrumented executable, profiling data is silently written to a file @file{gmon.out} in the current directory. It can be analyzed with @code{gprof} by giving the name of the executable as an argument: @example $ gprof a.out Flat profile: Each sample counts as 0.01 seconds. % cumul. self self total time seconds seconds calls us/call us/call name 68.59 2.14 2.14 62135400 0.03 0.03 step 31.09 3.11 0.97 499999 1.94 6.22 nseq 0.32 3.12 0.01 main @end example @noindent The first column of the data shows that the program spends most of its time (almost 70%) in the function @code{step}, and 30% in @code{nseq}. Consequently efforts to decrease the run-time of the program should concentrate on the former. In comparison, the time spent within the @code{main} function itself is completely negligible (less than 1%). The other columns in the output provide information on the total number of function calls made, and the time spent in each function. Additional output breaking down the run-time further is also produced by @code{gprof} but not shown here. Full details can be found in the manual ``@cite{GNU gprof---The GNU Profiler}'', by Jay Fenlason and Richard Stallman. @node Coverage testing with gcov @section Coverage testing with @code{gcov} @cindex coverage testing, with @code{gcov} @cindex @code{gcov}, GNU coverage testing tool The GNU coverage testing tool @code{gcov} analyses the number of times each line of a program is executed during a run. This makes it possible to find areas of the code which are not used, or which are not exercised in testing. When combined with profiling information from @code{gprof} the information from coverage testing allows efforts to speed up a program to be concentrated on specific lines of the source code. 
We will use the example program below to demonstrate
@code{gcov}.  This program loops over the integers 1 to 9 and tests
their divisibility with the modulus (@code{%}) operator.

@example
@verbatiminclude cov.c
@end example

@noindent
To enable coverage testing the program must be compiled with the
following options:

@example
$ gcc -Wall -fprofile-arcs -ftest-coverage cov.c
@end example

@noindent
@cindex @option{-fprofile-arcs} option, instrument branches
@cindex @option{fprofile-arcs} option, instrument branches
@cindex @option{-ftest-coverage} option, record coverage
@cindex @option{ftest-coverage} option, record coverage
@cindex instrumented executable, for coverage testing
@cindex branches, instrumenting for coverage testing
This creates an @dfn{instrumented} executable which contains
additional instructions that record the number of times each line of
the program is executed.  The option @option{-ftest-coverage} adds
instructions for counting the number of times individual lines are
executed, while @option{-fprofile-arcs} incorporates instrumentation
code for each branch of the program.  Branch instrumentation records
how frequently different paths are taken through @samp{if} statements
and other conditionals.  The executable must then be run to create
the coverage data:

@example
$ ./a.out
3 is divisible by 3
6 is divisible by 3
9 is divisible by 3
@end example

@noindent
The data from the run is written to several files with the extensions
@file{.bb}, @file{.bbg} and @file{.da} in the current directory.
This data can be analyzed using the @code{gcov} command and the name of a source file: @example $ gcov cov.c 88.89% of 9 source lines executed in file cov.c Creating cov.c.gcov @end example @noindent The @code{gcov} command produces an annotated version of the original source file, with the file extension @file{.gcov}, containing counts of the number of times each line was executed: @example @verbatiminclude cov_c_gcov @end example @noindent The line counts can be seen in the first column of the output. Lines which were not executed are marked with hashes @samp{######}. The command @samp{grep '######' *.gcov} can be used to find parts of a program which have not been used. @node How the compiler works @chapter How the compiler works @cindex compilation, internal stages of @cindex stages of compilation, used internally @cindex compiler, how it works internally @cindex assembler, @code{as} @cindex preprocessor, @code{cpp} @cindex linker, @code{ld} @cindex archiver, @code{ar} This chapter describes in more detail how GCC transforms source files to an executable file. Compilation is a multi-stage process involving several tools, including the GNU Compiler itself (through the @code{gcc} or @code{g++} frontends), the GNU Assembler @code{as}, and the GNU Linker @code{ld}. The complete set of tools used in the compilation process is referred to as a @dfn{toolchain}. 
@menu * An overview of the compilation process:: * The preprocessor:: * The compiler:: * The assembler:: * The linker:: @end menu @node An overview of the compilation process @section An overview of the compilation process The sequence of commands executed by a single invocation of GCC consists of the following stages: @itemize @bullet @item preprocessing (to expand macros) @item compilation (from source code to assembly language) @item assembly (from assembly language to machine code) @item linking (to create the final executable) @end itemize @noindent As an example, we will examine these compilation stages individually using the @dfn{Hello World} program @file{hello.c}: @example @verbatiminclude hello.c @end example @noindent Note that it is not necessary to use any of the individual commands described in this section to compile a program. All the commands are executed automatically and transparently by GCC internally, and can be seen using the @option{-v} option described earlier (@pxref{Verbose compilation}). The purpose of this chapter is to provide an understanding of how the compiler works. Although the @dfn{Hello World} program is very simple it uses external header files and libraries, and so exercises all the major steps of the compilation process. @node The preprocessor @section The preprocessor @cindex preprocessor, first stage of compilation The first stage of the compilation process is the use of the preprocessor to expand macros and included header files. To perform this stage, GCC executes the following command:@footnote{As mentioned earlier, the preprocessor is integrated into the compiler in recent versions of GCC. 
Conceptually, the compilation process is the same as running the
preprocessor as a separate application.}

@cindex @code{.i}, preprocessed file extension for C
@cindex @code{i}, preprocessed file extension for C
@cindex extension, @code{.i} preprocessed file
@cindex file extension, @code{.i} preprocessed file
@cindex @code{.ii}, preprocessed file extension for C++
@cindex @code{ii}, preprocessed file extension for C++
@cindex extension, @code{.ii} preprocessed file
@cindex file extension, @code{.ii} preprocessed file

@example
$ cpp hello.c > hello.i
@end example

@noindent
The result is a file @file{hello.i} which contains the source code
with all macros expanded.  By convention, preprocessed files are given
the file extension @file{.i} for C programs and @file{.ii} for C++
programs.  In practice, the preprocessed file is not saved to disk
unless the @option{-save-temps} option is used.

@node The compiler
@section The compiler
@cindex compiler, converting source code to assembly code
@cindex @option{-S} option, create assembly code
@cindex @option{S} option, create assembly code
The next stage of the process is the actual compilation of
preprocessed source code to assembly language, for a specific
processor.  The command-line option @option{-S} instructs @code{gcc}
to convert the preprocessed C source code to assembly language
without creating an object file:

@example
$ gcc -Wall -S hello.i
@end example

@noindent
@cindex @code{.s}, assembly file extension
@cindex @code{s}, assembly file extension
@cindex extension, @code{.s} assembly file
@cindex file extension, @code{.s} assembly file
The resulting assembly language is stored in the file @file{hello.s}.
Here is what the @dfn{Hello World} assembly language for an Intel x86 (i686) processor looks like: @example $ cat hello.s .file "hello.c" .section .rodata .LC0: .string "Hello, world!\n" .text .globl main .type main, @@function main: pushl %ebp movl %esp, %ebp subl $8, %esp andl $-16, %esp movl $0, %eax subl %eax, %esp movl $.LC0, (%esp) call printf movl $0, %eax leave ret .size main, .-main .ident "GCC: (GNU) 3.3.1" @end example @noindent Note that the assembly language contains a call to the external function @code{printf}. @node The assembler @section The assembler @cindex assembler, converting assembly language to machine code The purpose of the assembler is to convert assembly language into machine code and generate an object file. When there are calls to external functions in the assembly source file, the assembler leaves the addresses of the external functions undefined, to be filled in later by the linker. The assembler can be invoked with the following command line: @example $ as hello.s -o hello.o @end example @noindent As with GCC, the output file is specified with the @option{-o} option. The resulting file @file{hello.o} contains the machine instructions for the @dfn{Hello World} program, with an undefined reference to @code{printf}. @node The linker @section The linker @cindex linker, @code{ld} The final stage of compilation is the linking of object files to create an executable. In practice, an executable requires many external functions from system and C run-time (@code{crt}) libraries. Consequently, the actual link commands used internally by GCC are complicated. 
For example, the full command for linking the @dfn{Hello World} program is: @example $ ld -dynamic-linker /lib/ld-linux.so.2 /usr/lib/crt1.o /usr/lib/crti.o /usr/lib/gcc-lib/i686/3.3.1/crtbegin.o -L/usr/lib/gcc-lib/i686/3.3.1 hello.o -lgcc -lgcc_eh -lc -lgcc -lgcc_eh /usr/lib/gcc-lib/i686/3.3.1/crtend.o /usr/lib/crtn.o @end example @noindent Fortunately there is never any need to type the command above directly---the entire linking process is handled transparently by @code{gcc} when invoked as follows: @example $ gcc hello.o @end example @noindent This links the object file @file{hello.o} to the C standard library, and produces an executable file @file{a.out}: @example $ ./a.out Hello, world! @end example @noindent An object file for a C++ program can be linked to the C++ standard library in the same way with a single @code{g++} command. @node Examining compiled files @chapter Examining compiled files @cindex examining compiled files @cindex compiled files, examining This chapter describes several useful tools for examining the contents of executable files and object files. @menu * Identifying files:: * Examining the symbol table:: * Finding dynamically linked libraries:: @end menu @node Identifying files @section Identifying files @cindex @code{file} command, for identifying files @cindex identifying files, with @code{file} command @cindex object file, examining with @code{file} command @cindex executable, examining with @code{file} command When a source file has been compiled to an object file or executable the options used to compile it are no longer obvious. The @command{file} command looks at the contents of an object file or executable and determines some of its characteristics, such as whether it was compiled with dynamic or static linking. 
For example, here is the result of the @command{file} command for a
typical executable:

@example
$ file a.out
a.out: ELF 32-bit LSB executable, Intel 80386,
version 1 (SYSV), dynamically linked (uses shared
libs), not stripped
@end example

@noindent
The output shows that the executable file is dynamically linked, and
compiled for the Intel 386 and compatible processors.  A full
explanation of the output is shown below:

@table @code
@item ELF
@cindex ELF format
@cindex COFF format
The internal format of the executable file (ELF stands for
``Executable and Linking Format'', other formats such as COFF
``Common Object File Format'' are used on some older operating
systems (e.g. MS-DOS)).

@item 32-bit
@cindex word-size, determined from executable file
The word size (for some platforms this would be 64-bit).

@item LSB
@cindex LSB, least significant byte
@cindex MSB, most significant byte
@cindex Motorola 680x0
@cindex word-ordering, endianness
@cindex endianness, word-ordering
@cindex big-endian, word-ordering
@cindex little-endian, word-ordering
Compiled for a platform with @dfn{least significant byte} first
word-ordering, such as Intel and AMD x86 processors (the alternative
MSB @dfn{most significant byte} first is used by other processors,
such as the Motorola 680x0)@footnote{The MSB and LSB orderings are
also known as big-endian and little-endian respectively (the terms
originate from Jonathan Swift's satire ``Gulliver's Travels'',
1726).}.  Some processors such as Itanium and MIPS support both LSB
and MSB orderings.

@item Intel 80386
The processor the executable file was compiled for.

@item version 1 (SYSV)
@cindex SYSV, System V executable format
This is the version of the internal format of the file.
@item dynamically linked
The executable uses shared libraries (@code{statically linked}
indicates programs linked statically, for example using the
@option{-static} option).

@item not stripped
@cindex @code{strip} command
The executable contains a symbol table (this can be removed with the
@command{strip} command).
@end table

@noindent
The @command{file} command can also be used on object files, where it
gives similar output.  The POSIX standard@footnote{POSIX.1 (2003
edition), IEEE Std 1003.1-2003.} for Unix systems defines the
behavior of the @command{file} command.

@node Examining the symbol table
@section Examining the symbol table
@cindex symbol table, examining with @code{nm}
@cindex @code{nm} command
As described earlier in the discussion of debugging, executables and
object files can contain a symbol table (@pxref{Compiling for
debugging}).  This table stores the location of functions and
variables by name, and can be displayed with the @command{nm}
command:

@example
$ nm a.out
08048334 t Letext
08049498 ? _DYNAMIC
08049570 ? _GLOBAL_OFFSET_TABLE_
........
080483f0 T main
08049590 b object.11
0804948c d p.3
         U printf@@GLIBC_2.0
@end example

@noindent
Among the contents of the symbol table, the output shows that the
start of the @code{main} function has the hexadecimal offset
@code{080483f0}.  Most of the symbols are for internal use by the
compiler and operating system.  A @samp{T} in the second column
indicates a function that is defined in the object file, while a
@samp{U} indicates a function which is undefined (and should be
resolved by linking against another object file).  A complete
explanation of the output of @code{nm} can be found in the @cite{GNU
Binutils} manual.

@cindex Binutils, GNU Binary Tools
The most common use of the @command{nm} command is to check whether a
library contains the definition of a specific function, by looking
for a @samp{T} entry in the second column against the function name.
@node Finding dynamically linked libraries
@section Finding dynamically linked libraries
@cindex dynamically linked libraries, examining with @code{ldd}
@cindex @code{ldd}, dynamic loader
@cindex dependencies, of shared libraries
@cindex shared libraries, dependencies
@cindex libraries, finding shared library dependencies
When a program has been compiled using shared libraries it needs to
load those libraries dynamically at run-time in order to call
external functions.  The command @command{ldd} examines an executable
and displays a list of the shared libraries that it needs.  These
libraries are referred to as the shared library @dfn{dependencies} of
the executable.  For example, the following commands demonstrate how
to find the shared library dependencies of the @dfn{Hello World}
program:

@example
$ gcc -Wall hello.c
$ ldd a.out
libc.so.6 => /lib/libc.so.6 (0x40020000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
@end example

@noindent
The output shows that the @dfn{Hello World} program depends on the C
library @code{libc} (shared library version 6) and the dynamic loader
library @code{ld-linux} (shared library version 2).  If the program
uses external libraries, such as the math library, these are also
displayed.  For example, the @code{calc} program (which uses the
@code{sqrt} function) generates the following output:

@example
$ gcc -Wall calc.c -lm -o calc
$ ldd calc
libm.so.6 => /lib/libm.so.6 (0x40020000)
libc.so.6 => /lib/libc.so.6 (0x40041000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
@end example

@noindent
The first line shows that this program depends on the math library
@code{libm} (shared library version 6), in addition to the C library
and dynamic loader library.  The @code{ldd} command can also be used
to examine shared libraries themselves, in order to follow a chain of
shared library dependencies.
@node Getting help @chapter Getting help @cindex getting help If you encounter a problem not covered by this manual, there are several reference manuals which describe GCC and language-related topics in more detail (@pxref{Further reading}). These manuals contain answers to common questions, and careful study of them will usually yield a solution. If the manuals are unclear, the most appropriate way to obtain help is to ask a knowledgeable colleague for assistance. Alternatively, there are many companies and consultants who offer commercial support for programming matters related to GCC on an hourly or ongoing basis. For businesses this can be a cost-effective way to obtain high-quality support. @cindex support, commercial @cindex consultants, providing commercial support A directory of free software support companies and their current rates can be found on the GNU Project website.@footnote{@uref{http://www.gnu.org/prep/service.html}} With free software, commercial support is available in a free market---service companies compete in quality and price, and users are not tied to any particular one. In contrast, support for proprietary software is usually only available from the original vendor. @cindex enhancements, to GCC A higher-level of commercial support for GCC is available from companies involved in the development of the GNU compiler toolchain itself. A listing of these companies can be found in the ``Development Companies'' section of the publisher's webpage for this book.@footnote{@uref{http://www.network-theory.co.uk/gcc/intro/}} These companies can provide services such as extending GCC to generate code for new CPUs or fixing bugs in the compiler. 
@node Further reading @unnumbered Further reading @cindex manuals, for GNU software The definitive guide to GCC is the official reference manual, ``@cite{Using GCC}'', published by GNU Press: @cindex Using GCC (Reference Manual) @cindex GNU Compilers, Reference Manual @itemize @asis{} @item @cite{Using GCC (for GCC version 3.3.1)} by Richard M. Stallman and the GCC Developer Community (Published by GNU Press, ISBN 1-882114-39-6) @end itemize @noindent This manual is essential for anyone working with GCC because it describes every option in detail. Note that the manual is updated when new releases of GCC become available, so the ISBN number may change in the future. If you are new to programming with GCC you will also want to learn how to use the GNU Debugger GDB, and how to compile large programs easily with GNU Make. These tools are described in the following manuals: @cindex GNU GDB Manual @cindex GNU Make Manual @itemize @asis{} @item @cite{Debugging with GDB: The GNU Source-Level Debugger} by Richard M. Stallman, Roland Pesch, Stan Shebs, et al. (Published by GNU Press, ISBN 1-882114-88-4) @item @cite{GNU Make: A Program for Directing Recompilation} by Richard M. Stallman and Roland McGrath (Published by GNU Press, ISBN 1-882114-82-5) @end itemize @noindent For effective C programming it is also essential to have a good knowledge of the C standard library. The following manual documents all the functions in the GNU C Library: @cindex GNU C Library Reference Manual @itemize @asis{} @item @cite{The GNU C Library Reference Manual} by Sandra Loosemore with Richard M. Stallman, et al (2 vols) (Published by GNU Press, ISBN 1-882114-22-1 and 1-882114-24-8) @end itemize @noindent @cindex GNU Press, manuals Be sure to check the website @uref{http://www.gnupress.org/} for the latest printed editions of manuals published by GNU Press. 
The manuals can be purchased online using a credit card at the FSF
website@footnote{@uref{http://order.fsf.org/}} in addition to being
available for order through most bookstores using the ISBNs.  Manuals
published by GNU Press raise funds for the Free Software Foundation
and the GNU Project.

@cindex shell quoting
Information about shell commands, environment variables and
shell-quoting rules can be found in the following book:

@itemize @asis{}
@item
@uref{http://www.network-theory.co.uk/bash/manual/,@cite{The GNU Bash Reference Manual},@cite{The GNU Bash Reference Manual}}
by Chet Ramey and Brian Fox (Published by Network Theory Ltd, ISBN
0-9541617-7-7)
@end itemize

@noindent
Other GNU Manuals mentioned in this book (such as @cite{GNU
gprof---The GNU Profiler} and @cite{The GNU Binutils Manual}) were
not available in print at the time this book went to press.  Links to
online copies can be found at the publisher's webpage for this
book.@footnote{@uref{http://www.network-theory.co.uk/gcc/intro/}}

The official GNU Project webpages for GCC can be found at
@uref{http://www.gnu.org/software/gcc/}.  These include a list of
frequently asked questions, as well as the GCC bug tracking database
and a lot of other useful information about GCC.

There are many books about the C and C++ languages themselves.  Two
of the standard references are:

@cindex reference books, C language
@cindex C language, further reading
@cindex books, further reading
@cindex Kernighan and Ritchie, @cite{The C Programming Language}
@itemize @asis{}
@item
@cite{The C Programming Language} (ANSI edition) Brian W.
Kernighan, Dennis Ritchie (ISBN 0-13110362-8)
@item
@cite{The C++ Programming Language} (3rd edition) Bjarne Stroustrup
(ISBN 0-20188954-4)
@end itemize

@noindent
@cindex standards, C, C++ and IEEE arithmetic
@cindex IEEE arithmetic standard, printed form
@cindex C/C++ languages, standards in printed form
@cindex ISO standards for C/C++ languages, available as books
@cindex ANSI standards for C/C++ languages, available as books
Anyone using the C and C++ languages in a professional context should
obtain a copy of the official language standards.  The reference
number of the original C standard, published in 1990 and implemented
by GCC, is ISO/IEC 9899:1990.  A revised C standard, ISO/IEC 9899:1999
(known as C99), was published in 1999, and is mostly (but not yet
fully) supported by GCC.  The C++ standard is ISO/IEC 14882.  The IEEE
floating-point arithmetic standard (IEEE-754) is also important for
any programs involving numerical computations.  These standards
documents are available commercially from the relevant standards
bodies.  The C and C++ standards are also available as printed books:

@itemize @asis{}
@item
@cite{The C Standard: Incorporating Technical Corrigendum 1}
(Published by Wiley, ISBN 0-470-84573-2)
@item
@cite{The C++ Standard} (Published by Wiley, ISBN 0-470-84674-7)
@end itemize

For ongoing learning, anyone using GCC might consider joining the
Association of C and C++ Users (ACCU).  The ACCU is a non-profit
organization devoted to professionalism in programming at all levels,
and is recognized as an authority in the field.  More information is
available from the ACCU website @uref{http://www.accu.org/}.

@cindex journals, about C and C++ programming
@cindex ACCU, Association of C and C++ users
@cindex Association of C and C++ users (ACCU)
The ACCU publishes two journals about programming in C and C++ and
organizes regular conferences.
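The language standards discussed above can be selected explicitly when invoking GCC.  A minimal sketch (the file @file{hello.c} is only a stand-in for any C source file):

```shell
# Create a trivial C source file to compile (a stand-in example):
cat > hello.c <<'EOF'
#include <stdio.h>
int main (void) { printf ("Hello, world!\n"); return 0; }
EOF

# Compile against a specific language standard:
gcc -ansi -pedantic -Wall -c hello.c     # ISO C90 (ISO/IEC 9899:1990)
gcc -std=c99 -pedantic -Wall -c hello.c  # ISO C99 (ISO/IEC 9899:1999)
```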
The annual membership fee represents a good investment for
individuals, or for companies that want to encourage a higher standard
of professional development among their staff.

@node Acknowledgements
@unnumbered Acknowledgements

Many people have contributed to this book, and it is important to
record their names here:

Thanks to Gerald Pfeifer, for his careful reviewing and numerous
suggestions for improving the book.

Thanks to Andreas Jaeger, for information on AMD64 and
multi-architecture support, and many helpful comments.

Thanks to David Edelsohn, for information on the POWER/PowerPC series
of processors.

Thanks to Jamie Lokier, for research.

Thanks to Stephen Compall, for helpful corrections.

Thanks to Gerard Jungman, for useful comments.

Thanks to Steven Rubin, for generating the chip layout for the cover
with Electric.

And most importantly, thanks to Richard Stallman, founder of the GNU
Project, for writing GCC and making it free software.

@ifnotinfo
@page
@unnumbered Other books from the publisher
@include books.texi
@unnumbered Free software organizations
@include associations.texi
@unnumbered GNU Free Documentation License
@include fdl.texi
@end ifnotinfo

@ifnothtml
@node Index
@unnumbered Index
@printindex cp
@end ifnothtml

@ifset extrablankpages
@comment final page must be blank for printed version
@page
@headings off
@*
@page
@*
@c @page
@c @*
@c @page
@c @*
@end ifset

@bye

gccintro-1.0/alpha.c
#include <stdio.h>

int main (void)
{
  double x = 1.0, y = 0.0;
  printf ("x/y = %g\n", x / y);
  return 0;
}

gccintro-1.0/ansi.c
#include <stdio.h>

int main (void)
{
  const char asm[] = "6502";
  printf ("the string asm is '%s'\n", asm);
  return 0;
}

gccintro-1.0/bad.c
#include <stdio.h>

int main (void)
{
  printf ("Two plus two is %f\n", 4);
  return 0;
}

gccintro-1.0/badpow.c
#include <stdio.h>

int main (void)
{
  double x
= pow (2.0, 3.0);
  printf ("Two cubed is %f\n", x);
  return 0;
}

gccintro-1.0/bye_fn.c
#include <stdio.h>
#include "hello.h"

void bye (void)
{
  printf ("Goodbye!\n");
}

gccintro-1.0/calc.c
#include <stdio.h>
#include <math.h>

int main (void)
{
  double x = sqrt (2.0);
  printf ("The square root of 2.0 is %f\n", x);
  return 0;
}

gccintro-1.0/calc2.c
#include <math.h>

int main (void)
{
  double x = exp (1.0);
  printf ("The value of e(1) is %f\n", x);
  return 0;
}

gccintro-1.0/castqual.c
void f (const char * str)
{
  char * s = (char *)str;
  s[0] = '\0';
}

gccintro-1.0/check.c
int main (void)
{
  double x[10], y;
  y = x[3];
  return 0;
}

gccintro-1.0/collatz.c
#include <stdio.h>

/* Computes the length of Collatz sequences */

unsigned int step (unsigned int x)
{
  if (x % 2 == 0)
    {
      return (x / 2);
    }
  else
    {
      return (3 * x + 1);
    }
}

unsigned int nseq (unsigned int x0)
{
  unsigned int i = 1, x;

  if (x0 == 1 || x0 == 0)
    return i;

  x = step (x0);

  while (x != 1 && x != 0)
    {
      x = step (x);
      i++;
    }

  return i;
}

int main (void)
{
  unsigned int i, m = 0, im = 0;

  for (i = 1; i < 500000; i++)
    {
      unsigned int k = nseq (i);

      if (k > m)
        {
          m = k;
          im = i;
          printf ("sequence length = %u for %u\n", m, im);
        }
    }

  return 0;
}

gccintro-1.0/cov.c
#include <stdio.h>

int main (void)
{
  int i;

  for (i = 1; i < 10; i++)
    {
      if (i % 3 == 0)
        printf ("%d is divisible by 3\n", i);
      if (i % 11 == 0)
        printf ("%d is divisible by 11\n", i);
    }

  return 0;
}

gccintro-1.0/dbmain.c
#include <stdio.h>
#include <gdbm.h>

int main (void)
{
  GDBM_FILE dbf;
  datum key = { "testkey", 7 };     /* key, length */
  datum value = { "testvalue", 9 }; /* value, length */

  printf ("Storing key-value pair... ");
  dbf = gdbm_open ("test", 0, GDBM_NEWDB, 0644, 0);
  gdbm_store (dbf, key, value, GDBM_INSERT);
  gdbm_close (dbf);
  printf ("done.\n");
  return 0;
}

gccintro-1.0/dtest.c
#include <stdio.h>

int main (void)
{
#ifdef TEST
  printf ("Test mode\n");
#endif
  printf ("Running...\n");
  return 0;
}

gccintro-1.0/dtestval.c
#include <stdio.h>

int main (void)
{
  printf("Value of NUM is %d\n", NUM);
  return 0;
}

gccintro-1.0/dtestval2.c
#include <stdio.h>

int main (void)
{
  printf("Value of NUM is %d\n", 2+2);
  return 0;
}

gccintro-1.0/dtestval3.c
#include <stdio.h>

int main (void)
{
  printf ("Ten times NUM is %d\n", 10 * (NUM));
  return 0;
}

gccintro-1.0/gnuarray.c
int main (int argc, char *argv[])
{
  int i, n = argc;
  double x[n];

  for (i = 0; i < n; i++)
    x[i] = i;

  return 0;
}

gccintro-1.0/heap.c
void swap (double *base, int i, int j)
{
  double tmp = base[i];
  base[i] = base[j];
  base[j] = tmp;
}

void downheap (double *data, const int N, int k)
{
  while (k <= N / 2)
    {
      int j = 2 * k;

      if (j < N && (data[j] < data[j + 1]))
        j++;

      if (data[k] < data[j])
        swap (data, j, k);
      else
        break;

      k = j;
    }
}

void heapsort (double *data, int count)
{
  int N = count - 1, k = (N / 2) + 1;

  do
    {
      downheap (data, N, --k);
    }
  while (k > 0);

  while (N > 0)
    {
      swap (data, 0, N);
      downheap (data, --N, 0);
    }
}

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
  int i, N = 1000000;
  double *x = malloc (N * sizeof (double));

  for (i = 0; i < N; i++)
    x[i] = N / (1.0 + i);

  heapsort (x, N);

  for (i = 0; i < N; i++)
    printf ("%d %g\n", i, x[i]);
}

gccintro-1.0/hello.c
#include <stdio.h>

int main (void)
{
  printf ("Hello, world!\n");
  return 0;
}

gccintro-1.0/hello2.c
#include <stdio.h>
#include "hello.h"

void hello (const char *
name)
{
  printf ("Good morning, %s!\n", name);
}

gccintro-1.0/hello_fn.c
#include <stdio.h>
#include "hello.h"

void hello (const char * name)
{
  printf ("Hello, %s!\n", name);
}

gccintro-1.0/hellobad.c
int main (void)
{
  printf ("Hello, world!\n");
  return 0;
}

gccintro-1.0/hellocc.c
#include <iostream>

int main ()
{
  std::cout << "Hello, world!" << std::endl;
  return 0;
}

gccintro-1.0/inline.c
#include <stdio.h>

double sq (double x)
{
  return x * x;
}

int main (void)
{
  int i;
  double sum = 0;

  for (i = 0; i < 100000000; i++)
    sum += sq (i + 0.5);

  return 0;
}

gccintro-1.0/inline2.c
#include <math.h>
#include <stdio.h>

int main (void)
{
  int i;
  double sum = 0;

  for (i = 0; i < 1e7; i++)
    sum += (i + 0.5) * (i + 0.5);

  printf ("sum = %g\n", sum);
  return 0;
}

gccintro-1.0/main.c
#include "hello.h"

int main (void)
{
  hello ("world");
  return 0;
}

gccintro-1.0/main2.c
#include "hello.h"

int main (void)
{
  hello ("everyone");  /* changed from "world" */
  return 0;
}

gccintro-1.0/main3.c
#include "hello.h"

int main (void)
{
  hello ("everyone");
  bye ();
  return 0;
}

gccintro-1.0/main4.c
#include <stdio.h>

int main (void)
{
  printf ("hello world\n");
  return;
}

gccintro-1.0/nested.c

gccintro-1.0/null.c
int a (int *p);

int main (void)
{
  int *p = 0;  /* null pointer */
  return a (p);
}

int a (int *p)
{
  int y = *p;
  return y;
}

gccintro-1.0/optim.c
#include <stdio.h>

double powern (double d, unsigned n)
{
  double x = 1.0;
  unsigned j;

  for (j = 1; j <= n; j++)
    x *= d;

  return x;
}

int main (void)
{
  double sum = 0.0;
  unsigned i;

  for (i = 1; i <= 100000000; i++)
    {
      sum += powern (i, i % 5);
    }

  printf ("sum = %g\n", sum);
  return 0;
}

gccintro-1.0/pi.c
#include <math.h>
#include <stdio.h>

int main (void)
{
  printf("the value of pi is %f\n", M_PI);
  return 0;
}

gccintro-1.0/prof.c
#include <stdio.h>

double f (int i)
{
  if (i == 1)
    return i;

  return i * f (i - 1);
}

double g (int i)
{
  double p = 1;
  int k;

  for (k = 1; k <= i; k++)
    p *= k;

  return p;
}

int main (void)
{
  printf ("f(30) = %g\n", f (30));
  printf ("g(30) = %g\n", g (30));
  return 0;
}

gccintro-1.0/readdir.c
#include <sys/types.h>
#include <dirent.h>
#include <stdio.h>

int main (void)
{
  DIR * d = opendir(".");
  struct dirent * f;

  while ((f = readdir(d)) != 0)
    {
      printf("%s\n", f->d_name);
    }

  return closedir(d);
}

gccintro-1.0/shadow.c
double test (double x)
{
  double y = 1.0;

  {
    double y;
    y = x;
  }

  return y;
}

gccintro-1.0/shadow2.c
double sin_series (double x)
{
  /* series expansion for small x */
  double sin = x * (1.0 - x * x / 6.0);
  return sin;
}

gccintro-1.0/stl2.c
#include <iostream>
#include <list>
#include <string>
using namespace std;

void foo();

int main ()
{
  list<string> list;
  list.push_front("Hello");
  list.push_back("World");
  cout << "List size = " << list.size() << endl;
  foo();
  return 0;
}

gccintro-1.0/stl3.c
#include <iostream>
#include <list>
#include <string>
using namespace std;

void foo(void)
{
  list<string> list;
  list.push_front("Hello");
  list.push_back("World");
  cout << "List size = " << list.size() << endl;
}

gccintro-1.0/sus.c
int foo(unsigned int i)
{
  return (i < 12345678901);
}

gccintro-1.0/test.c
#define TEST "Hello, World!"
const char str[] = TEST;

gccintro-1.0/uninit.c
int sign (int x)
{
  int s;

  if (x > 0)
    s = 1;
  else if (x < 0)
    s = -1;

  return s;
}

gccintro-1.0/uninit2.c
#include <stdio.h>

int main (void)
{
  double x;
  printf("x=%g\n", x);
}

gccintro-1.0/w.c
int foo (unsigned int x)
{
  if (x < 0)
    return 0; /* cannot occur */
  else
    return 1;
}

gccintro-1.0/whetstone.c
/*
 * C Converted Whetstone Double Precision Benchmark
 * Version 1.2  22 March 1998
 *
 * (c) Copyright 1998 Painter Engineering, Inc.
 * All Rights Reserved.
 *
 * Permission is granted to use, duplicate, and
 * publish this text and program as long as it
 * includes this entire comment block and limited
 * rights reference.
 *
 * Converted by Rich Painter, Painter Engineering, Inc. based on the
 * www.netlib.org benchmark/whetstoned version obtained 16 March 1998.
 *
 * A novel approach was used here to keep the look and feel of the
 * FORTRAN version. Altering the FORTRAN-based array indices,
 * starting at element 1, to start at element 0 for C, would require
 * numerous changes, including decrementing the variable indices by 1.
 * Instead, the array E1[] was declared 1 element larger in C. This
 * allows the FORTRAN index range to function without any literal or
 * variable indices changes. The array element E1[0] is simply never
 * used and does not alter the benchmark results.
 *
 * The major FORTRAN comment blocks were retained to minimize
 * differences between versions. Modules N5 and N12, like in the
 * FORTRAN version, have been eliminated here.
 *
 * An optional command-line argument has been provided [-c] to
 * offer continuous repetition of the entire benchmark.
 * An optional argument for setting an alternate LOOP count is also
 * provided. Define PRINTOUT to cause the POUT() function to print
 * outputs at various stages. Final timing measurements should be
Final timing measurements should be * made with the PRINTOUT undefined. * * Questions and comments may be directed to the author at * r.painter@ieee.org */ /* C********************************************************************** C Benchmark #2 -- Double Precision Whetstone (A001) C C o This is a REAL*8 version of C the Whetstone benchmark program. C C o DO-loop semantics are ANSI-66 compatible. C C o Final measurements are to be made with all C WRITE statements and FORMAT sttements removed. C C********************************************************************** */ /* standard C library headers required */ #include #include #include #include /* the following is optional depending on the timing function used */ #include /* map the FORTRAN math functions, etc. to the C versions */ #define DSIN sin #define DCOS cos #define DATAN atan #define DLOG log #define DEXP exp #define DSQRT sqrt #define IF if /* function prototypes */ void POUT(long N, long J, long K, double X1, double X2, double X3, double X4); void PA(double E[]); void P0(void); void P3(double X, double Y, double *Z); #define USAGE "usage: whetdc [-c] [loops]\n" /* COMMON T,T1,T2,E1(4),J,K,L */ double T,T1,T2,E1[5]; int J,K,L; int main(int argc, char *argv[]) { /* used in the FORTRAN version */ long I; long N1, N2, N3, N4, N6, N7, N8, N9, N10, N11; double X1,X2,X3,X4,X,Y,Z; long LOOP; int II, JJ; /* added for this version */ long loopstart; long startsec, finisec; float KIPS; int continuous; loopstart = 1000; /* see the note about LOOP below */ continuous = 0; II = 1; /* start at the first arg (temp use of II here) */ while (II < argc) { if (strncmp(argv[II], "-c", 2) == 0 || argv[II][0] == 'c') { continuous = 1; } else if (atol(argv[II]) > 0) { loopstart = atol(argv[II]); } else { fprintf(stderr, USAGE); return(1); } II++; } LCONT: /* C C Start benchmark timing at this point. C */ startsec = time(0); /* C C The actual benchmark starts here. 
C
*/
        T  = .499975;
        T1 = 0.50025;
        T2 = 2.0;
/*
C
C       With loopcount LOOP=10, one million Whetstone instructions
C       will be executed in EACH MAJOR LOOP..A MAJOR LOOP IS EXECUTED
C       'II' TIMES TO INCREASE WALL-CLOCK TIMING ACCURACY.
C
        LOOP = 1000;
*/
        LOOP = loopstart;
        II = 1;
        JJ = 1;

IILOOP:
        N1  = 0;
        N2  = 12 * LOOP;
        N3  = 14 * LOOP;
        N4  = 345 * LOOP;
        N6  = 210 * LOOP;
        N7  = 32 * LOOP;
        N8  = 899 * LOOP;
        N9  = 616 * LOOP;
        N10 = 0;
        N11 = 93 * LOOP;
/*
C
C       Module 1: Simple identifiers
C
*/
        X1 = 1.0;
        X2 = -1.0;
        X3 = -1.0;
        X4 = -1.0;

        for (I = 1; I <= N1; I++) {
            X1 = (X1 + X2 + X3 - X4) * T;
            X2 = (X1 + X2 - X3 + X4) * T;
            X3 = (X1 - X2 + X3 + X4) * T;
            X4 = (-X1+ X2 + X3 + X4) * T;
        }
#ifdef PRINTOUT
        IF (JJ==II)POUT(N1,N1,N1,X1,X2,X3,X4);
#endif

/*
C
C       Module 2: Array elements
C
*/
        E1[1] = 1.0;
        E1[2] = -1.0;
        E1[3] = -1.0;
        E1[4] = -1.0;

        for (I = 1; I <= N2; I++) {
            E1[1] = ( E1[1] + E1[2] + E1[3] - E1[4]) * T;
            E1[2] = ( E1[1] + E1[2] - E1[3] + E1[4]) * T;
            E1[3] = ( E1[1] - E1[2] + E1[3] + E1[4]) * T;
            E1[4] = (-E1[1] + E1[2] + E1[3] + E1[4]) * T;
        }
#ifdef PRINTOUT
        IF (JJ==II)POUT(N2,N3,N2,E1[1],E1[2],E1[3],E1[4]);
#endif

/*
C
C       Module 3: Array as parameter
C
*/
        for (I = 1; I <= N3; I++)
                PA(E1);
#ifdef PRINTOUT
        IF (JJ==II)POUT(N3,N2,N2,E1[1],E1[2],E1[3],E1[4]);
#endif

/*
C
C       Module 4: Conditional jumps
C
*/
        J = 1;
        for (I = 1; I <= N4; I++) {
                if (J == 1)
                        J = 2;
                else
                        J = 3;

                if (J > 2)
                        J = 0;
                else
                        J = 1;

                if (J < 1)
                        J = 1;
                else
                        J = 0;
        }
#ifdef PRINTOUT
        IF (JJ==II)POUT(N4,J,J,X1,X2,X3,X4);
#endif

/*
C
C       Module 5: Omitted
C       Module 6: Integer arithmetic
C
*/
        J = 1;
        K = 2;
        L = 3;

        for (I = 1; I <= N6; I++) {
                J = J * (K-J) * (L-K);
                K = L * K - (L-J) * K;
                L = (L-K) * (K+J);
                E1[L-1] = J + K + L;
                E1[K-1] = J * K * L;
        }
#ifdef PRINTOUT
        IF (JJ==II)POUT(N6,J,K,E1[1],E1[2],E1[3],E1[4]);
#endif

/*
C
C       Module 7: Trigonometric functions
C
*/
        X = 0.5;
        Y = 0.5;

        for (I = 1; I <= N7; I++) {
                X = T * DATAN(T2*DSIN(X)*DCOS(X)/(DCOS(X+Y)+DCOS(X-Y)-1.0));
                Y = T * DATAN(T2*DSIN(Y)*DCOS(Y)/(DCOS(X+Y)+DCOS(X-Y)-1.0));
        }
#ifdef PRINTOUT
        IF (JJ==II)POUT(N7,J,K,X,X,Y,Y);
#endif

/*
C
C       Module 8: Procedure calls
C
*/
        X = 1.0;
        Y = 1.0;
        Z = 1.0;

        for (I = 1; I <= N8; I++)
                P3(X,Y,&Z);
#ifdef PRINTOUT
        IF (JJ==II)POUT(N8,J,K,X,Y,Z,Z);
#endif

/*
C
C       Module 9: Array references
C
*/
        J = 1;
        K = 2;
        L = 3;
        E1[1] = 1.0;
        E1[2] = 2.0;
        E1[3] = 3.0;

        for (I = 1; I <= N9; I++)
                P0();
#ifdef PRINTOUT
        IF (JJ==II)POUT(N9,J,K,E1[1],E1[2],E1[3],E1[4]);
#endif

/*
C
C       Module 10: Integer arithmetic
C
*/
        J = 2;
        K = 3;

        for (I = 1; I <= N10; I++) {
                J = J + K;
                K = J + K;
                J = K - J;
                K = K - J - J;
        }
#ifdef PRINTOUT
        IF (JJ==II)POUT(N10,J,K,X1,X2,X3,X4);
#endif

/*
C
C       Module 11: Standard functions
C
*/
        X = 0.75;

        for (I = 1; I <= N11; I++)
                X = DSQRT(DEXP(DLOG(X)/T1));
#ifdef PRINTOUT
        IF (JJ==II)POUT(N11,J,K,X,X,X,X);
#endif

/*
C
C       THIS IS THE END OF THE MAJOR LOOP.
C
*/
        if (++JJ <= II)
                goto IILOOP;

/*
C
C       Stop benchmark timing at this point.
C
*/
        finisec = time(0);

/*
C----------------------------------------------------------------
C       Performance in Whetstone KIP's per second is given by
C
C       (100*LOOP*II)/TIME
C
C       where TIME is in seconds.
C--------------------------------------------------------------------
*/
        printf("\n");
        if (finisec-startsec <= 0) {
                printf("Insufficient duration- Increase the LOOP count\n");
                return(1);
        }

        printf("Loops: %ld, Iterations: %d, Duration: %ld sec.\n",
                        LOOP, II, finisec-startsec);

        KIPS = (100.0*LOOP*II)/(float)(finisec-startsec);
        if (KIPS >= 1000.0)
                printf("C Converted Double Precision Whetstones: %.1f MIPS\n",
                                KIPS/1000.0);
        else
                printf("C Converted Double Precision Whetstones: %.1f KIPS\n",
                                KIPS);

        if (continuous)
                goto LCONT;

        return(0);
}

void PA(double E[])
{
        J = 0;

L10:
        E[1] = ( E[1] + E[2] + E[3] - E[4]) * T;
        E[2] = ( E[1] + E[2] - E[3] + E[4]) * T;
        E[3] = ( E[1] - E[2] + E[3] + E[4]) * T;
        E[4] = (-E[1] + E[2] + E[3] + E[4]) / T2;
        J += 1;

        if (J < 6)
                goto L10;
}

void P0(void)
{
        E1[J] = E1[K];
        E1[K] = E1[L];
        E1[L] = E1[J];
}

void P3(double X, double Y, double *Z)
{
        double X1, Y1;

        X1 = X;
        Y1 = Y;
        X1 = T * (X1 + Y1);
        Y1 = T * (X1 + Y1);
        *Z = (X1 + Y1) / T2;
}

#ifdef PRINTOUT
void POUT(long N, long J, long K, double X1, double X2, double X3, double X4)
{
        printf("%7ld %7ld %7ld %12.4e %12.4e %12.4e %12.4e\n",
                        N, J, K, X1, X2, X3, X4);
}
#endif

gccintro-1.0/buffer.h
#ifndef BUFFER_H
#define BUFFER_H

template <class T>
class Buffer
{
public:
  Buffer (unsigned int n);
  void insert (const T & x);
  T get (unsigned int k) const;
private:
  unsigned int i;
  unsigned int size;
  T *pT;
};

template <class T>
Buffer<T>::Buffer (unsigned int n)
{
  i = 0;
  size = n;
  pT = new T[n];
};

template <class T>
void Buffer<T>::insert (const T & x)
{
  i = (i + 1) % size;
  pT[i] = x;
};

template <class T>
T Buffer<T>::get (unsigned int k) const
{
  return pT[(i + (size - k)) % size];
};

#endif /* BUFFER_H */

gccintro-1.0/grid.h
template <class T>
class Grid
{
public:
  Grid(int x, int y);
  void set(int x, int y, T f);
  T get(int x, int y);
private:
  int nx;
  int ny;
  T * pT;
};

template <class T>
Grid<T>::Grid(int x, int y)
{
  nx = x;
  ny = y;
  pT = new T[nx * ny];
};

template <class T>
void Grid<T>::set(int x, int y, T f)
{
  pT[x*ny+y] = f;
};

template <class T>
T Grid<T>::get(int x, int y)
{
  return pT[x*ny+y];
};

gccintro-1.0/hello.h
void hello (const char * name);
void bye (void);

gccintro-1.0/hello1.h
void hello (const char * name);

gccintro-1.0/tprint.h
void print (Buffer buff);

gccintro-1.0/hello.cc
#include <iostream>

int main ()
{
  std::cout << "Hello, world!" << std::endl;
  return 0;
}

gccintro-1.0/hellostr.cc
#include <iostream>
#include <string>
using namespace std;

int main ()
{
  string s1 = "Hello,";
  string s2 = "World!";
  cout << s1 + " " + s2 << endl;
  return 0;
}

gccintro-1.0/string.cc
#include <iostream>
#include <list>
#include <string>
using namespace std;

int main ()
{
  list<string> list;
  list.push_back("Hello");
  list.push_back("World");
  cout << "List size = " << list.size() << endl;
  return 0;
}

gccintro-1.0/templates.cc
#include "buffer.h"
template class Buffer;

gccintro-1.0/templates2.cc
#include "buffer.h"
template class Buffer;
template class Buffer;
template class Buffer;

gccintro-1.0/tmain.cc
#include <iostream>
#include "buffer.h"
#include "tprint.h"
using namespace std;

int main ()
{
  Buffer f(10);
  f.insert (0.23);
  f.insert (1.0 + f.get());
  print (f);
  return 0;
}

gccintro-1.0/tmp.cc
#include <iostream>
#include "buffer.h"
using namespace std;

int main ()
{
  Buffer f(10);
  f.insert (0.25);
  f.insert (1.0 + f.get(0));
  cout << "stored value 0 = " << f.get(0) << endl;
  cout << "stored value 1 = " << f.get(1) << endl;
  return 0;
}

gccintro-1.0/tprint.cc
#include <iostream>
#include "buffer.h"
#include "tprint.h"
using
namespace std;

void print (Buffer buff)
{
  cout << "stored value = " << buff.get() << endl;
}

gccintro-1.0/tprog.cc
#include <iostream>
#include "buffer.h"
using namespace std;

int main ()
{
  Buffer f(10);
  f.insert (0.25);
  f.insert (1.0 + f.get(0));
  cout << "stored value = " << f.get(0) << endl;
  return 0;
}

gccintro-1.0/cov_c_gcov
                #include <stdio.h>

                int main (void)
                {
           1      int i;

          10      for (i = 1; i < 10; i++)
                    {
           9          if (i % 3 == 0)
           3            printf ("%d is divisible by 3\n", i);
           9          if (i % 11 == 0)
      ######            printf ("%d is divisible by 11\n", i);
           9        }
           1      return 0;
           1    }

gccintro-1.0/COPYING.FDL
GNU Free Documentation License Version 1.2, November 2002 Copyright (C) 2000,2001,2002 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. 0. PREAMBLE The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others. This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software. We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does.
But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference. 1. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law. A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language. A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them. The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. 
If the Document does not identify any Invariant Sections then there are none. The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words. A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque". Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only. The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. 
For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text. A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. 2. VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. You may also lend copies, under the same conditions stated above, and you may publicly display copies. 3. 
COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects. If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages. If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document. 
4. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version: A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission. B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement. C. State on the Title page the name of the publisher of the Modified Version, as the publisher. D. Preserve all the copyright notices of the Document. E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below. G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. H. Include an unaltered copy of this License. I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. 
If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence. J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission. K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles. M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version. N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section. O. Preserve any Warranty Disclaimers. If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles. 
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. 5. COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers. The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. 
Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements". 6. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. 7. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. 
Otherwise they must appear on printed covers that bracket the whole aggregate. 8. TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. 9. TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 10. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/. Each version of the License is given a distinguishing version number. 
If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. ADDENDUM: How to use this License for your documents To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page: Copyright (c) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with...Texts." line with this: with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST. If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation. If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software. This is gccintro.info, produced by makeinfo version 4.6 from gccintro.texi. 
File: gccintro.info, Node: Top, Next: Introduction, Up: (dir)

An Introduction to GCC
**********************

This manual provides an introduction to the GNU C and C++ Compilers, `gcc' and `g++', which are part of the GNU Compiler Collection (GCC).

The development of this manual was funded entirely by Network Theory Ltd. Copies published by Network Theory Ltd raise money for more free documentation.

* Menu:

* Introduction::
* Compiling a C program::
* Compilation options::
* Using the preprocessor::
* Compiling for debugging::
* Compiling with optimization::
* Compiling a C++ program::
* Platform-specific options::
* Troubleshooting::
* Compiler-related tools::
* How the compiler works::
* Examining compiled files::
* Getting help::
* Further reading::
* Acknowledgements::
* Index::


File: gccintro.info, Node: Introduction, Next: Compiling a C program, Prev: Top, Up: Top

Introduction
************

The purpose of this book is to explain the use of the GNU C and C++ compilers, `gcc' and `g++'. After reading this book you should understand how to compile a program, and how to use basic compiler options for optimization and debugging. This book does not attempt to teach the C or C++ languages themselves, since this material can be found in many other places (*note Further reading::).

Experienced programmers who are familiar with other systems, but new to the GNU compilers, can skip the early sections of the chapters "`Compiling a C program'", "`Using the preprocessor'" and "`Compiling a C++ program'". The remaining sections and chapters should provide a good overview of the features of GCC for those who already know how to use other compilers.
* Menu:

* A brief history of GCC::
* Major features of GCC::
* Programming in C and C++::
* Conventions used in this manual::


File: gccintro.info, Node: A brief history of GCC, Next: Major features of GCC, Up: Introduction

A brief history of GCC
======================

The original author of the GNU C Compiler (GCC) is Richard Stallman, the founder of the GNU Project. The GNU project was started in 1984 to create a complete Unix-like operating system as free software, in order to promote freedom and cooperation among computer users and programmers.

Every Unix-like operating system needs a C compiler, and as there were no free compilers in existence at that time, the GNU Project had to develop one from scratch. The work was funded by donations from individuals and companies to the Free Software Foundation, a non-profit organization set up to support the work of the GNU Project.

The first release of GCC was made in 1987. This was a significant breakthrough, being the first portable ANSI C optimizing compiler released as free software. Since that time GCC has become one of the most important tools in the development of free software.

A major revision of the compiler came with the 2.0 series in 1992, which added the ability to compile C++. In 1997 an experimental branch of the compiler (EGCS) was created, to improve optimization and C++ support. Following this work, EGCS was adopted as the new main-line of GCC development, and these features became widely available in the 3.0 release of GCC in 2001.

Over time GCC has been extended to support many additional languages, including Fortran, Ada, Java and Objective-C. The acronym GCC is now used to refer to the "GNU Compiler Collection". Its development is guided by the "GCC Steering Committee", a group composed of representatives from GCC user communities in industry, research and academia.

File: gccintro.info, Node: Major features of GCC, Next: Programming in C and C++, Prev: A brief history of GCC, Up: Introduction

Major features of GCC
=====================

This section describes some of the most important features of GCC.

First of all, GCC is a portable compiler--it runs on most platforms available today, and can produce output for many types of processors. In addition to the processors used in personal computers, it also supports microcontrollers, DSPs and 64-bit CPUs.

GCC is not only a native compiler--it can also "cross-compile" any program, producing executable files for a different system from the one used by GCC itself. This allows software to be compiled for embedded systems which are not capable of running a compiler. GCC is written in C with a strong focus on portability, and can compile itself, so it can be adapted to new systems easily.

GCC has multiple language "front-ends", for parsing different languages. Programs in each language can be compiled, or cross-compiled, for any architecture. For example, an Ada program can be compiled for a microcontroller, or a C program for a supercomputer.

GCC has a modular design, allowing support for new languages and architectures to be added. Adding a new language front-end to GCC enables the use of that language on any architecture, provided that the necessary run-time facilities (such as libraries) are available. Similarly, adding support for a new architecture makes it available to all languages.

Finally, and most importantly, GCC is free software, distributed under the GNU General Public License (GNU GPL).(1) This means you have the freedom to use and to modify GCC, as with all GNU software. If you need support for a new type of CPU, a new language, or a new feature you can add it yourself, or hire someone to enhance GCC for you. You can hire someone to fix a bug if it is important for your work. Furthermore, you have the freedom to share any enhancements you make to GCC.
As a result of this freedom you can also make use of enhancements to GCC developed by others. The many features offered by GCC today show how this freedom to cooperate works to benefit you, and everyone else who uses GCC.

---------- Footnotes ----------

(1) For details see the license file `COPYING' distributed with GCC.


File: gccintro.info, Node: Programming in C and C++, Next: Conventions used in this manual, Prev: Major features of GCC, Up: Introduction

Programming in C and C++
========================

C and C++ are languages that allow direct access to the computer's memory. Historically, they have been used for writing low-level systems software, and applications where high-performance or control over resource usage are critical. However, great care is required to ensure that memory is accessed correctly, to avoid corrupting other data-structures. This book describes techniques that will help in detecting potential errors during compilation, but the risk in using languages like C or C++ can never be eliminated.

In addition to C and C++ the GNU Project also provides other high-level languages, such as GNU Common Lisp (`gcl'), GNU Smalltalk (`gst'), the GNU Scheme extension language (`guile') and the GNU Compiler for Java (`gcj'). These languages do not allow the user to access memory directly, eliminating the possibility of memory access errors. They are a safer alternative to C and C++ for many applications.


File: gccintro.info, Node: Conventions used in this manual, Prev: Programming in C and C++, Up: Introduction

Conventions used in this manual
===============================

This manual contains many examples which can be typed at the keyboard. A command entered at the terminal is shown like this,

     $ command

followed by its output. For example:

     $ echo "hello world"
     hello world

The first character on the line is the terminal prompt, and should not be typed.
The dollar sign `$' is used as the standard prompt in this manual, although some systems may use a different character.

When a command in an example is too long to fit in a single line it is wrapped and then indented on subsequent lines, like this:

     $ echo "an example of a line which is too long to fit
       in this manual"

When entered at the keyboard, the entire command should be typed on a single line.

The example source files used in this manual can be downloaded from the publisher's website,(1) or entered by hand using any text editor, such as the standard GNU editor, `emacs'. The example compilation commands use `gcc' and `g++' as the names of the GNU C and C++ compilers, and `cc' to refer to other compilers. The example programs should work with any version of GCC. Any command-line options which are only available in recent versions of GCC are noted in the text.

The examples assume the use of a GNU operating system--there may be minor differences in the output on other systems. Some non-essential and verbose system-dependent output messages (such as very long system paths) have been edited in the examples for brevity. The commands for setting environment variables use the syntax of the standard GNU shell (`bash'), and should work with any version of the Bourne shell.

---------- Footnotes ----------

(1) See `http://www.network-theory.co.uk/gcc/intro/'


File: gccintro.info, Node: Compiling a C program, Next: Compilation options, Prev: Introduction, Up: Top

Compiling a C program
*********************

This chapter describes how to compile C programs using `gcc'. Programs can be compiled from a single source file or from multiple source files, and may use system libraries and header files.

Compilation refers to the process of converting a program from the textual "source code", in a programming language such as C or C++, into "machine code", the sequence of 1's and 0's used to control the central processing unit (CPU) of the computer.
This machine code is then stored in a file known as an "executable file", sometimes referred to as a "binary file".

* Menu:

* Compiling a simple C program::
* Finding errors in a simple program::
* Compiling multiple source files::
* Compiling files independently::
* Recompiling and relinking::
* Linking with external libraries::
* Using library header files::


File: gccintro.info, Node: Compiling a simple C program, Next: Finding errors in a simple program, Up: Compiling a C program

Compiling a simple C program
============================

The classic example program for the C language is "Hello World". Here is the source code for our version of the program:

     #include <stdio.h>

     int
     main (void)
     {
       printf ("Hello, world!\n");
       return 0;
     }

We will assume that the source code is stored in a file called `hello.c'. To compile the file `hello.c' with `gcc', use the following command:

     $ gcc -Wall hello.c -o hello

This compiles the source code in `hello.c' to machine code and stores it in an executable file `hello'. The output file for the machine code is specified using the `-o' option. This option is usually given as the last argument on the command line. If it is omitted, the output is written to a default file called `a.out'. Note that if a file with the same name as the executable file already exists in the current directory it will be overwritten.

The option `-Wall' turns on all the most commonly-used compiler warnings--*it is recommended that you always use this option!* There are many other warning options which will be discussed in later chapters, but `-Wall' is the most important. GCC will not produce any warnings unless they are enabled. Compiler warnings are an essential aid in detecting problems when programming in C and C++.

In this case, the compiler does not produce any warnings with the `-Wall' option, since the program is completely valid. Source code which does not produce any warnings is said to "compile cleanly".
To run the program, type the path name of the executable like this:

     $ ./hello
     Hello, world!

This loads the executable file into memory and causes the CPU to begin executing the instructions contained within it. The path `./' refers to the current directory, so `./hello' loads and runs the executable file `hello' located in the current directory.


File: gccintro.info, Node: Finding errors in a simple program, Next: Compiling multiple source files, Prev: Compiling a simple C program, Up: Compiling a C program

Finding errors in a simple program
==================================

As mentioned above, compiler warnings are an essential aid when programming in C and C++. To demonstrate this, the program below contains a subtle error: it uses the function `printf' incorrectly, by specifying a floating-point format `%f' for an integer value:

     #include <stdio.h>

     int
     main (void)
     {
       printf ("Two plus two is %f\n", 4);
       return 0;
     }

This error is not obvious at first sight, but can be detected by the compiler if the warning option `-Wall' has been enabled. Compiling the program above, `bad.c', with the warning option `-Wall' produces the following message:

     $ gcc -Wall bad.c -o bad
     bad.c: In function `main':
     bad.c:6: warning: double format, different type arg (arg 2)

This indicates that a format string has been used incorrectly in the file `bad.c' at line 6. The messages produced by GCC always have the form file:line-number:message. The compiler distinguishes between "error messages", which prevent successful compilation, and "warning messages" which indicate possible problems (but do not stop the program from compiling).

In this case, the correct format specifier would have been `%d' (the allowed format specifiers for `printf' can be found in any general book on C, such as the `GNU C Library Reference Manual', *note Further reading::).
Without the warning option `-Wall' the program appears to compile cleanly, but produces incorrect results:

     $ gcc bad.c -o bad
     $ ./bad
     Two plus two is 2.585495    (incorrect output)

The incorrect format specifier causes the output to be corrupted, because the function `printf' is passed an integer instead of a floating-point number. Integers and floating-point numbers are stored in different formats in memory, and generally occupy different numbers of bytes, leading to a spurious result. The actual output shown above may differ, depending on the specific platform and environment.

Clearly, it is very dangerous to develop a program without checking for compiler warnings. If there are any functions which are not used correctly they can cause the program to crash, or to produce incorrect results. Turning on the compiler warning option `-Wall' will catch many of the commonest errors which occur in C programming.


File: gccintro.info, Node: Compiling multiple source files, Next: Compiling files independently, Prev: Finding errors in a simple program, Up: Compiling a C program

Compiling multiple source files
===============================

A program can be split up into multiple files. This makes it easier to edit and understand, especially in the case of large programs--it also allows the individual parts to be compiled independently.

In the following example we will split up the program "Hello World" into three files: `main.c', `hello_fn.c' and the header file `hello.h'. Here is the main program `main.c':

     #include "hello.h"

     int
     main (void)
     {
       hello ("world");
       return 0;
     }

The original call to the `printf' system function in the previous program `hello.c' has been replaced by a call to a new external function `hello', which we will define in a separate file `hello_fn.c'.

The main program also includes the header file `hello.h' which will contain the declaration of the function `hello'.
The declaration is used to ensure that the types of the arguments and return value match up correctly between the function call and the function definition. We no longer need to include the system header file `stdio.h' in `main.c' to declare the function `printf', since the file `main.c' does not call `printf' directly.

The declaration in `hello.h' is a single line specifying the prototype of the function `hello':

     void hello (const char * name);

The definition of the function `hello' itself is contained in the file `hello_fn.c':

     #include <stdio.h>
     #include "hello.h"

     void
     hello (const char * name)
     {
       printf ("Hello, %s!\n", name);
     }

This function prints the message "`Hello, 'NAME`!'" using its argument as the value of NAME.

Incidentally, the difference between the two forms of the include statement `#include "FILE.h"' and `#include <FILE.h>' is that the former searches for `FILE.h' in the current directory before looking in the system header file directories. The include statement `#include <FILE.h>' searches the system header files, but does not look in the current directory by default.

To compile these source files with `gcc', use the following command:

     $ gcc -Wall main.c hello_fn.c -o newhello

In this case, we use the `-o' option to specify a different output file for the executable, `newhello'. Note that the header file `hello.h' is not specified in the list of files on the command line. The directive `#include "hello.h"' in the source files instructs the compiler to include it automatically at the appropriate points.

To run the program, type the path name of the executable:

     $ ./newhello
     Hello, world!

All the parts of the program have been combined into a single executable file, which produces the same result as the executable created from the single source file used earlier.

File: gccintro.info, Node: Compiling files independently, Next: Recompiling and relinking, Prev: Compiling multiple source files, Up: Compiling a C program

Compiling files independently
=============================

If a program is stored in a single file then any change to an individual function requires the whole program to be recompiled to produce a new executable. The recompilation of large source files can be very time-consuming.

When programs are stored in independent source files, only the files which have changed need to be recompiled after the source code has been modified. In this approach, the source files are compiled separately and then "linked" together--a two-stage process. In the first stage, a file is compiled without creating an executable. The result is referred to as an "object file", and has the extension `.o' when using GCC. In the second stage, the object files are merged together by a separate program called the "linker". The linker combines all the object files together to create a single executable.

An object file contains machine code where any references to the memory addresses of functions (or variables) in other files are left undefined. This allows source files to be compiled without direct reference to each other. The linker fills in these missing addresses when it produces the executable.

* Menu:

* Creating object files from source files::
* Creating executables from object files::
* Link order of object files::


File: gccintro.info, Node: Creating object files from source files, Next: Creating executables from object files, Up: Compiling files independently

Creating object files from source files
---------------------------------------

The command-line option `-c' is used to compile a source file to an object file. For example, the following command will compile the source file `main.c' to an object file:

     $ gcc -Wall -c main.c

This produces an object file `main.o' containing the machine code for the `main' function.
It contains a reference to the external function `hello', but the corresponding memory address is left undefined in the object file at this stage (it will be filled in later by linking).

The corresponding command for compiling the `hello' function in the source file `hello_fn.c' is:

     $ gcc -Wall -c hello_fn.c

This produces the object file `hello_fn.o'.

Note that there is no need to use the option `-o' to specify the name of the output file in this case. When compiling with `-c' the compiler automatically creates an object file whose name is the same as the source file, with `.o' instead of the original extension.

There is no need to put the header file `hello.h' on the command line, since it is automatically included by the `#include' statements in `main.c' and `hello_fn.c'.


File: gccintro.info, Node: Creating executables from object files, Next: Link order of object files, Prev: Creating object files from source files, Up: Compiling files independently

Creating executables from object files
--------------------------------------

The final step in creating an executable file is to use `gcc' to link the object files together and fill in the missing addresses of external functions. To link object files together, they are simply listed on the command line:

     $ gcc main.o hello_fn.o -o hello

This is one of the few occasions where there is no need to use the `-Wall' warning option, since the individual source files have already been successfully compiled to object code. Once the source files have been compiled, linking is an unambiguous process which either succeeds or fails (it fails only if there are references which cannot be resolved).

To perform the linking step `gcc' uses the linker `ld', which is a separate program. On GNU systems the GNU linker, GNU `ld', is used. Other systems may use the GNU linker with GCC, or may have their own linkers. The linker itself will be discussed later (*note How the compiler works::).
By running the linker, `gcc' creates an executable file from the object files. The resulting executable file can now be run:

     $ ./hello
     Hello, world!

It produces the same output as the version of the program using a single source file in the previous section.


File: gccintro.info, Node: Link order of object files, Prev: Creating executables from object files, Up: Compiling files independently

Link order of object files
--------------------------

On Unix-like systems, the traditional behavior of compilers and linkers is to search for external functions from left to right in the object files specified on the command line. This means that the object file which contains the definition of a function should appear after any files which call that function.

In this case, the file `hello_fn.o' containing the function `hello' should be specified after `main.o' itself, since `main' calls `hello':

     $ gcc main.o hello_fn.o -o hello    (correct order)

With some compilers or linkers the opposite ordering would result in an error,

     $ cc hello_fn.o main.o -o hello     (incorrect order)
     main.o: In function `main':
     main.o(.text+0xf): undefined reference to `hello'

because there is no object file containing `hello' after `main.o'.

Most current compilers and linkers will search all object files, regardless of order, but since not all compilers do this it is best to follow the convention of ordering object files from left to right. This is worth keeping in mind if you ever encounter unexpected problems with undefined references, when all the necessary object files appear to be present on the command line.

File: gccintro.info, Node: Recompiling and relinking, Next: Linking with external libraries, Prev: Compiling files independently, Up: Compiling a C program

Recompiling and relinking
=========================

To show how source files can be compiled independently we will edit the main program `main.c' and modify it to print a greeting to `everyone' instead of `world':

     #include "hello.h"

     int
     main (void)
     {
       hello ("everyone");  /* changed from "world" */
       return 0;
     }

The updated file `main.c' can now be recompiled with the following command:

     $ gcc -Wall -c main.c

This produces a new object file `main.o'. There is no need to create a new object file for `hello_fn.c', since that file and the related files that it depends on, such as header files, have not changed.

The new object file can be relinked with the `hello' function to create a new executable file:

     $ gcc main.o hello_fn.o -o hello

The resulting executable `hello' now uses the new `main' function to produce the following output:

     $ ./hello
     Hello, everyone!

Note that only the file `main.c' has been recompiled, and then relinked with the existing object file for the `hello' function. If the file `hello_fn.c' had been modified instead, we could have recompiled `hello_fn.c' to create a new object file `hello_fn.o' and relinked this with the existing file `main.o'.(1)

In general, linking is faster than compilation--in a large project with many source files, recompiling only those that have been modified can make a significant saving. The process of recompiling only the modified files in a project can be automated using `GNU Make' (*note Further reading::).

---------- Footnotes ----------

(1) If the prototype of a function has changed, it is necessary to modify and recompile all of the other source files which use it.

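The GNU Make automation mentioned above can be sketched with a minimal makefile for this two-file project. The rules below are an illustration of the idea, not an excerpt from this manual (recipe lines must begin with a tab character):

```makefile
# Minimal sketch: rebuild only what changed, then relink.
CC = gcc
CFLAGS = -Wall

hello: main.o hello_fn.o
	$(CC) main.o hello_fn.o -o hello

main.o: main.c hello.h
	$(CC) $(CFLAGS) -c main.c

hello_fn.o: hello_fn.c hello.h
	$(CC) $(CFLAGS) -c hello_fn.c
```

Running `make' after editing only `main.c' recompiles `main.c' and relinks the executable, leaving `hello_fn.o' untouched--exactly the manual procedure shown above, performed automatically by comparing file timestamps.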
File: gccintro.info, Node: Linking with external libraries, Next: Using library header files, Prev: Recompiling and relinking, Up: Compiling a C program

Linking with external libraries
===============================

A library is a collection of precompiled object files which can be linked into programs. The most common use of libraries is to provide system functions, such as the square root function `sqrt' found in the C math library.

Libraries are typically stored in special "archive files" with the extension `.a', referred to as "static libraries". They are created from object files with a separate tool, the GNU archiver `ar', and used by the linker to resolve references to functions at link-time. We will see later how to create libraries using the `ar' command (*note Compiler-related tools::). For simplicity, only static libraries are covered in this section--dynamic linking at runtime using "shared libraries" will be described in the next chapter.

The standard system libraries are usually found in the directories `/usr/lib' and `/lib'.(1) For example, the C math library is typically stored in the file `/usr/lib/libm.a' on Unix-like systems. The corresponding prototype declarations for the functions in this library are given in the header file `/usr/include/math.h'. The C standard library itself is stored in `/usr/lib/libc.a' and contains functions specified in the ANSI/ISO C standard, such as `printf'--this library is linked by default for every C program.
Here is an example program which makes a call to the external function `sqrt' in the math library `libm.a':

     #include <math.h>
     #include <stdio.h>

     int
     main (void)
     {
       double x = sqrt (2.0);
       printf ("The square root of 2.0 is %f\n", x);
       return 0;
     }

Trying to create an executable from this source file alone causes the compiler to give an error at the link stage:

     $ gcc -Wall calc.c -o calc
     /tmp/ccbR6Ojm.o: In function `main':
     /tmp/ccbR6Ojm.o(.text+0x19): undefined reference to `sqrt'

The problem is that the reference to the `sqrt' function cannot be resolved without the external math library `libm.a'. The function `sqrt' is not defined in the program or the default library `libc.a', and the compiler does not link to the file `libm.a' unless it is explicitly selected. Incidentally, the file mentioned in the error message, `/tmp/ccbR6Ojm.o', is a temporary object file created by the compiler from `calc.c' in order to carry out the linking process.

To enable the compiler to link the `sqrt' function to the main program `calc.c' we need to supply the library `libm.a'. One obvious but cumbersome way to do this is to specify it explicitly on the command line:

     $ gcc -Wall calc.c /usr/lib/libm.a -o calc

The library `libm.a' contains object files for all the mathematical functions, such as `sin', `cos', `exp', `log' and `sqrt'. The linker searches through these to find the object file containing the `sqrt' function. Once the object file for the `sqrt' function has been found, the main program can be linked and a complete executable produced:

     $ ./calc
     The square root of 2.0 is 1.414214

The executable file includes the machine code for the main function and the machine code for the `sqrt' function, copied from the corresponding object file in the library `libm.a'.

To avoid the need to specify long paths on the command line, the compiler provides a short-cut option `-l' for linking against libraries.
For example, the following command,

     $ gcc -Wall calc.c -lm -o calc

is equivalent to the original command above using the full library name `/usr/lib/libm.a'.

In general, the compiler option `-lNAME' will attempt to link object files with a library file `libNAME.a' in the standard library directories. Additional directories can be specified with command-line options and environment variables, to be discussed shortly. A large program will typically use many `-l' options to link libraries such as the math library, graphics libraries and networking libraries.

* Menu:

* Link order of libraries::

---------- Footnotes ----------

(1) On systems supporting both 64-bit and 32-bit executables the 64-bit versions of the libraries will often be stored in `/usr/lib64' and `/lib64', with the 32-bit versions in `/usr/lib' and `/lib'.


File: gccintro.info, Node: Link order of libraries, Up: Linking with external libraries

Link order of libraries
-----------------------

The ordering of libraries on the command line follows the same convention as for object files: they are searched from left to right--a library containing the definition of a function should appear after any source files or object files which use it. This includes libraries specified with the short-cut `-l' option, as shown in the following command:

     $ gcc -Wall calc.c -lm -o calc    (correct order)

With some compilers the opposite ordering (placing the `-lm' option before the file which uses it) would result in an error,

     $ cc -Wall -lm calc.c -o calc     (incorrect order)
     main.o: In function `main':
     main.o(.text+0xf): undefined reference to `sqrt'

because there is no library or object file containing `sqrt' after `calc.c'. The option `-lm' should appear after the file `calc.c'.

When several libraries are being used, the same convention should be followed for the libraries themselves. A library which calls an external function defined in another library should appear before the library containing the function.
For example, a program `data.c' using the GNU Linear Programming library `libglpk.a', which in turn uses the math library `libm.a', should be compiled as,

     $ gcc -Wall data.c -lglpk -lm

since the object files in `libglpk.a' use functions defined in `libm.a'.

As for object files, most current compilers will search all libraries, regardless of order. However, since not all compilers do this it is best to follow the convention of ordering libraries from left to right.


File: gccintro.info, Node: Using library header files, Prev: Linking with external libraries, Up: Compiling a C program

Using library header files
==========================

When using a library it is essential to include the appropriate header files, in order to declare the function arguments and return values with the correct types. Without declarations, the arguments of a function can be passed with the wrong type, causing corrupted results.

The following example shows another program which makes a function call to the C math library. In this case, the function `pow' is used to compute the cube of two (2 raised to the power of 3):

     #include <stdio.h>

     int
     main (void)
     {
       double x = pow (2.0, 3.0);
       printf ("Two cubed is %f\n", x);
       return 0;
     }

However, the program contains an error--the `#include' statement for `math.h' is missing, so the prototype `double pow (double x, double y)' given there will not be seen by the compiler.
Compiling the program without any warning options will produce an executable file which gives incorrect results:

     $ gcc badpow.c -lm
     $ ./a.out
     Two cubed is 2.851120    (incorrect result, should be 8)

The results are corrupted because the arguments and return value of the call to `pow' are passed with incorrect types.(1) This can be detected by turning on the warning option `-Wall':

     $ gcc -Wall badpow.c -lm
     badpow.c: In function `main':
     badpow.c:6: warning: implicit declaration of function `pow'

This example shows again the importance of using the warning option `-Wall' to detect serious problems that could otherwise easily be overlooked.

---------- Footnotes ----------

(1) The actual output shown above may differ, depending on the specific platform and environment.


File: gccintro.info, Node: Compilation options, Next: Using the preprocessor, Prev: Compiling a C program, Up: Top

Compilation options
*******************

This chapter describes other commonly-used compiler options available in GCC. These options control features such as the search paths used for locating libraries and include files, the use of additional warnings and diagnostics, preprocessor macros and C language dialects.

* Menu:

* Setting search paths::
* Shared libraries and static libraries::
* C language standards::
* Warning options in -Wall::
* Additional warning options::


File: gccintro.info, Node: Setting search paths, Next: Shared libraries and static libraries, Up: Compilation options

Setting search paths
====================

In the last chapter, we saw how to link a program with functions in the C math library `libm.a', using the short-cut option `-lm' and the header file `math.h'.

A common problem when compiling a program using library header files is the error:

     FILE.H: No such file or directory

This occurs if a header file is not present in the standard include file directories used by `gcc'.
A similar problem can occur for libraries:

     /usr/bin/ld: cannot find LIBRARY

This happens if a library used for linking is not present in the standard library directories used by `gcc'.

By default, `gcc' searches the following directories for header files:

     /usr/local/include/
     /usr/include/

and the following directories for libraries:

     /usr/local/lib/
     /usr/lib/

The list of directories for header files is often referred to as the "include path", and the list of directories for libraries as the "library search path" or "link path". The directories on these paths are searched in order, from first to last in the two lists above.(1) For example, a header file found in `/usr/local/include' takes precedence over a file with the same name in `/usr/include'. Similarly, a library found in `/usr/local/lib' takes precedence over a library with the same name in `/usr/lib'.

When additional libraries are installed in other directories it is necessary to extend the search paths, in order for the libraries to be found. The compiler options `-I' and `-L' add new directories to the beginning of the include path and library search path respectively.

* Menu:

* Search path example::
* Environment variables::
* Extended search paths::

---------- Footnotes ----------

(1) The default search paths may also include additional system-dependent or site-specific directories, and directories in the GCC installation itself. For example, on 64-bit platforms additional `lib64' directories may also be searched by default.


File: gccintro.info, Node: Search path example, Next: Environment variables, Up: Setting search paths

Search path example
-------------------

The following example program uses a library that might be installed as an additional package on a system--the GNU Database Management Library (GDBM). The GDBM Library stores key-value pairs in a DBM file, a type of data file which allows values to be stored and indexed by a "key" (an arbitrary sequence of characters).
Here is the example program `dbmain.c', which creates a DBM file containing a key `testkey' with the value `testvalue':

     #include <stdio.h>
     #include <gdbm.h>

     int
     main (void)
     {
       GDBM_FILE dbf;
       datum key = { "testkey", 7 };     /* key, length */
       datum value = { "testvalue", 9 }; /* value, length */

       printf ("Storing key-value pair... ");
       dbf = gdbm_open ("test", 0, GDBM_NEWDB, 0644, 0);
       gdbm_store (dbf, key, value, GDBM_INSERT);
       gdbm_close (dbf);
       printf ("done.\n");
       return 0;
     }

The program uses the header file `gdbm.h' and the library `libgdbm.a'. If the library has been installed in the default location of `/usr/local/lib', with the header file in `/usr/local/include', then the program can be compiled with the following simple command:

     $ gcc -Wall dbmain.c -lgdbm

Both these directories are part of the default `gcc' include and link paths.

However, if GDBM has been installed in a different location, trying to compile the program will give the following error:

     $ gcc -Wall dbmain.c -lgdbm
     dbmain.c:1: gdbm.h: No such file or directory

For example, if version 1.8.3 of the GDBM package is installed under the directory `/opt/gdbm-1.8.3' the location of the header file would be,

     /opt/gdbm-1.8.3/include/gdbm.h

which is not part of the default `gcc' include path. Adding the appropriate directory to the include path with the command-line option `-I' allows the program to be compiled, but not linked:

     $ gcc -Wall -I/opt/gdbm-1.8.3/include dbmain.c -lgdbm
     /usr/bin/ld: cannot find -lgdbm
     collect2: ld returned 1 exit status

The directory containing the library is still missing from the link path. It can be added to the link path using the following option:

     -L/opt/gdbm-1.8.3/lib/

The following command line allows the program to be compiled and linked:

     $ gcc -Wall -I/opt/gdbm-1.8.3/include -L/opt/gdbm-1.8.3/lib dbmain.c -lgdbm

This produces the final executable linked to the GDBM library.
Before seeing how to run this executable we will take a brief look at the environment variables that affect the `-I' and `-L' options.

Note that you should never place the absolute paths of header files in `#include' statements in your source code, as this will prevent the program from compiling on other systems. The `-I' option or the `C_INCLUDE_PATH' variable described below should always be used to set the include path for header files.


File: gccintro.info, Node: Environment variables, Next: Extended search paths, Prev: Search path example, Up: Setting search paths

Environment variables
---------------------

The search paths for header files and libraries can also be controlled through environment variables in the shell. These may be set automatically for each session using the appropriate login file, such as `.bash_profile'.

Additional directories can be added to the include path using the environment variable `C_INCLUDE_PATH' (for C header files) or `CPLUS_INCLUDE_PATH' (for C++ header files). For example, the following commands will add `/opt/gdbm-1.8.3/include' to the include path when compiling C programs:

     $ C_INCLUDE_PATH=/opt/gdbm-1.8.3/include
     $ export C_INCLUDE_PATH

This directory will be searched after any directories specified on the command line with the option `-I', and before the standard default directories `/usr/local/include' and `/usr/include'. The shell command `export' is needed to make the environment variable available to programs outside the shell itself, such as the compiler--it is only needed once for each variable in each shell session, and can also be set in the appropriate login file.

Similarly, additional directories can be added to the link path using the environment variable `LIBRARY_PATH'.
For example, the following commands will add `/opt/gdbm-1.8.3/lib' to the link path:

     $ LIBRARY_PATH=/opt/gdbm-1.8.3/lib
     $ export LIBRARY_PATH

This directory will be searched after any directories specified on the command line with the option `-L', and before the standard default directories `/usr/local/lib' and `/usr/lib'.

With the environment variable settings given above the program `dbmain.c' can be compiled without the `-I' and `-L' options,

     $ gcc -Wall dbmain.c -lgdbm

because the default paths now use the directories specified in the environment variables `C_INCLUDE_PATH' and `LIBRARY_PATH'.


File: gccintro.info, Node: Extended search paths, Prev: Environment variables, Up: Setting search paths

Extended search paths
---------------------

Following the standard Unix convention for search paths, several directories can be specified together in an environment variable as a colon-separated list:

     DIR1:DIR2:DIR3:...

The directories are then searched in order from left to right. A single dot `.' can be used to specify the current directory.(1)

For example, the following settings create default include and link paths for packages installed in the current directory `.' and the `include' and `lib' directories under `/opt/gdbm-1.8.3' and `/net' respectively:

     $ C_INCLUDE_PATH=.:/opt/gdbm-1.8.3/include:/net/include
     $ LIBRARY_PATH=.:/opt/gdbm-1.8.3/lib:/net/lib

To specify multiple search path directories on the command line, the options `-I' and `-L' can be repeated. For example, the following command,

     $ gcc -I. -I/opt/gdbm-1.8.3/include -I/net/include -L. -L/opt/gdbm-1.8.3/lib -L/net/lib .....

is equivalent to the environment variable settings given above.

When environment variables and command-line options are used together the compiler searches the directories in the following order:

  1. command-line options `-I' and `-L', from left to right

  2. directories specified by environment variables, such as `C_INCLUDE_PATH' and `LIBRARY_PATH'

  3. default system directories

In day-to-day usage, directories are usually added to the search paths with the options `-I' and `-L'.

---------- Footnotes ----------

(1) The current directory can also be specified using an empty path element. For example, `:DIR1:DIR2' is equivalent to `.:DIR1:DIR2'.


File: gccintro.info, Node: Shared libraries and static libraries, Next: C language standards, Prev: Setting search paths, Up: Compilation options

Shared libraries and static libraries
=====================================

Although the example program above has been successfully compiled and linked, a final step is needed before being able to load and run the executable file. If an attempt is made to start the executable directly, the following error will occur on most systems:

     $ ./a.out
     ./a.out: error while loading shared libraries: libgdbm.so.3: cannot open shared object file: No such file or directory

This is because the GDBM package provides a "shared library". This type of library requires special treatment--it must be loaded from disk before the executable will run.

External libraries are usually provided in two forms: "static libraries" and "shared libraries". Static libraries are the `.a' files seen earlier. When a program is linked against a static library, the machine code from the object files for any external functions used by the program is copied from the library into the final executable.

Shared libraries are handled with a more advanced form of linking, which makes the executable file smaller. They use the extension `.so', which stands for "shared object". An executable file linked against a shared library contains only a small table of the functions it requires, instead of the complete machine code from the object files for the external functions.
Before the executable file starts running, the machine code for the external functions is copied into memory from the shared library file on disk by the operating system--a process referred to as "dynamic linking".

Dynamic linking makes executable files smaller and saves disk space, because one copy of a library can be shared between multiple programs. Most operating systems also provide a virtual memory mechanism which allows one copy of a shared library in physical memory to be used by all running programs, saving memory as well as disk space. Furthermore, shared libraries make it possible to update a library without recompiling the programs which use it (provided the interface to the library does not change).

Because of these advantages `gcc' compiles programs to use shared libraries by default on most systems, if they are available. Whenever a static library `libNAME.a' would be used for linking with the option `-lNAME' the compiler first checks for an alternative shared library with the same name and a `.so' extension.

In this case, when the compiler searches for the `libgdbm' library in the link path, it finds the following two files in the directory `/opt/gdbm-1.8.3/lib':

     $ cd /opt/gdbm-1.8.3/lib
     $ ls libgdbm.*
     libgdbm.a  libgdbm.so

Consequently, the `libgdbm.so' shared object file is used in preference to the `libgdbm.a' static library.

However, when the executable file is started its loader function must find the shared library in order to load it into memory. By default the loader searches for shared libraries only in a predefined set of system directories, such as `/usr/local/lib' and `/usr/lib'. If the library is not located in one of these directories it must be added to the load path.(1)

The simplest way to set the load path is through the environment variable `LD_LIBRARY_PATH'.
For example, the following commands set the load path to `/opt/gdbm-1.8.3/lib' so that `libgdbm.so' can be found:

     $ LD_LIBRARY_PATH=/opt/gdbm-1.8.3/lib
     $ export LD_LIBRARY_PATH
     $ ./a.out
     Storing key-value pair... done.

The executable now runs successfully, prints its message and creates a DBM file called `test' containing the key-value pair `testkey' and `testvalue'.

To save typing, the `LD_LIBRARY_PATH' environment variable can be set once for each session in the appropriate login file, such as `.bash_profile' for the GNU Bash shell.

Several shared library directories can be placed in the load path, as a colon-separated list `DIR1:DIR2:DIR3:...:DIRN'. For example, the following command sets the load path to use the `lib' directories under `/opt/gdbm-1.8.3' and `/opt/gtk-1.4':

     $ LD_LIBRARY_PATH=/opt/gdbm-1.8.3/lib:/opt/gtk-1.4/lib
     $ export LD_LIBRARY_PATH

If the load path contains existing entries, it can be extended using the syntax `LD_LIBRARY_PATH=NEWDIRS:$LD_LIBRARY_PATH'. For example, the following command adds the directory `/opt/gsl-1.5/lib' to the load path shown above:

     $ LD_LIBRARY_PATH=/opt/gsl-1.5/lib:$LD_LIBRARY_PATH
     $ echo $LD_LIBRARY_PATH
     /opt/gsl-1.5/lib:/opt/gdbm-1.8.3/lib:/opt/gtk-1.4/lib

It is possible for the system administrator to set the `LD_LIBRARY_PATH' variable for all users, by adding it to a default login script, such as `/etc/profile'. On GNU systems, a system-wide path can also be defined in the loader configuration file `/etc/ld.so.conf'.

Alternatively, static linking can be forced with the `-static' option to `gcc' to avoid the use of shared libraries:

     $ gcc -Wall -static -I/opt/gdbm-1.8.3/include/ -L/opt/gdbm-1.8.3/lib/ dbmain.c -lgdbm

This creates an executable linked with the static library `libgdbm.a' which can be run without setting the environment variable `LD_LIBRARY_PATH' or putting shared libraries in the default directories:

     $ ./a.out
     Storing key-value pair... done.
As noted earlier, it is also possible to link directly with individual library files by specifying the full path to the library on the command line. For example, the following command will link directly with the static library `libgdbm.a',

     $ gcc -Wall -I/opt/gdbm-1.8.3/include dbmain.c /opt/gdbm-1.8.3/lib/libgdbm.a

and the command below will link with the shared library file `libgdbm.so':

     $ gcc -Wall -I/opt/gdbm-1.8.3/include dbmain.c /opt/gdbm-1.8.3/lib/libgdbm.so

In the latter case it is still necessary to set the library load path when running the executable.

---------- Footnotes ----------

(1) Note that the directory containing the shared library can, in principle, be stored ("hard-coded") in the executable itself using the linker option `-rpath', but this is not usually done since it creates problems if the library is moved or the executable is copied to another system.


File: gccintro.info, Node: C language standards, Next: Warning options in -Wall, Prev: Shared libraries and static libraries, Up: Compilation options

C language standards
====================

By default, `gcc' compiles programs using the GNU dialect of the C language, referred to as "GNU C". This dialect incorporates the official ANSI/ISO standard for the C language with several useful GNU extensions, such as nested functions and variable-size arrays. Most ANSI/ISO programs will compile under GNU C without changes.

There are several options which control the dialect of C used by `gcc'. The most commonly-used options are `-ansi' and `-pedantic'. The specific dialects of the C language for each standard can also be selected with the `-std' option.

* Menu:

* ANSI/ISO::
* Strict ANSI/ISO::
* Selecting specific standards::


File: gccintro.info, Node: ANSI/ISO, Next: Strict ANSI/ISO, Up: C language standards

ANSI/ISO
--------

Occasionally a valid ANSI/ISO program may be incompatible with the extensions in GNU C.
To deal with this situation, the compiler option `-ansi' disables those GNU extensions which conflict with the ANSI/ISO standard. On systems using the GNU C Library (`glibc') it also disables extensions to the C standard library. This allows programs written for ANSI/ISO C to be compiled without any unwanted effects from GNU extensions.

For example, here is a valid ANSI/ISO C program which uses a variable called `asm':

     #include <stdio.h>

     int
     main (void)
     {
       const char asm[] = "6502";
       printf ("the string asm is '%s'\n", asm);
       return 0;
     }

The variable name `asm' is valid under the ANSI/ISO standard, but this program will not compile in GNU C because `asm' is a GNU C keyword extension (it allows native assembly instructions to be used in C functions). Consequently, it cannot be used as a variable name without giving a compilation error:

     $ gcc -Wall ansi.c
     ansi.c: In function `main':
     ansi.c:6: parse error before `asm'
     ansi.c:7: parse error before `asm'

In contrast, using the `-ansi' option disables the `asm' keyword extension, and allows the program above to be compiled correctly:

     $ gcc -Wall -ansi ansi.c
     $ ./a.out
     the string asm is '6502'

For reference, the non-standard keywords and macros defined by the GNU C extensions are `asm', `inline', `typeof', `unix' and `vax'. More details can be found in the GCC Reference Manual "`Using GCC'" (*note Further reading::).

The next example shows the effect of the `-ansi' option on systems using the GNU C Library, such as GNU/Linux systems. The program below prints the value of pi, pi = 3.14159..., from the preprocessor definition `M_PI' in the header file `math.h':

     #include <math.h>
     #include <stdio.h>

     int
     main (void)
     {
       printf ("the value of pi is %f\n", M_PI);
       return 0;
     }

The constant `M_PI' is not part of the ANSI/ISO C standard library (it comes from the BSD version of Unix).
In this case, the program will not compile with the `-ansi' option:

     $ gcc -Wall -ansi pi.c
     pi.c: In function `main':
     pi.c:7: `M_PI' undeclared (first use in this function)
     pi.c:7: (Each undeclared identifier is reported only once
     pi.c:7: for each function it appears in.)

The program can be compiled without the `-ansi' option. In this case both the language and library extensions are enabled by default:

     $ gcc -Wall pi.c
     $ ./a.out
     the value of pi is 3.141593

It is also possible to compile the program using ANSI/ISO C, by enabling only the extensions in the GNU C Library itself. This can be achieved by defining special macros, such as `_GNU_SOURCE', which enable extensions in the GNU C Library:(1)

     $ gcc -Wall -ansi -D_GNU_SOURCE pi.c
     $ ./a.out
     the value of pi is 3.141593

The GNU C Library provides a number of these macros (referred to as "feature test macros") which allow control over the support for POSIX extensions (`_POSIX_C_SOURCE'), BSD extensions (`_BSD_SOURCE'), SVID extensions (`_SVID_SOURCE'), XOPEN extensions (`_XOPEN_SOURCE') and GNU extensions (`_GNU_SOURCE'). The `_GNU_SOURCE' macro enables all the extensions together, with the POSIX extensions taking precedence over the others in cases where they conflict. Further information about feature test macros can be found in the `GNU C Library Reference Manual' (*note Further reading::).

---------- Footnotes ----------

(1) The `-D' option for defining macros will be explained in detail in the next chapter.


File: gccintro.info, Node: Strict ANSI/ISO, Next: Selecting specific standards, Prev: ANSI/ISO, Up: C language standards

Strict ANSI/ISO
---------------

The command-line option `-pedantic' in combination with `-ansi' will cause `gcc' to reject all GNU C extensions, not just those that are incompatible with the ANSI/ISO standard. This helps you to write portable programs which follow the ANSI/ISO standard.

Here is a program which uses variable-size arrays, a GNU C extension.
The array `x[n]' is declared with a length specified by the integer variable `n':

     int
     main (int argc, char *argv[])
     {
       int i, n = argc;
       double x[n];

       for (i = 0; i < n; i++)
         x[i] = i;

       return 0;
     }

This program will compile with `-ansi', because support for variable-length arrays does not interfere with the compilation of valid ANSI/ISO programs--it is a backwards-compatible extension:

     $ gcc -Wall -ansi gnuarray.c

However, compiling with `-ansi -pedantic' reports warnings about violations of the ANSI/ISO standard:

     $ gcc -Wall -ansi -pedantic gnuarray.c
     gnuarray.c: In function `main':
     gnuarray.c:5: warning: ISO C90 forbids variable-size array `x'

Note that an absence of warnings from `-ansi -pedantic' does not guarantee that a program strictly conforms to the ANSI/ISO standard. The standard itself specifies only a limited set of circumstances that should generate diagnostics, and these are what `-ansi -pedantic' reports.


File: gccintro.info, Node: Selecting specific standards, Prev: Strict ANSI/ISO, Up: C language standards

Selecting specific standards
----------------------------

The specific language standard used by GCC can be controlled with the `-std' option. The following C language standards are supported:

`-std=c89' or `-std=iso9899:1990'
     The original ANSI/ISO C language standard (ANSI X3.159-1989, ISO/IEC 9899:1990). GCC incorporates the corrections in the two ISO Technical Corrigenda to the original standard.

`-std=iso9899:199409'
     The ISO C language standard with ISO Amendment 1, published in 1994. This amendment was mainly concerned with internationalization, such as adding support for multibyte characters to the C library.

`-std=c99' or `-std=iso9899:1999'
     The revised ISO C language standard, published in 1999 (ISO/IEC 9899:1999).

The C language standards with GNU extensions can be selected with the options `-std=gnu89' and `-std=gnu99'.

File: gccintro.info,  Node: Warning options in -Wall,  Next: Additional warning options,  Prev: C language standards,  Up: Compilation options

Warning options in `-Wall'
==========================

As described earlier (*note Compiling a simple C program::), the
warning option `-Wall' enables warnings for many common errors, and
should always be used.  It combines a large number of other, more
specific, warning options which can also be selected individually.
Here is a summary of these options:

`-Wcomment' (included in `-Wall')
     This option warns about nested comments.  Nested comments
     typically arise when a section of code containing comments is
     later "commented out":

          /* commented out
          double x = 1.23 ;  /* x-position */
          */

     Nested comments can be a source of confusion--the safe way to
     "comment out" a section of code containing comments is to
     surround it with the preprocessor directive `#if 0 ... #endif':

          /* commented out */
          #if 0
          double x = 1.23 ;  /* x-position */
          #endif

`-Wformat' (included in `-Wall')
     This option warns about the incorrect use of format strings in
     functions such as `printf' and `scanf', where the format
     specifier does not agree with the type of the corresponding
     function argument.

`-Wunused' (included in `-Wall')
     This option warns about unused variables.  When a variable is
     declared but not used this can be the result of another variable
     being accidentally substituted in its place.  If the variable is
     genuinely not needed it can be removed from the source code.

`-Wimplicit' (included in `-Wall')
     This option warns about any functions that are used without being
     declared.  The most common reason for a function to be used
     without being declared is forgetting to include a header file.

`-Wreturn-type' (included in `-Wall')
     This option warns about functions that are defined without a
     return type but not declared `void'.  It also catches empty
     `return' statements in functions that are not declared `void'.
     For example, the following program does not use an explicit
     return value:

          #include <stdio.h>

          int
          main (void)
          {
            printf ("hello world\n");
            return;
          }

     The lack of a return value in the code above could be the result
     of an accidental omission by the programmer--the value returned
     by the main function is actually the return value of the
     `printf' function (the number of characters printed).  To avoid
     ambiguity, it is preferable to use an explicit value in the
     return statement, either as a variable or a constant, such as
     `return 0'.

The complete set of warning options included in `-Wall' can be found
in the GCC Reference Manual "`Using GCC'" (*note Further reading::).
The options included in `-Wall' have the common characteristic that
they report constructions which are always wrong, or can easily be
rewritten in an unambiguously correct way.  This is why they are so
useful--any warning produced by `-Wall' can be taken as an indication
of a potentially serious problem.

File: gccintro.info,  Node: Additional warning options,  Prev: Warning options in -Wall,  Up: Compilation options

Additional warning options
==========================

GCC provides many other warning options that are not included in
`-Wall', but are often useful.  Typically these produce warnings for
source code which may be technically valid but is very likely to
cause problems.  The criteria for these options are based on
experience of common errors--they are not included in `-Wall' because
they only indicate possibly problematic or "suspicious" code.

   Since these warnings can be issued for valid code it is not
necessary to compile with them all the time.  It is more appropriate
to use them periodically and review the results, checking for
anything unexpected, or to enable them for some programs or files.
`-W'
     This is a general option similar to `-Wall' which warns about a
     selection of common programming errors, such as functions which
     can return without a value (also known as "falling off the end
     of the function body"), and comparisons between signed and
     unsigned values.  For example, the following function tests
     whether an unsigned integer is negative (which is impossible, of
     course):

          int
          foo (unsigned int x)
          {
            if (x < 0)
              return 0;  /* cannot occur */
            else
              return 1;
          }

     Compiling this function with `-Wall' does not produce a warning,

          $ gcc -Wall -c w.c

     but does give a warning with `-W':

          $ gcc -W -c w.c
          w.c: In function `foo':
          w.c:4: warning: comparison of unsigned
          expression < 0 is always false

     In practice, the options `-W' and `-Wall' are normally used
     together.

`-Wconversion'
     This option warns about implicit type conversions that could
     cause unexpected results.  For example, the assignment of a
     negative value to an unsigned variable, as in the following code,

          unsigned int x = -1;

     is technically allowed by the ANSI/ISO C standard (with the
     negative integer being converted to a positive integer,
     according to the machine representation) but could be a simple
     programming error.  If you need to perform such a conversion you
     can use an explicit cast, such as `((unsigned int) -1)', to
     avoid any warnings from this option.  On two's-complement
     machines the result of the cast gives the maximum number that
     can be represented by an unsigned integer.

`-Wshadow'
     This option warns about the redeclaration of a variable name in
     a scope where it has already been declared.  This is referred to
     as variable "shadowing", and causes confusion about which
     occurrence of the variable corresponds to which value.  The
     following function declares a local variable `y' that shadows
     the declaration in the body of the function:

          double
          test (double x)
          {
            double y = 1.0;

            {
              double y;
              y = x;
            }

            return y;
          }

     This is valid ANSI/ISO C, where the return value is 1.
     The shadowing of the variable `y' might make it seem
     (incorrectly) that the return value is `x', when looking at the
     line `y = x' (especially in a large and complicated function).

     Shadowing can also occur for function names.  For example, the
     following program attempts to define a variable `sin' which
     shadows the standard function `sin(x)':

          double
          sin_series (double x)
          {
            /* series expansion for small x */
            double sin = x * (1.0 - x * x / 6.0);
            return sin;
          }

     This error will be detected by the `-Wshadow' option.

`-Wcast-qual'
     This option warns about pointers that are cast to remove a type
     qualifier, such as `const'.  For example, the following function
     discards the `const' qualifier from its input argument, allowing
     it to be overwritten:

          void
          f (const char * str)
          {
            char * s = (char *)str;
            s[0] = '\0';
          }

     The modification of the original contents of `str' is a
     violation of its `const' property.  This option will warn about
     the improper cast of the variable `str' which allows the string
     to be modified.

`-Wwrite-strings'
     This option implicitly gives all string constants defined in the
     program a `const' qualifier, causing a compile-time warning if
     there is an attempt to overwrite them.  The result of modifying
     a string constant is not defined by the ANSI/ISO standard, and
     the use of writable string constants is deprecated in GCC.

`-Wtraditional'
     This option warns about parts of the code which would be
     interpreted differently by an ANSI/ISO compiler and a
     "traditional" pre-ANSI compiler.(1)  When maintaining legacy
     software it may be necessary to investigate whether the
     traditional or ANSI/ISO interpretation was intended in the
     original code for warnings generated by this option.

The options above produce diagnostic warning messages, but allow the
compilation to continue and produce an object file or executable.
For large programs it can be desirable to catch all the warnings by
stopping the compilation whenever a warning is generated.
The `-Werror' option changes the default behavior by converting
warnings into errors, stopping the compilation whenever a warning
occurs.

   ---------- Footnotes ----------

   (1) The traditional form of the C language was described in the
original C reference manual "`The C Programming Language (First
Edition)'" by Kernighan and Ritchie.

File: gccintro.info,  Node: Using the preprocessor,  Next: Compiling for debugging,  Prev: Compilation options,  Up: Top

Using the preprocessor
**********************

This chapter describes the use of the GNU C preprocessor `cpp', which
is part of the GCC package.  The preprocessor expands macros in
source files before they are compiled.  It is automatically called
whenever GCC processes a C or C++ program.(1)

* Menu:

* Defining macros::
* Macros with values::
* Preprocessing source files::

   ---------- Footnotes ----------

   (1) In recent versions of GCC the preprocessor is integrated into
the compiler, although a separate `cpp' command is also provided.

File: gccintro.info,  Node: Defining macros,  Next: Macros with values,  Up: Using the preprocessor

Defining macros
===============

The following program demonstrates the most common use of the C
preprocessor.  It uses the preprocessor conditional `#ifdef' to check
whether a macro is defined.  When the macro is defined, the
preprocessor includes the corresponding code up to the closing
`#endif' command.  In this example, the macro which is tested is
called `TEST', and the conditional part of the source code is a
`printf' statement which prints the message "`Test mode'":

     #include <stdio.h>

     int
     main (void)
     {
     #ifdef TEST
       printf ("Test mode\n");
     #endif
       printf ("Running...\n");
       return 0;
     }

The `gcc' option `-DNAME' defines a preprocessor macro `NAME' from
the command line.  If the program above is compiled with the
command-line option `-DTEST', the macro `TEST' will be defined and
the resulting executable will print both messages:

     $ gcc -Wall -DTEST dtest.c
     $ ./a.out
     Test mode
     Running...
If the same program is compiled without the `-D' option then the
"`Test mode'" message is omitted from the source code after
preprocessing, and the final executable does not include the code for
it:

     $ gcc -Wall dtest.c
     $ ./a.out
     Running...

Macros are generally undefined, unless specified on the command line
with the option `-D', or in a source file (or library header file)
with `#define'.  Some macros are automatically defined by the
compiler--these typically use a reserved namespace beginning with a
double-underscore prefix `__'.

   The complete set of predefined macros can be listed by running the
GNU preprocessor `cpp' with the option `-dM' on an empty file:

     $ cpp -dM /dev/null
     #define __i386__ 1
     #define __i386 1
     #define i386 1
     #define __unix 1
     #define __unix__ 1
     #define __ELF__ 1
     #define unix 1
     .......

Note that this list includes a small number of system-specific macros
defined by `gcc' which do not use the double-underscore prefix.
These non-standard macros can be disabled with the `-ansi' option of
`gcc'.

File: gccintro.info,  Node: Macros with values,  Next: Preprocessing source files,  Prev: Defining macros,  Up: Using the preprocessor

Macros with values
==================

In addition to being defined, a macro can also be given a concrete
value.  This value is inserted into the source code at each point
where the macro occurs.  The following program uses a macro `NUM' to
represent a number which will be printed:

     #include <stdio.h>

     int
     main (void)
     {
       printf ("Value of NUM is %d\n", NUM);
       return 0;
     }

Note that macros are not expanded inside strings--only the occurrence
of `NUM' outside the string is substituted by the preprocessor.

   To define a macro with a value, the `-D' command-line option can
be used in the form `-DNAME=VALUE'.  For example, the following
command line defines `NUM' to be 100 when compiling the program
above:

     $ gcc -Wall -DNUM=100 dtestval.c
     $ ./a.out
     Value of NUM is 100

This example uses a number, but a macro can take values of any form.
Whatever the value of the macro is, it is inserted directly into the
source code at the point where the macro name occurs.  For example,
the following definition expands the occurrences of `NUM' to `2+2'
during preprocessing:

     $ gcc -Wall -DNUM="2+2" dtestval.c
     $ ./a.out
     Value of NUM is 4

After the preprocessor has made the substitution `NUM ==> 2+2' this
is equivalent to compiling the following program:

     #include <stdio.h>

     int
     main (void)
     {
       printf ("Value of NUM is %d\n", 2+2);
       return 0;
     }

Note that it is a good idea to surround macros by parentheses
whenever they are part of an expression.  For example, the following
program uses parentheses to ensure the correct precedence for the
multiplication `10*NUM':

     #include <stdio.h>

     int
     main (void)
     {
       printf ("Ten times NUM is %d\n", 10 * (NUM));
       return 0;
     }

With these parentheses, it produces the expected result when compiled
with the same command line as above:

     $ gcc -Wall -DNUM="2+2" dtestmul10.c
     $ ./a.out
     Ten times NUM is 40

Without parentheses, the program would produce the value `22' from
the literal form of the expression `10*2+2 = 22', instead of the
desired value `10*(2+2) = 40'.

   When a macro is defined with `-D' alone, `gcc' uses a default
value of `1'.  For example, compiling the original test program with
the option `-DNUM' generates an executable which produces the
following output:

     $ gcc -Wall -DNUM dtestval.c
     $ ./a.out
     Value of NUM is 1

A macro can be defined to an empty value using quotes on the command
line, `-DNAME=""'.  Such a macro is still treated as defined by
conditionals such as `#ifdef', but expands to nothing.

   A macro containing quotes can be defined using shell-escaped quote
characters.  For example, the command-line option
`-DMESSAGE="\"Hello, World!\""' defines a macro `MESSAGE' which
expands to the sequence of characters `"Hello, World!"'.  For an
explanation of the different types of quoting and escaping used in
the shell see the "`GNU Bash Reference Manual'", *note Further
reading::.
File: gccintro.info,  Node: Preprocessing source files,  Prev: Macros with values,  Up: Using the preprocessor

Preprocessing source files
==========================

It is possible to see the effect of the preprocessor on source files
directly, using the `-E' option of `gcc'.  For example, the file
below defines and uses a macro `TEST':

     #define TEST "Hello, World!"
     const char str[] = TEST;

If this file is called `test.c' the effect of the preprocessor can be
seen with the following command line:

     $ gcc -E test.c
     # 1 "test.c"
     const char str[] = "Hello, World!" ;

The `-E' option causes `gcc' to run the preprocessor, display the
expanded output, and then exit without compiling the resulting source
code.  The value of the macro `TEST' is substituted directly into the
output, producing the sequence of characters `const char str[] =
"Hello, World!" ;'.

   The preprocessor also inserts lines recording the source file and
line numbers in the form `# LINE-NUMBER "SOURCE-FILE"', to aid in
debugging and allow the compiler to issue error messages referring to
this information.  These lines do not affect the program itself.

   The ability to see the preprocessed source files can be useful for
examining the effect of system header files, and finding declarations
of system functions.  The following program includes the header file
`stdio.h' to obtain the declaration of the function `printf':

     #include <stdio.h>

     int
     main (void)
     {
       printf ("Hello, world!\n");
       return 0;
     }

It is possible to see the declarations from the included header file
by preprocessing the file with `gcc -E':

     $ gcc -E hello.c

On a GNU system, this produces output similar to the following:

     # 1 "hello.c"
     # 1 "/usr/include/stdio.h" 1 3
     extern FILE *stdin;
     extern FILE *stdout;
     extern FILE *stderr;

     extern int fprintf (FILE * __stream, const char * __format, ...) ;
     extern int printf (const char * __format, ...) ;

     [ ... additional declarations ...
     ]
     # 1 "hello.c" 2

     int
     main (void)
     {
       printf ("Hello, world!\n");
       return 0;
     }

The preprocessed system header files usually generate a lot of
output.  This can be redirected to a file, or saved more conveniently
using the `gcc' `-save-temps' option:

     $ gcc -c -save-temps hello.c

After running this command, the preprocessed output will be available
in the file `hello.i'.  The `-save-temps' option also saves `.s'
assembly files and `.o' object files in addition to preprocessed `.i'
files.

File: gccintro.info,  Node: Compiling for debugging,  Next: Compiling with optimization,  Prev: Using the preprocessor,  Up: Top

Compiling for debugging
***********************

Normally, an executable file does not contain any references to the
original program source code, such as variable names or
line-numbers--the executable file is simply the sequence of machine
code instructions produced by the compiler.  This is insufficient for
debugging, since there is no easy way to find the cause of an error
if the program crashes.

   GCC provides the `-g' "debug option" to store additional debugging
information in object files and executables.  This debugging
information allows errors to be traced back from a specific machine
instruction to the corresponding line in the original source file.
It also allows the execution of a program to be traced in a debugger,
such as the GNU Debugger `gdb' (for more information, see "`Debugging
with GDB: The GNU Source-Level Debugger'", *note Further reading::).
Using a debugger also allows the values of variables to be examined
while the program is running.

   The debug option works by storing the names of functions and
variables (and all the references to them), with their corresponding
source code line-numbers, in a "symbol table" in object files and
executables.
* Menu:

* Examining core files::
* Displaying a backtrace::

File: gccintro.info,  Node: Examining core files,  Next: Displaying a backtrace,  Up: Compiling for debugging

Examining core files
====================

In addition to allowing a program to be run under the debugger,
another helpful application of the `-g' option is to find the
circumstances of a program crash.

   When a program exits abnormally the operating system can write out
a "core file", usually named `core', which contains the in-memory
state of the program at the time it crashed.  Combined with
information from the symbol table produced by `-g', the core file can
be used to find the line where the program stopped, and the values of
its variables at that point.

   This is useful both during the development of software, and after
deployment--it allows problems to be investigated when a program has
crashed "in the field".

   Here is a simple program containing an invalid memory access bug,
which we will use to produce a core file:

     int a (int *p);

     int
     main (void)
     {
       int *p = 0;  /* null pointer */
       return a (p);
     }

     int
     a (int *p)
     {
       int y = *p;
       return y;
     }

The program attempts to dereference a null pointer `p', which is an
invalid operation.  On most systems, this will cause a crash.(1)

   In order to be able to find the cause of the crash later, we need
to compile the program with the `-g' option:

     $ gcc -Wall -g null.c

Note that a null pointer will only cause a problem at run-time, so
the option `-Wall' does not produce any warnings.

   Running the executable file on an x86 GNU/Linux system will cause
the operating system to terminate the program abnormally:

     $ ./a.out
     Segmentation fault (core dumped)

Whenever the error message `core dumped' is displayed, the operating
system should produce a file called `core' in the current
directory.(2)  This core file contains a complete copy of the pages
of memory used by the program at the time it was terminated.
Incidentally, the term "segmentation fault" refers to the fact that
the program tried to access a restricted memory "segment" outside the
area of memory which had been allocated to it.

   Some systems are configured not to write core files by default,
since the files can be large and rapidly fill up the available disk
space on a system.  In the `GNU Bash' shell the command `ulimit -c'
controls the maximum size of core files.  If the size limit is zero,
no core files are produced.  The current size limit can be shown by
typing the following command:

     $ ulimit -c
     0

If the result is zero, as shown above, then it can be increased with
the following command to allow core files of any size to be
written:(3)

     $ ulimit -c unlimited

Note that this setting only applies to the current shell.  To set the
limit for future sessions the command should be placed in an
appropriate login file, such as `.bash_profile' for the GNU Bash
shell.

   Core files can be loaded into the GNU Debugger `gdb' with the
following command:

     $ gdb EXECUTABLE-FILE CORE-FILE

Note that both the original executable file and the core file are
required for debugging--it is not possible to debug a core file
without the corresponding executable.  In this example, we can load
the executable and core file with the command:

     $ gdb a.out core

The debugger immediately begins printing diagnostic information, and
shows a listing of the line where the program crashed (line 13):

     $ gdb a.out core
     Core was generated by `./a.out'.
     Program terminated with signal 11, Segmentation fault.
     Reading symbols from /lib/libc.so.6...done.
     Loaded symbols for /lib/libc.so.6
     Reading symbols from /lib/ld-linux.so.2...done.
     Loaded symbols for /lib/ld-linux.so.2
     #0  0x080483ed in a (p=0x0) at null.c:13
     13        int y = *p;
     (gdb)

The final line `(gdb)' is the GNU Debugger prompt--it indicates that
further commands can be entered at this point.
To investigate the cause of the crash, we display the value of the
pointer `p' using the debugger `print' command:

     (gdb) print p
     $1 = (int *) 0x0

This shows that `p' is a null pointer (`0x0') of type `int *', so we
know that dereferencing it with the expression `*p' in this line has
caused the crash.

   ---------- Footnotes ----------

   (1) Historically, a null pointer has typically corresponded to
memory location 0, which is usually restricted to the operating
system kernel and not accessible to user programs.

   (2) Some systems, such as FreeBSD and Solaris, can also be
configured to write core files in specific directories, e.g.
`/var/coredumps/', using the `sysctl' or `coreadm' commands.

   (3) This example uses the `ulimit' command in the GNU Bash shell.
On other systems the usage of the `ulimit' command may vary, or have
a different name (the `tcsh' shell uses the `limit' command instead).
The size limit for core files can also be set to a specific value in
kilobytes.

File: gccintro.info,  Node: Displaying a backtrace,  Prev: Examining core files,  Up: Compiling for debugging

Displaying a backtrace
======================

The debugger can also show the function calls and arguments up to the
current point of execution--this is called a "stack backtrace" and is
displayed with the command `backtrace':

     (gdb) backtrace
     #0  0x080483ed in a (p=0x0) at null.c:13
     #1  0x080483d9 in main () at null.c:7

In this case, the backtrace shows that the crash at line 13 occurred
when the function `a()' was called with an argument of `p=0x0', from
line 7 in `main()'.

   It is possible to move to different levels in the stack trace, and
examine their variables, using the debugger commands `up' and `down'.
A complete description of all the commands available in `gdb' can be
found in the manual "`Debugging with GDB: The GNU Source-Level
Debugger'" (*note Further reading::).
File: gccintro.info,  Node: Compiling with optimization,  Next: Compiling a C++ program,  Prev: Compiling for debugging,  Up: Top

Compiling with optimization
***************************

GCC is an "optimizing" compiler.  It provides a wide range of options
which aim to increase the speed, or reduce the size, of the
executable files it generates.

   Optimization is a complex process.  For each high-level command in
the source code there are usually many possible combinations of
machine instructions that can be used to achieve the appropriate
final result.  The compiler must consider these possibilities and
choose among them.

   In general, different code must be generated for different
processors, as they use incompatible assembly and machine languages.
Each type of processor also has its own characteristics--some CPUs
provide a large number of "registers" for holding intermediate
results of calculations, while others must store and fetch
intermediate results from memory.  Appropriate code must be generated
in each case.

   Furthermore, different amounts of time are needed for different
instructions, depending on how they are ordered.  GCC takes all these
factors into account and tries to produce the fastest executable for
a given system when compiling with optimization.

* Menu:

* Source-level optimization::
* Speed-space tradeoffs::
* Scheduling::
* Optimization levels::
* Optimization examples::
* Optimization and debugging::
* Optimization and compiler warnings::

File: gccintro.info,  Node: Source-level optimization,  Next: Speed-space tradeoffs,  Up: Compiling with optimization

Source-level optimization
=========================

The first form of optimization used by GCC occurs at the source-code
level, and does not require any knowledge of the machine
instructions.  There are many source-level optimization
techniques--this section describes two common types: "common
subexpression elimination" and "function inlining".
Common subexpression elimination
--------------------------------

One method of source-level optimization which is easy to understand
involves computing an expression in the source code with fewer
instructions, by reusing already-computed results.  For example, the
following assignment:

     x = cos(v)*(1+sin(u/2)) + sin(w)*(1-sin(u/2))

can be rewritten with a temporary variable `t' to eliminate an
unnecessary extra evaluation of the term `sin(u/2)':

     t = sin(u/2)
     x = cos(v)*(1+t) + sin(w)*(1-t)

This rewriting is called "common subexpression elimination" (CSE),
and is performed automatically when optimization is turned on.(1)
Common subexpression elimination is powerful, because it
simultaneously increases the speed and reduces the size of the code.

Function inlining
-----------------

Another type of source-level optimization, called "function
inlining", increases the efficiency of frequently-called functions.

   Whenever a function is used, a certain amount of extra time is
required for the CPU to carry out the call: it must store the
function arguments in the appropriate registers and memory locations,
jump to the start of the function (bringing the appropriate virtual
memory pages into physical memory or the CPU cache if necessary),
begin executing the code, and then return to the original point of
execution when the function call is complete.  This additional work
is referred to as "function-call overhead".  Function inlining
eliminates this overhead by replacing calls to a function by the code
of the function itself (known as placing the code "in-line").

   In most cases, function-call overhead is a negligible fraction of
the total run-time of a program.  It can become significant only when
there are functions which contain relatively few instructions, and
these functions account for a substantial fraction of the
run-time--in this case the overhead then becomes a large proportion
of the total run-time.
Inlining is always favorable if there is only one point of invocation
of a function.  It is also unconditionally better if the invocation
of a function requires more instructions (memory) than moving the
body of the function in-line.  This is a common situation for simple
accessor functions in C++, which can benefit greatly from inlining.
Moreover, inlining may facilitate further optimizations, such as
common subexpression elimination, by merging several separate
functions into a single large function.

   The following function `sq(x)' is a typical example of a function
that would benefit from being inlined.  It computes x^2, the square
of its argument x:

     double
     sq (double x)
     {
       return x * x;
     }

This function is small, so the overhead of calling it is comparable
to the time taken to execute the single multiplication carried out by
the function itself.  If this function is used inside a loop, such as
the one below, then the function-call overhead would become
substantial:

     for (i = 0; i < 1000000; i++)
       {
         sum += sq (i + 0.5);
       }

Optimization with inlining replaces the inner loop of the program
with the body of the function, giving the following code:

     for (i = 0; i < 1000000; i++)
       {
         double t = (i + 0.5);  /* temporary variable */
         sum += t * t;
       }

Eliminating the function call and performing the multiplication
"in-line" allows the loop to run with maximum efficiency.

   GCC selects functions for inlining using a number of heuristics,
such as the function being suitably small.  As an optimization,
inlining is carried out only within each object file.  The `inline'
keyword can be used to request explicitly that a specific function
should be inlined wherever possible, including its use in other
files.(2)  The GCC Reference Manual "`Using GCC'" provides full
details of the `inline' keyword, and its use with the `static' and
`extern' qualifiers to control the linkage of explicitly inlined
functions (*note Further reading::).
   ---------- Footnotes ----------

   (1) Temporary values introduced by the compiler during common
subexpression elimination are only used internally, and do not affect
real variables.  The name of the temporary variable `t' shown above
is only used as an illustration.

   (2) In this case, the definition of the inline function must be
made available to the other files (in a header file, for example).

File: gccintro.info,  Node: Speed-space tradeoffs,  Next: Scheduling,  Prev: Source-level optimization,  Up: Compiling with optimization

Speed-space tradeoffs
=====================

While some forms of optimization, such as common subexpression
elimination, are able to increase the speed and reduce the size of a
program simultaneously, other types of optimization produce faster
code at the expense of increasing the size of the executable.  This
choice between speed and memory is referred to as a "speed-space
tradeoff".  Optimizations with a speed-space tradeoff can also be
used to make an executable smaller, at the expense of making it run
slower.

Loop unrolling
--------------

A prime example of an optimization with a speed-space tradeoff is
"loop unrolling".  This form of optimization increases the speed of
loops by eliminating the "end of loop" condition on each iteration.
For example, the following loop from 0 to 7 tests the condition `i <
8' on each iteration:

     for (i = 0; i < 8; i++)
       {
         y[i] = i;
       }

At the end of the loop, this test will have been performed 9 times,
and a large fraction of the run time will have been spent checking
it.

   A more efficient way to write the same code is simply to "unroll
the loop" and execute the assignments directly:

     y[0] = 0;
     y[1] = 1;
     y[2] = 2;
     y[3] = 3;
     y[4] = 4;
     y[5] = 5;
     y[6] = 6;
     y[7] = 7;

This form of the code does not require any tests, and executes at
maximum speed.  Since each assignment is independent, it also allows
the compiler to use parallelism on processors that support it.
Loop unrolling is an optimization that increases the speed of the
resulting executable but also generally increases its size (unless
the loop is very short, with only one or two iterations, for
example).

   Loop unrolling is also possible when the upper bound of the loop
is unknown, provided the start and end conditions are handled
correctly.  For example, the same loop with an arbitrary upper bound,

     for (i = 0; i < n; i++)
       {
         y[i] = i;
       }

can be rewritten by the compiler as follows:

     for (i = 0; i < (n % 2); i++)
       {
         y[i] = i;
       }

     for ( ; i + 1 < n; i += 2)  /* no initializer */
       {
         y[i] = i;
         y[i+1] = i+1;
       }

The first loop handles the case `i = 0' when `n' is odd, and the
second loop handles all the remaining iterations.  Note that the
second loop does not use an initializer in the first argument of the
`for' statement, since it continues where the first loop finishes.

   The assignments in the second loop can be parallelized, and the
overall number of tests is reduced by a factor of 2 (approximately).
Higher factors can be achieved by unrolling more assignments inside
the loop, at the cost of greater code size.

File: gccintro.info,  Node: Scheduling,  Next: Optimization levels,  Prev: Speed-space tradeoffs,  Up: Compiling with optimization

Scheduling
==========

The lowest level of optimization is "scheduling", in which the
compiler determines the best ordering of individual instructions.
Most CPUs allow one or more new instructions to start executing
before others have finished.  Many CPUs also support "pipelining",
where multiple instructions execute in parallel on the same CPU.

   When scheduling is enabled, instructions must be arranged so that
their results become available to later instructions at the right
time, and to allow for maximum parallel execution.  Scheduling
improves the speed of an executable without increasing its size, but
requires additional memory and time in the compilation process itself
(due to its complexity).
File: gccintro.info, Node: Optimization levels, Next: Optimization examples, Prev: Scheduling, Up: Compiling with optimization

Optimization levels
===================

In order to control compilation-time and compiler memory usage, and the trade-offs between speed and space for the resulting executable, GCC provides a range of general optimization levels, numbered from 0-3, as well as individual options for specific types of optimization.  An optimization level is chosen with the command line option `-OLEVEL', where `LEVEL' is a number from 0 to 3.  The effects of the different optimization levels are described below:

`-O0' or no `-O' option (default)
     At this optimization level GCC does not perform any optimization and compiles the source code in the most straightforward way possible.  Each command in the source code is converted directly to the corresponding instructions in the executable file, without rearrangement.  This is the best option to use when debugging a program.  The option `-O0' is equivalent to not specifying a `-O' option.

`-O1' or `-O'
     This level turns on the most common forms of optimization that do not require any speed-space tradeoffs.  With this option the resulting executables should be smaller and faster than with `-O0'.  The more expensive optimizations, such as instruction scheduling, are not used at this level.  Compiling with the option `-O1' can often take less time than compiling with `-O0', due to the reduced amounts of data that need to be processed after simple optimizations.

`-O2'
     This option turns on further optimizations, in addition to those used by `-O1'.  These additional optimizations include instruction scheduling.  Only optimizations that do not require any speed-space tradeoffs are used, so the executable should not increase in size.  The compiler will take longer to compile programs and require more memory than with `-O1'.
     This option is generally the best choice for deployment of a program, because it provides maximum optimization without increasing the executable size.  It is the default optimization level for releases of GNU packages.

`-O3'
     This option turns on more expensive optimizations, such as function inlining, in addition to all the optimizations of the lower levels `-O2' and `-O1'.  The `-O3' optimization level may increase the speed of the resulting executable, but can also increase its size.  Under some circumstances where these optimizations are not favorable, this option might actually make a program slower.

`-funroll-loops'
     This option turns on loop-unrolling, and is independent of the other optimization options.  It will increase the size of an executable.  Whether or not this option produces a beneficial result has to be examined on a case-by-case basis.

`-Os'
     This option selects optimizations which reduce the size of an executable.  The aim of this option is to produce the smallest possible executable, for systems constrained by memory or disk space.  In some cases a smaller executable will also run faster, due to better cache usage.

It is important to remember that the benefit of optimization at the highest levels must be weighed against the cost.  The cost of optimization includes greater complexity in debugging, and increased time and memory requirements during compilation.  For most purposes it is satisfactory to use `-O0' for debugging, and `-O2' for development and deployment.
File: gccintro.info, Node: Optimization examples, Next: Optimization and debugging, Prev: Optimization levels, Up: Compiling with optimization

Examples
========

The following program will be used to demonstrate the effects of different optimization levels:

     #include <stdio.h>

     double
     powern (double d, unsigned n)
     {
       double x = 1.0;
       unsigned j;

       for (j = 1; j <= n; j++)
         x *= d;

       return x;
     }

     int
     main (void)
     {
       double sum = 0.0;
       unsigned i;

       for (i = 1; i <= 100000000; i++)
         {
           sum += powern (i, i % 5);
         }

       printf ("sum = %g\n", sum);
       return 0;
     }

The main program contains a loop calling the `powern' function.  This function computes the n-th power of a floating point number by repeated multiplication--it has been chosen because it is suitable for both inlining and loop-unrolling.  The run-time of the program can be measured using the `time' command in the GNU Bash shell.

Here are some results for the program above, compiled on a 566MHz Intel Celeron with 16KB L1-cache and 128KB L2-cache, using GCC 3.3.1 on a GNU/Linux system:

     $ gcc -Wall -O0 test.c -lm
     $ time ./a.out
     real    0m13.388s
     user    0m13.370s
     sys     0m0.010s

     $ gcc -Wall -O1 test.c -lm
     $ time ./a.out
     real    0m10.030s
     user    0m10.030s
     sys     0m0.000s

     $ gcc -Wall -O2 test.c -lm
     $ time ./a.out
     real    0m8.388s
     user    0m8.380s
     sys     0m0.000s

     $ gcc -Wall -O3 test.c -lm
     $ time ./a.out
     real    0m6.742s
     user    0m6.730s
     sys     0m0.000s

     $ gcc -Wall -O3 -funroll-loops test.c -lm
     $ time ./a.out
     real    0m5.412s
     user    0m5.390s
     sys     0m0.000s

The relevant entry in the output for comparing the speed of the resulting executables is the `user' time, which gives the actual CPU time spent running the process.  The other rows, `real' and `sys', record the total real time for the process to run (including times where other processes were using the CPU) and the time spent waiting for operating system calls.  Although only one run is shown for each case above, the benchmarks were executed several times to confirm the results.
From the results it can be seen in this case that increasing the optimization level with `-O1', `-O2' and `-O3' produces an increasing speedup, relative to the unoptimized code compiled with `-O0'.  The additional option `-funroll-loops' produces a further speedup.  The speed of the program is more than doubled overall, when going from unoptimized code to the highest level of optimization.

Note that for a small program such as this there can be considerable variation between systems and compiler versions.  For example, on a Mobile 2.0GHz Intel Pentium 4M system the trend of the results using the same version of GCC is similar except that the performance with `-O2' is slightly worse than with `-O1'.  This illustrates an important point: optimizations may not necessarily make a program faster in every case.

File: gccintro.info, Node: Optimization and debugging, Next: Optimization and compiler warnings, Prev: Optimization examples, Up: Compiling with optimization

Optimization and debugging
==========================

With GCC it is possible to use optimization in combination with the debugging option `-g'.  Many other compilers do not allow this.

When using debugging and optimization together, the internal rearrangements carried out by the optimizer can make it difficult to see what is going on when examining an optimized program in the debugger.  For example, temporary variables are often eliminated, and the ordering of statements may be changed.  However, when a program crashes unexpectedly, any debugging information is better than none--so the use of `-g' is recommended for optimized programs, both for development and deployment.  The debugging option `-g' is enabled by default for releases of GNU packages, together with the optimization option `-O2'.
File: gccintro.info, Node: Optimization and compiler warnings, Prev: Optimization and debugging, Up: Compiling with optimization

Optimization and compiler warnings
==================================

When optimization is turned on, GCC can produce additional warnings that do not appear when compiling without optimization.  As part of the optimization process, the compiler examines the use of all variables and their initial values--this is referred to as "data-flow analysis".  It forms the basis for other optimization strategies, such as instruction scheduling.  A side-effect of data-flow analysis is that the compiler can detect the use of uninitialized variables.

The `-Wuninitialized' option (which is included in `-Wall') warns about variables that are read without being initialized.  It only works when the program is compiled with optimization to enable data-flow analysis.  The following function contains an example of such a variable:

     int
     sign (int x)
     {
       int s;

       if (x > 0)
         s = 1;
       else if (x < 0)
         s = -1;

       return s;
     }

The function works correctly for most arguments, but has a bug when `x' is zero--in this case the return value of the variable `s' will be undefined.

Compiling the program with the `-Wall' option alone does not produce any warnings, because data-flow analysis is not carried out without optimization:

     $ gcc -Wall -c uninit.c

To produce a warning, the program must be compiled with `-Wall' and optimization simultaneously.  In practice, the optimization level `-O2' is needed to give good warnings:

     $ gcc -Wall -O2 -c uninit.c
     uninit.c: In function `sign':
     uninit.c:4: warning: `s' might be used uninitialized in this function

This correctly detects the possibility of the variable `s' being used without being defined.

Note that while GCC will usually find most uninitialized variables, it does so using heuristics which will occasionally miss some complicated cases or falsely warn about others.
In the latter situation, it is often possible to rewrite the relevant lines in a simpler way that removes the warning and improves the readability of the source code.

File: gccintro.info, Node: Compiling a C++ program, Next: Platform-specific options, Prev: Compiling with optimization, Up: Top

Compiling a C++ program
***********************

This chapter describes how to use GCC to compile programs written in C++, and the command-line options specific to that language.

The GNU C++ compiler provided by GCC is a true C++ compiler--it compiles C++ source code directly into assembly language.  Some other C++ "compilers" are translators which convert C++ programs into C, and then compile the resulting C program using an existing C compiler.  A true C++ compiler, such as GCC, is able to provide better support for error reporting, debugging and optimization.

* Menu:

* Compiling a simple C++ program::
* Using the C++ standard library::
* Templates::

File: gccintro.info, Node: Compiling a simple C++ program, Next: Using the C++ standard library, Up: Compiling a C++ program

Compiling a simple C++ program
==============================

The procedure for compiling a C++ program is the same as for a C program, but uses the command `g++' instead of `gcc'.  Both compilers are part of the GNU Compiler Collection.

To demonstrate the use of `g++', here is a version of the "Hello World" program written in C++:

     #include <iostream>

     int
     main ()
     {
       std::cout << "Hello, world!" << std::endl;
       return 0;
     }

The program can be compiled with the following command line:

     $ g++ -Wall hello.cc -o hello

The C++ frontend of GCC uses many of the same options as the C compiler `gcc'.  It also supports some additional options for controlling C++ language features, which will be described in this chapter.  Note that C++ source code should be given one of the valid C++ file extensions `.cc', `.cpp', `.cxx' or `.C' rather than the `.c' extension used for C programs.
The resulting executable can be run in exactly the same way as the C version, simply by typing its filename:

     $ ./hello
     Hello, world!

The executable produces the same output as the C version of the program, using `std::cout' instead of the C `printf' function.  All the options used in the `gcc' commands in previous chapters apply to `g++' without change, as do the procedures for compiling and linking files and libraries (using `g++' instead of `gcc', of course).  One natural difference is that the `-ansi' option requests compliance with the C++ standard, instead of the C standard, when used with `g++'.

Note that programs using C++ object files must always be linked with `g++', in order to supply the appropriate C++ libraries.  Attempting to link a C++ object file with the C compiler `gcc' will cause "undefined reference" errors for C++ standard library functions:

     $ g++ -Wall -c hello.cc
     $ gcc hello.o       (should use `g++')
     hello.o: In function `main':
     hello.o(.text+0x1b): undefined reference to `std::cout'
     .....
     hello.o(.eh_frame+0x11): undefined reference to `__gxx_personality_v0'

Linking the same object file with `g++' supplies all the necessary C++ libraries and will produce a working executable:

     $ g++ hello.o
     $ ./a.out
     Hello, world!

A point that sometimes causes confusion is that `gcc' will actually compile C++ source code when it detects a C++ file extension, but cannot then link the resulting object files.

     $ gcc -Wall -c hello.cc    (succeeds, even for C++)
     $ gcc hello.o
     hello.o: In function `main':
     hello.o(.text+0x1b): undefined reference to `std::cout'

In order to avoid this problem it is best to use `g++' consistently for C++ programs, and `gcc' for C programs.

File: gccintro.info, Node: Using the C++ standard library, Next: Templates, Prev: Compiling a simple C++ program, Up: Compiling a C++ program

Using the C++ standard library
==============================

An implementation of the C++ standard library is provided as a part of GCC.
The following program uses the standard library `string' class to reimplement the "Hello World" program:

     #include <iostream>
     #include <string>

     using namespace std;

     int
     main ()
     {
       string s1 = "Hello,";
       string s2 = "World!";
       cout << s1 + " " + s2 << endl;
       return 0;
     }

The program can be compiled and run using the same commands as above:

     $ g++ -Wall hellostr.cc
     $ ./a.out
     Hello, World!

Note that in accordance with the C++ standard, the header files for the C++ library itself do not use a file extension.  The classes in the library are also defined in the `std' namespace, so the directive `using namespace std' is needed to access them, unless the prefix `std::' is used throughout (as in the previous section).

File: gccintro.info, Node: Templates, Prev: Using the C++ standard library, Up: Compiling a C++ program

Templates
=========

Templates provide the ability to define C++ classes which support "generic programming" techniques.  Templates can be considered as a powerful kind of macro facility.  When a templated class or function is used with a specific class or type, such as `float' or `int', the corresponding template code is compiled with that type substituted in the appropriate places.

* Menu:

* Using C++ standard library templates::
* Providing your own templates::
* Explicit template instantiation::
* The export keyword::

File: gccintro.info, Node: Using C++ standard library templates, Next: Providing your own templates, Up: Templates

Using C++ standard library templates
------------------------------------

The C++ standard library `libstdc++' supplied with GCC provides a wide range of generic container classes such as lists and queues, in addition to generic algorithms such as sorting.  These classes were originally part of the Standard Template Library (STL), which was a separate package, but are now included in the C++ standard library itself.
The following program demonstrates the use of the template library by creating a list of strings with the template `list<string>':

     #include <iostream>
     #include <list>
     #include <string>

     using namespace std;

     int
     main ()
     {
       list<string> list;
       list.push_back("Hello");
       list.push_back("World");
       cout << "List size = " << list.size() << endl;
       return 0;
     }

No special options are needed to use the template classes in the standard library; the command-line options for compiling this program are the same as before:

     $ g++ -Wall string.cc
     $ ./a.out
     List size = 2

Note that the executables created by `g++' using the C++ standard library will be linked to the shared library `libstdc++', which is supplied as part of the default GCC installation.  There are several versions of this library--if you distribute executables using the C++ standard library you need to ensure that the recipient has a compatible version of `libstdc++', or link your program statically using the command-line option `-static'.

File: gccintro.info, Node: Providing your own templates, Next: Explicit template instantiation, Prev: Using C++ standard library templates, Up: Templates

Providing your own templates
----------------------------

In addition to the template classes provided by the C++ standard library you can define your own templates.  The recommended way to use templates with `g++' is to follow the "inclusion compilation model", where template definitions are placed in header files.  This is the method used by the C++ standard library supplied with GCC itself.  The header files can then be included with `#include' in each source file where they are needed.

For example, the following template file creates a simple `Buffer' class which represents a circular buffer holding objects of type `T'.
     #ifndef BUFFER_H
     #define BUFFER_H

     template <class T>
     class Buffer
     {
     public:
       Buffer (unsigned int n);
       void insert (const T & x);
       T get (unsigned int k) const;
     private:
       unsigned int i;
       unsigned int size;
       T *pT;
     };

     template <class T>
     Buffer<T>::Buffer (unsigned int n)
     {
       i = 0;
       size = n;
       pT = new T[n];
     }

     template <class T>
     void
     Buffer<T>::insert (const T & x)
     {
       i = (i + 1) % size;
       pT[i] = x;
     }

     template <class T>
     T
     Buffer<T>::get (unsigned int k) const
     {
       return pT[(i + (size - k)) % size];
     }

     #endif /* BUFFER_H */

The file contains both the declaration of the class and the definitions of the member functions.  This class is only given for demonstration purposes and should not be considered an example of good programming.  Note the use of "include guards", which test for the presence of the macro `BUFFER_H', ensuring that the definitions in the header file are only parsed once, if the file is included multiple times in the same context.

The program below uses the templated `Buffer' class to create a buffer of size 10, storing the floating point values 0.25 and 1.0 in the buffer:

     #include <iostream>
     #include "buffer.h"

     using namespace std;

     int
     main ()
     {
       Buffer<float> f(10);
       f.insert (0.25);
       f.insert (1.0 + f.get(0));
       cout << "stored value = " << f.get(0) << endl;
       return 0;
     }

The definitions for the template class and its functions are included in the source file for the program with `#include "buffer.h"' before they are used.  The program can then be compiled using the following command line:

     $ g++ -Wall tprog.cc
     $ ./a.out
     stored value = 1.25

At the points where the template functions are used in the source file, `g++' compiles the appropriate definition from the header file and places the compiled function in the corresponding object file.  If a template function is used several times in a program it will be stored in more than one object file.  The GNU Linker ensures that only one copy is placed in the final executable.
Other linkers may report "multiply defined symbol" errors when they encounter more than one copy of a template function--a method of working with these linkers is described below.

File: gccintro.info, Node: Explicit template instantiation, Next: The export keyword, Prev: Providing your own templates, Up: Templates

Explicit template instantiation
-------------------------------

To achieve complete control over the compilation of templates with `g++' it is possible to require explicit instantiation of each occurrence of a template, using the option `-fno-implicit-templates'.  This method is not needed when using the GNU Linker--it is an alternative to the inclusion compilation model for systems with linkers which cannot eliminate duplicate definitions of template functions in object files.

In this approach, template functions are no longer compiled at the point where they are used, as a result of the `-fno-implicit-templates' option.  Instead, the compiler looks for an explicit instantiation of the template using the `template' keyword with a specific type to force its compilation (this is a GNU extension to the standard behavior).  These instantiations are typically placed in a separate source file, which is then compiled to make an object file containing all the template functions required by a program.  This ensures that each template appears in only one object file, and is compatible with linkers which cannot eliminate duplicate definitions in object files.
For example, the following file `templates.cc' contains an explicit instantiation of the `Buffer<float>' class used by the program `tprog.cc' given above:

     #include "buffer.h"
     template class Buffer<float>;

The whole program can be compiled and linked using explicit instantiation with the following commands:

     $ g++ -Wall -fno-implicit-templates -c tprog.cc
     $ g++ -Wall -fno-implicit-templates -c templates.cc
     $ g++ tprog.o templates.o
     $ ./a.out
     stored value = 1.25

The object code for all the template functions is contained in the file `templates.o'.  There is no object code for template functions in `tprog.o' when it is compiled with the `-fno-implicit-templates' option.

If the program is modified to use additional types, then further explicit instantiations can be added to the file `templates.cc'.  For example, the following code adds instantiations for Buffer objects containing `double' and `int' values:

     #include "buffer.h"
     template class Buffer<float>;
     template class Buffer<double>;
     template class Buffer<int>;

The disadvantage of explicit instantiation is that it is necessary to know which template types are needed by the program.  For a complicated program this may be difficult to determine in advance.  Any missing template instantiations can be determined at link time, however, and added to the list of explicit instantiations, by noting which functions are undefined.

Explicit instantiation can also be used to make libraries of precompiled template functions, by creating an object file containing all the required instantiations of a template function (as in the file `templates.cc' above).  For example, the object file created from the template instantiations above contains the machine code needed for Buffer classes with `float', `double' and `int' types, and could be distributed in a library.
File: gccintro.info, Node: The export keyword, Prev: Explicit template instantiation, Up: Templates

The `export' keyword
--------------------

At the time of writing, GCC does not support the new C++ `export' keyword (GCC 3.3.2).  This keyword was proposed as a way of separating the interface of templates from their implementation.  However it adds its own complexity to the linking process, which can detract from any advantages in practice.  The `export' keyword is not widely used, and most other compilers do not support it either.  The inclusion compilation model described earlier is recommended as the simplest and most portable way to use templates.

File: gccintro.info, Node: Platform-specific options, Next: Troubleshooting, Prev: Compiling a C++ program, Up: Top

Platform-specific options
*************************

GCC provides a range of platform-specific options for different types of CPUs.  These options control features such as hardware floating-point modes, and the use of special instructions for different CPUs.  They can be selected with the `-m' option on the command line, and work with all the GCC language frontends, such as `gcc' and `g++'.

The following sections describe some of the options available for common platforms.  A complete list of all platform-specific options can be found in the GCC Reference Manual, "`Using GCC'" (*note Further reading::).  Support for new processors is added to GCC as they become available, therefore some of the options described in this chapter may not be found in older versions of GCC.

* Menu:

* Intel and AMD x86 options::
* DEC Alpha options::
* SPARC options::
* POWER/PowerPC options::
* Multi-architecture support::

File: gccintro.info, Node: Intel and AMD x86 options, Next: DEC Alpha options, Up: Platform-specific options

Intel and AMD x86 options
=========================

The features of the widely used Intel and AMD x86 families of processors (386, 486, Pentium, etc) can be controlled with GCC platform-specific options.
On these platforms, GCC produces executable code which is compatible with all the processors in the x86 family by default--going all the way back to the 386.  However, it is also possible to compile for a specific processor to obtain better performance.(1)

For example, recent versions of GCC have specific support for newer processors such as the Pentium 4 and AMD Athlon.  These can be selected with the following option for the Pentium 4,

     $ gcc -Wall -march=pentium4 hello.c

and for the Athlon:

     $ gcc -Wall -march=athlon hello.c

A complete list of supported CPU types can be found in the GCC Reference Manual.

Code produced with a specific `-march=CPU' option will be faster but will not run on other processors in the x86 family.  If you plan to distribute executable files for general use on Intel and AMD processors they should be compiled without any `-march' options.

As an alternative, the `-mcpu=CPU' option provides a compromise between speed and portability--it generates code that is tuned for a specific processor, in terms of instruction scheduling, but does not use any instructions which are not available on other CPUs in the x86 family.  The resulting code will be compatible with all the CPUs, and have a speed advantage on the CPU specified by `-mcpu'.  The executables generated by `-mcpu' cannot achieve the same performance as `-march', but may be more convenient in practice.

AMD has enhanced the 32-bit x86 instruction set to a 64-bit instruction set called x86-64, which is implemented in their AMD64 processors.(2)  On AMD64 systems GCC generates 64-bit code by default.  The option `-m32' allows 32-bit code to be generated instead.

The AMD64 processor has several different memory models for programs running in 64-bit mode.  The default model is the small code model, which allows code and data up to 2GB in size.  The medium code model allows unlimited data sizes and can be selected with `-mcmodel=medium'.
There is also a large code model, which supports an unlimited code size in addition to unlimited data size.  It is not currently implemented in GCC since the medium code model is sufficient for all practical purposes--executables with sizes greater than 2GB are not encountered in practice.

A special kernel code model `-mcmodel=kernel' is provided for system-level code, such as the Linux kernel.  An important point to note is that by default on the AMD64 there is a 128-byte area of memory allocated below the stack pointer for temporary data, referred to as the "red-zone", which is not supported by the Linux kernel.  Compilation of the Linux kernel on the AMD64 requires the options `-mcmodel=kernel -mno-red-zone'.

---------- Footnotes ----------

(1) Also referred to as "targeting" a specific processor.

(2) Intel has added support for this instruction set as the "Intel 64-bit enhancements" on their Xeon CPUs.

File: gccintro.info, Node: DEC Alpha options, Next: SPARC options, Prev: Intel and AMD x86 options, Up: Platform-specific options

DEC Alpha options
=================

The DEC Alpha processor has default settings which maximize floating-point performance, at the expense of full support for IEEE arithmetic features.  Support for infinity arithmetic and gradual underflow (denormalized numbers) is not enabled in the default configuration on the DEC Alpha processor.

Operations which produce infinities or underflows will generate floating-point exceptions (also known as "traps"), and cause the program to terminate, unless the operating system catches and handles the exceptions (which is, in general, inefficient).  The IEEE standard specifies that these operations should produce special results to represent the quantities in the IEEE numeric format.  In most cases the DEC Alpha default behavior is acceptable, since the majority of programs do not produce infinities or underflows.
For applications which require these features, GCC provides the option `-mieee' to enable full support for IEEE arithmetic.  To demonstrate the difference between the two cases the following program divides 1 by 0:

     #include <stdio.h>

     int
     main (void)
     {
       double x = 1.0, y = 0.0;
       printf ("x/y = %g\n", x / y);
       return 0;
     }

In IEEE arithmetic the result of 1/0 is `inf' ("Infinity").  If the program is compiled for the Alpha processor with the default settings it generates an exception, which terminates the program:

     $ gcc -Wall alpha.c
     $ ./a.out
     Floating point exception (on an Alpha processor)

Using the `-mieee' option ensures full IEEE compliance--the division 1/0 correctly produces the result `inf' and the program continues executing successfully:

     $ gcc -Wall -mieee alpha.c
     $ ./a.out
     x/y = inf

Note that programs which generate floating-point exceptions run more slowly when compiled with `-mieee', because the exceptions are handled in software rather than hardware.

File: gccintro.info, Node: SPARC options, Next: POWER/PowerPC options, Prev: DEC Alpha options, Up: Platform-specific options

SPARC options
=============

On the SPARC range of processors the `-mcpu=CPU' option generates processor-specific code.  The valid options for `CPU' are `v7', `v8' (SuperSPARC), `Sparclite', `Sparclet' and `v9' (UltraSPARC).  Code produced with a specific `-mcpu' option will not run on other processors in the SPARC family, except where supported by the backwards-compatibility of the processor itself.

On 64-bit UltraSPARC systems the options `-m32' and `-m64' control code generation for 32-bit or 64-bit environments.  The 32-bit environment selected by `-m32' uses `int', `long' and pointer types with a size of 32 bits.  The 64-bit environment selected by `-m64' uses a 32-bit `int' type and 64-bit `long' and pointer types.
File: gccintro.info, Node: POWER/PowerPC options, Next: Multi-architecture support, Prev: SPARC options, Up: Platform-specific options

POWER/PowerPC options
=====================

On systems using the POWER/PowerPC family of processors the option `-mcpu=CPU' selects code generation for specific CPU models.  The possible values of `CPU' include `power', `power2', `powerpc', `powerpc64' and `common', in addition to other more specific model numbers.  Code generated with the option `-mcpu=common' will run on any of the processors.  The option `-maltivec' enables use of the Altivec vector processing instructions, if the appropriate hardware support is available.

The POWER/PowerPC processors include a combined "multiply and add" instruction a * x + b, which performs the two operations simultaneously for speed--this is referred to as a "fused" multiply and add, and is used by GCC by default.  Due to differences in the way intermediate values are rounded, the result of a fused instruction may not be exactly the same as performing the two operations separately.  In cases where strict IEEE arithmetic is required, the use of the combined instructions can be disabled with the option `-mno-fused-madd'.

On AIX systems, the option `-mminimal-toc' decreases the number of entries GCC puts in the global "table of contents" (TOC) in executables, to avoid "TOC overflow" errors at link time.  The option `-mxl-call' makes the linking of object files from GCC compatible with those from IBM's XL compilers.  For applications using POSIX threads, AIX always requires the option `-pthread' when compiling, even when the program will only run in single-threaded mode.

File: gccintro.info, Node: Multi-architecture support, Prev: POWER/PowerPC options, Up: Platform-specific options

Multi-architecture support
==========================

A number of platforms can execute code for more than one architecture.
For example, 64-bit platforms such as AMD64, MIPS64, Sparc64, and PowerPC64 support the execution of both 32-bit and 64-bit code.  Similarly, ARM processors support both ARM code and a more compact code called "Thumb".  GCC can be built to support multiple architectures on these platforms.  By default, the compiler will generate 64-bit object files, but giving the `-m32' option will generate a 32-bit object file for the corresponding architecture.(1)

Note that support for multiple architectures depends on the corresponding libraries being available.  On 64-bit platforms supporting both 64 and 32-bit executables, the 64-bit libraries are often placed in `lib64' directories instead of `lib' directories, e.g. in `/usr/lib64' and `/lib64'.  The 32-bit libraries are then found in the default `lib' directories as on other platforms.  This allows both a 32-bit and a 64-bit library with the same name to exist on the same system.  Other systems, such as the IA64/Itanium, use the directories `/usr/lib' and `/lib' for 64-bit libraries.  GCC knows about these paths and uses the appropriate path when compiling 64-bit or 32-bit code.

---------- Footnotes ----------

(1) The options `-maix64' and `-maix32' are used on AIX.

File: gccintro.info, Node: Troubleshooting, Next: Compiler-related tools, Prev: Platform-specific options, Up: Top

Troubleshooting
***************

GCC provides several help and diagnostic options to assist in troubleshooting problems with the compilation process.  All the options described in this chapter work with both `gcc' and `g++'.
* Menu: * Help for command-line options:: * Version numbers:: * Verbose compilation::  File: gccintro.info, Node: Help for command-line options, Next: Version numbers, Up: Troubleshooting Help for command-line options ============================= To obtain a brief reminder of various command-line options, GCC provides a help option which displays a summary of the top-level GCC command-line options: $ gcc --help To display a complete list of options for `gcc' and its associated programs, such as the GNU Linker and GNU Assembler, use the help option above with the verbose (`-v') option: $ gcc -v --help The complete list of options produced by this command is extremely long--you may wish to page through it using the `more' command, or redirect the output to a file for reference: $ gcc -v --help 2>&1 | more  File: gccintro.info, Node: Version numbers, Next: Verbose compilation, Prev: Help for command-line options, Up: Troubleshooting Version numbers =============== You can find the version number of `gcc' using the version option: $ gcc --version gcc (GCC) 3.3.1 The version number is important when investigating compilation problems, since older versions of GCC may be missing some features that a program uses. The version number has the form MAJOR-VERSION.MINOR-VERSION or MAJOR-VERSION.MINOR-VERSION.MICRO-VERSION, where the additional third "micro" version number (as shown above) is used for subsequent bug-fix releases in a release series. More details about the version can be found using `-v': $ gcc -v Reading specs from /usr/lib/gcc-lib/i686/3.3.1/specs Configured with: ../configure --prefix=/usr Thread model: posix gcc version 3.3.1 This includes information on the build flags of the compiler itself and the installed configuration file, `specs'.  
File: gccintro.info, Node: Verbose compilation, Prev: Version numbers, Up: Troubleshooting Verbose compilation =================== The `-v' option can also be used to display detailed information about the exact sequence of commands used to compile and link a program. Here is an example which shows the verbose compilation of the `Hello World' program: $ gcc -v -Wall hello.c Reading specs from /usr/lib/gcc-lib/i686/3.3.1/specs Configured with: ../configure --prefix=/usr Thread model: posix gcc version 3.3.1 /usr/lib/gcc-lib/i686/3.3.1/cc1 -quiet -v -D__GNUC__=3 -D__GNUC_MINOR__=3 -D__GNUC_PATCHLEVEL__=1 hello.c -quiet -dumpbase hello.c -auxbase hello -Wall -version -o /tmp/cceCee26.s GNU C version 3.3.1 (i686-pc-linux-gnu) compiled by GNU C version 3.3.1 (i686-pc-linux-gnu) GGC heuristics: --param ggc-min-expand=51 --param ggc-min-heapsize=40036 ignoring nonexistent directory "/usr/i686/include" #include "..." search starts here: #include <...> search starts here: /usr/local/include /usr/include /usr/lib/gcc-lib/i686/3.3.1/include /usr/include End of search list. as -V -Qy -o /tmp/ccQynbTm.o /tmp/cceCee26.s GNU assembler version 2.12.90.0.1 (i386-linux) using BFD version 2.12.90.0.1 20020307 Debian/GNU Linux /usr/lib/gcc-lib/i686/3.3.1/collect2 --eh-frame-hdr -m elf_i386 -dynamic-linker /lib/ld-linux.so.2 /usr/lib/crt1.o /usr/lib/crti.o /usr/lib/gcc-lib/i686/3.3.1/crtbegin.o -L/usr/lib/gcc-lib/i686/3.3.1 -L/usr/lib/gcc-lib/i686/3.3.1/../../.. /tmp/ccQynbTm.o -lgcc -lgcc_eh -lc -lgcc -lgcc_eh /usr/lib/gcc-lib/i686/3.3.1/crtend.o /usr/lib/crtn.o The output produced by `-v' can be useful whenever there is a problem with the compilation process itself. It displays the full directory paths used to search for header files and libraries, the predefined preprocessor symbols, and the object files and libraries used for linking.  
File: gccintro.info, Node: Compiler-related tools, Next: How the compiler works, Prev: Troubleshooting, Up: Top Compiler-related tools ********************** This chapter describes a number of tools which are useful in combination with GCC. These include the GNU archiver `ar', for creating libraries, and the GNU profiling and coverage testing programs, `gprof' and `gcov'. * Menu: * Creating a library with the GNU archiver:: * Using the profiler gprof:: * Coverage testing with gcov::  File: gccintro.info, Node: Creating a library with the GNU archiver, Next: Using the profiler gprof, Up: Compiler-related tools Creating a library with the GNU archiver ======================================== The GNU archiver `ar' combines a collection of object files into a single archive file, also known as a "library". An archive file is simply a convenient way of distributing a large number of related object files together (as described earlier in *Note Linking with external libraries::). To demonstrate the use of the GNU archiver we will create a small library `libhello.a' containing two functions `hello' and `bye'. 
The first object file will be generated from the source code for the `hello' function, in the file `hello_fn.c' seen earlier:

     #include <stdio.h>
     #include "hello.h"

     void hello (const char * name)
     {
       printf ("Hello, %s!\n", name);
     }

The second object file will be generated from the source file `bye_fn.c', which contains the new function `bye':

     #include <stdio.h>
     #include "hello.h"

     void bye (void)
     {
       printf ("Goodbye!\n");
     }

Both functions use the header file `hello.h', now with a prototype for the function `bye()':

     void hello (const char * name);
     void bye (void);

The source code can be compiled to the object files `hello_fn.o' and `bye_fn.o' using the commands:

     $ gcc -Wall -c hello_fn.c
     $ gcc -Wall -c bye_fn.c

These object files can be combined into a static library using the following command line:

     $ ar cr libhello.a hello_fn.o bye_fn.o

The option `cr' stands for "create and replace".(1) If the library does not exist, it is first created. If the library already exists, any original files in it with the same names are replaced by the new files specified on the command line. The first argument `libhello.a' is the name of the library. The remaining arguments are the names of the object files to be copied into the library.

The archiver `ar' also provides a "table of contents" option `t' to list the object files in an existing library:

     $ ar t libhello.a
     hello_fn.o
     bye_fn.o

Note that when a library is distributed, the header files for the public functions and variables it provides should also be made available, so that the end-user can include them and obtain the correct prototypes.
We can now write a program using the functions in the newly created library: #include "hello.h" int main (void) { hello ("everyone"); bye (); return 0; } This file can be compiled with the following command line, as described in *Note Linking with external libraries::, assuming the library `libhello.a' is stored in the current directory: $ gcc -Wall main.c libhello.a -o hello The main program is linked against the object files found in the library file `libhello.a' to produce the final executable. The short-cut library linking option `-l' can also be used to link the program, without needing to specify the full filename of the library explicitly: $ gcc -Wall -L. main.c -lhello -o hello The option `-L.' is needed to add the current directory to the library search path. The resulting executable can be run as usual: $ ./hello Hello, everyone! Goodbye! It displays the output from both the `hello' and `bye' functions defined in the library. ---------- Footnotes ---------- (1) Note that `ar' does not require a prefix `-' for its options.  File: gccintro.info, Node: Using the profiler gprof, Next: Coverage testing with gcov, Prev: Creating a library with the GNU archiver, Up: Compiler-related tools Using the profiler `gprof' ========================== The GNU profiler `gprof' is a useful tool for measuring the performance of a program--it records the number of calls to each function and the amount of time spent there, on a per-function basis. Functions which consume a large fraction of the run-time can be identified easily from the output of `gprof'. Efforts to speed up a program should concentrate first on those functions which dominate the total run-time. 
We will use `gprof' to examine the performance of a small numerical program which computes the lengths of sequences occurring in the unsolved `Collatz conjecture' in mathematics.(1) The Collatz conjecture involves sequences defined by the rule:

     x_{n+1} =  x_{n} / 2      if x_{n} is even
                3 x_{n} + 1    if x_{n} is odd

The sequence is iterated from an initial value x_0 until it terminates with the value 1. According to the conjecture, all sequences do terminate eventually--the program below displays the longest sequences as x_0 increases. The source file `collatz.c' contains three functions: `main', `nseq' and `step':

     #include <stdio.h>

     /* Computes the length of Collatz sequences */

     unsigned int step (unsigned int x)
     {
       if (x % 2 == 0) {
         return (x / 2);
       } else {
         return (3 * x + 1);
       }
     }

     unsigned int nseq (unsigned int x0)
     {
       unsigned int i = 1, x;

       if (x0 == 1 || x0 == 0)
         return i;

       x = step (x0);

       while (x != 1 && x != 0) {
         x = step (x);
         i++;
       }

       return i;
     }

     int main (void)
     {
       unsigned int i, m = 0, im = 0;

       for (i = 1; i < 500000; i++) {
         unsigned int k = nseq (i);
         if (k > m) {
           m = k;
           im = i;
           printf ("sequence length = %u for %u\n", m, im);
         }
       }

       return 0;
     }

To use profiling, the program must be compiled and linked with the `-pg' profiling option:

     $ gcc -Wall -c -pg collatz.c
     $ gcc -Wall -pg collatz.o

This creates an "instrumented" executable which contains additional instructions that record the time spent in each function. If the program consists of more than one source file then the `-pg' option should be used when compiling each source file, and used again when linking the object files to create the final executable (as shown above). Forgetting to link with the option `-pg' is a common error, which prevents profiling from recording any useful information. The executable must be run to create the profiling data:

     $ ./a.out
     (normal program output is displayed)

While running the instrumented executable, profiling data is silently written to a file `gmon.out' in the current directory.
It can be analyzed with `gprof' by giving the name of the executable as an argument:

     $ gprof a.out
     Flat profile:
     Each sample counts as 0.01 seconds.
       %   cumul.    self              self    total
      time  seconds  seconds    calls us/call us/call  name
     68.59     2.14     2.14 62135400    0.03    0.03  step
     31.09     3.11     0.97   499999    1.94    6.22  nseq
      0.32     3.12     0.01                           main

The first column of the data shows that the program spends most of its time (almost 70%) in the function `step', and 30% in `nseq'. Consequently efforts to decrease the run-time of the program should concentrate on the former. In comparison, the time spent within the `main' function itself is completely negligible (less than 1%). The other columns in the output provide information on the total number of function calls made, and the time spent in each function. Additional output breaking down the run-time further is also produced by `gprof' but not shown here. Full details can be found in the manual "`GNU gprof--The GNU Profiler'", by Jay Fenlason and Richard Stallman.

---------- Footnotes ----------

(1) American Mathematical Monthly, Volume 92 (1985), 3-23


File: gccintro.info, Node: Coverage testing with gcov, Prev: Using the profiler gprof, Up: Compiler-related tools

Coverage testing with `gcov'
============================

The GNU coverage testing tool `gcov' analyses the number of times each line of a program is executed during a run. This makes it possible to find areas of the code which are not used, or which are not exercised in testing. When combined with profiling information from `gprof' the information from coverage testing allows efforts to speed up a program to be concentrated on specific lines of the source code. We will use the example program below to demonstrate `gcov'. This program loops over the integers 1 to 9 and tests their divisibility with the modulus (`%') operator.
     #include <stdio.h>

     int main (void)
     {
       int i;

       for (i = 1; i < 10; i++) {
         if (i % 3 == 0)
           printf ("%d is divisible by 3\n", i);
         if (i % 11 == 0)
           printf ("%d is divisible by 11\n", i);
       }

       return 0;
     }

To enable coverage testing the program must be compiled with the following options:

     $ gcc -Wall -fprofile-arcs -ftest-coverage cov.c

This creates an "instrumented" executable which contains additional instructions that record the number of times each line of the program is executed. The option `-ftest-coverage' adds instructions for counting the number of times individual lines are executed, while `-fprofile-arcs' incorporates instrumentation code for each branch of the program. Branch instrumentation records how frequently different paths are taken through `if' statements and other conditionals. The executable must then be run to create the coverage data:

     $ ./a.out
     3 is divisible by 3
     6 is divisible by 3
     9 is divisible by 3

The data from the run is written to several files with the extensions `.bb', `.bbg' and `.da' respectively in the current directory. This data can be analyzed using the `gcov' command and the name of a source file:

     $ gcov cov.c
      88.89% of 9 source lines executed in file cov.c
     Creating cov.c.gcov

The `gcov' command produces an annotated version of the original source file, with the file extension `.gcov', containing counts of the number of times each line was executed:

             #include <stdio.h>

             int main (void)
             {
          1    int i;

         10    for (i = 1; i < 10; i++) {
          9      if (i % 3 == 0)
          3        printf ("%d is divisible by 3\n", i);
          9      if (i % 11 == 0)
     ######        printf ("%d is divisible by 11\n", i);
          9      }
          1    return 0;
          1  }

The line counts can be seen in the first column of the output. Lines which were not executed are marked with hashes `######'. The command `grep '######' *.gcov' can be used to find parts of a program which have not been used. 
File: gccintro.info, Node: How the compiler works, Next: Examining compiled files, Prev: Compiler-related tools, Up: Top

How the compiler works
**********************

This chapter describes in more detail how GCC transforms source files to an executable file. Compilation is a multi-stage process involving several tools, including the GNU Compiler itself (through the `gcc' or `g++' frontends), the GNU Assembler `as', and the GNU Linker `ld'. The complete set of tools used in the compilation process is referred to as a "toolchain".

* Menu:

* An overview of the compilation process::
* The preprocessor::
* The compiler::
* The assembler::
* The linker::


File: gccintro.info, Node: An overview of the compilation process, Next: The preprocessor, Up: How the compiler works

An overview of the compilation process
======================================

The sequence of commands executed by a single invocation of GCC consists of the following stages:

   * preprocessing (to expand macros)
   * compilation (from source code to assembly language)
   * assembly (from assembly language to machine code)
   * linking (to create the final executable)

As an example, we will examine these compilation stages individually using the "Hello World" program `hello.c':

     #include <stdio.h>

     int main (void)
     {
       printf ("Hello, world!\n");
       return 0;
     }

Note that it is not necessary to use any of the individual commands described in this section to compile a program. All the commands are executed automatically and transparently by GCC internally, and can be seen using the `-v' option described earlier (*note Verbose compilation::). The purpose of this chapter is to provide an understanding of how the compiler works. Although the "Hello World" program is very simple it uses external header files and libraries, and so exercises all the major steps of the compilation process. 
File: gccintro.info, Node: The preprocessor, Next: The compiler, Prev: An overview of the compilation process, Up: How the compiler works

The preprocessor
================

The first stage of the compilation process is the use of the preprocessor to expand macros and included header files. To perform this stage, GCC executes the following command:(1)

     $ cpp hello.c > hello.i

The result is a file `hello.i' which contains the source code with all macros expanded. By convention, preprocessed files are given the file extension `.i' for C programs and `.ii' for C++ programs. In practice, the preprocessed file is not saved to disk unless the `-save-temps' option is used.

---------- Footnotes ----------

(1) As mentioned earlier, the preprocessor is integrated into the compiler in recent versions of GCC. Conceptually, the compilation process is the same as running the preprocessor as a separate application.


File: gccintro.info, Node: The compiler, Next: The assembler, Prev: The preprocessor, Up: How the compiler works

The compiler
============

The next stage of the process is the actual compilation of preprocessed source code to assembly language, for a specific processor. The command-line option `-S' instructs `gcc' to convert the preprocessed C source code to assembly language without creating an object file:

     $ gcc -Wall -S hello.i

The resulting assembly language is stored in the file `hello.s'. Here is what the "Hello World" assembly language for an Intel x86 (i686) processor looks like:

     $ cat hello.s
         .file   "hello.c"
         .section        .rodata
     .LC0:
         .string "Hello, world!\n"
         .text
     .globl main
         .type   main, @function
     main:
         pushl   %ebp
         movl    %esp, %ebp
         subl    $8, %esp
         andl    $-16, %esp
         movl    $0, %eax
         subl    %eax, %esp
         movl    $.LC0, (%esp)
         call    printf
         movl    $0, %eax
         leave
         ret
         .size   main, .-main
         .ident  "GCC: (GNU) 3.3.1"

Note that the assembly language contains a call to the external function `printf'. 
File: gccintro.info, Node: The assembler, Next: The linker, Prev: The compiler, Up: How the compiler works The assembler ============= The purpose of the assembler is to convert assembly language into machine code and generate an object file. When there are calls to external functions in the assembly source file, the assembler leaves the addresses of the external functions undefined, to be filled in later by the linker. The assembler can be invoked with the following command line: $ as hello.s -o hello.o As with GCC, the output file is specified with the `-o' option. The resulting file `hello.o' contains the machine instructions for the "Hello World" program, with an undefined reference to `printf'.  File: gccintro.info, Node: The linker, Prev: The assembler, Up: How the compiler works The linker ========== The final stage of compilation is the linking of object files to create an executable. In practice, an executable requires many external functions from system and C run-time (`crt') libraries. Consequently, the actual link commands used internally by GCC are complicated. For example, the full command for linking the "Hello World" program is: $ ld -dynamic-linker /lib/ld-linux.so.2 /usr/lib/crt1.o /usr/lib/crti.o /usr/lib/gcc-lib/i686/3.3.1/crtbegin.o -L/usr/lib/gcc-lib/i686/3.3.1 hello.o -lgcc -lgcc_eh -lc -lgcc -lgcc_eh /usr/lib/gcc-lib/i686/3.3.1/crtend.o /usr/lib/crtn.o Fortunately there is never any need to type the command above directly--the entire linking process is handled transparently by `gcc' when invoked as follows: $ gcc hello.o This links the object file `hello.o' to the C standard library, and produces an executable file `a.out': $ ./a.out Hello, world! An object file for a C++ program can be linked to the C++ standard library in the same way with a single `g++' command.  
File: gccintro.info, Node: Examining compiled files, Next: Getting help, Prev: How the compiler works, Up: Top Examining compiled files ************************ This chapter describes several useful tools for examining the contents of executable files and object files. * Menu: * Identifying files:: * Examining the symbol table:: * Finding dynamically linked libraries::  File: gccintro.info, Node: Identifying files, Next: Examining the symbol table, Up: Examining compiled files Identifying files ================= When a source file has been compiled to an object file or executable the options used to compile it are no longer obvious. The `file' command looks at the contents of an object file or executable and determines some of its characteristics, such as whether it was compiled with dynamic or static linking. For example, here is the result of the `file' command for a typical executable: $ file a.out a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), not stripped The output shows that the executable file is dynamically linked, and compiled for the Intel 386 and compatible processors. A full explanation of the output is shown below: `ELF' The internal format of the executable file (ELF stands for "Executable and Linking Format", other formats such as COFF "Common Object File Format" are used on some older operating systems (e.g. MS-DOS)). `32-bit' The word size (for some platforms this would be 64-bit). `LSB' Compiled for a platform with "least significant byte" first word-ordering, such as Intel and AMD x86 processors (the alternative MSB "most significant byte" first is used by other processors, such as the Motorola 680x0)(1). Some processors such as Itanium and MIPS support both LSB and MSB orderings. `Intel 80386' The processor the executable file was compiled for. `version 1 (SYSV)' This is the version of the internal format of the file. 
`dynamically linked' The executable uses shared libraries (`statically linked' indicates programs linked statically, for example using the `-static' option) `not stripped' The executable contains a symbol table (this can be removed with the `strip' command). The `file' command can also be used on object files, where it gives similar output. The POSIX standard(2) for Unix systems defines the behavior of the `file' command. ---------- Footnotes ---------- (1) The MSB and LSB orderings are also known as big-endian and little-endian respectively (the terms originate from Jonathan Swift's satire "Gulliver's Travels", 1727). (2) POSIX.1 (2003 edition), IEEE Std 1003.1-2003.  File: gccintro.info, Node: Examining the symbol table, Next: Finding dynamically linked libraries, Prev: Identifying files, Up: Examining compiled files Examining the symbol table ========================== As described earlier in the discussion of debugging, executables and object files can contain a symbol table (*note Compiling for debugging::). This table stores the location of functions and variables by name, and can be displayed with the `nm' command: $ nm a.out 08048334 t Letext 08049498 ? _DYNAMIC 08049570 ? _GLOBAL_OFFSET_TABLE_ ........ 080483f0 T main 08049590 b object.11 0804948c d p.3 U printf@GLIBC_2.0 Among the contents of the symbol table, the output shows that the start of the `main' function has the hexadecimal offset `080483f0'. Most of the symbols are for internal use by the compiler and operating system. A `T' in the second column indicates a function that is defined in the object file, while a `U' indicates a function which is undefined (and should be resolved by linking against another object file). A complete explanation of the output of `nm' can be found in the `GNU Binutils' manual. The most common use of the `nm' command is to check whether a library contains the definition of a specific function, by looking for a `T' entry in the second column against the function name.  
File: gccintro.info, Node: Finding dynamically linked libraries, Prev: Examining the symbol table, Up: Examining compiled files Finding dynamically linked libraries ==================================== When a program has been compiled using shared libraries it needs to load those libraries dynamically at run-time in order to call external functions. The command `ldd' examines an executable and displays a list of the shared libraries that it needs. These libraries are referred to as the shared library "dependencies" of the executable. For example, the following commands demonstrate how to find the shared library dependencies of the "Hello World" program: $ gcc -Wall hello.c $ ldd a.out libc.so.6 => /lib/libc.so.6 (0x40020000) /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000) The output shows that the "Hello World" program depends on the C library `libc' (shared library version 6) and the dynamic loader library `ld-linux' (shared library version 2). If the program uses external libraries, such as the math library, these are also displayed. For example, the `calc' program (which uses the `sqrt' function) generates the following output: $ gcc -Wall calc.c -lm -o calc $ ldd calc libm.so.6 => /lib/libm.so.6 (0x40020000) libc.so.6 => /lib/libc.so.6 (0x40041000) /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000) The first line shows that this program depends on the math library `libm' (shared library version 6), in addition to the C library and dynamic loader library. The `ldd' command can also be used to examine shared libraries themselves, in order to follow a chain of shared library dependencies.  File: gccintro.info, Node: Getting help, Next: Further reading, Prev: Examining compiled files, Up: Top Getting help ************ If you encounter a problem not covered by this manual, there are several reference manuals which describe GCC and language-related topics in more detail (*note Further reading::). 
These manuals contain answers to common questions, and careful study of them will usually yield a solution. If the manuals are unclear, the most appropriate way to obtain help is to ask a knowledgeable colleague for assistance. Alternatively, there are many companies and consultants who offer commercial support for programming matters related to GCC on an hourly or ongoing basis. For businesses this can be a cost-effective way to obtain high-quality support. A directory of free software support companies and their current rates can be found on the GNU Project website.(1) With free software, commercial support is available in a free market--service companies compete in quality and price, and users are not tied to any particular one. In contrast, support for proprietary software is usually only available from the original vendor. A higher level of commercial support for GCC is available from companies involved in the development of the GNU compiler toolchain itself. A listing of these companies can be found in the "Development Companies" section of the publisher's webpage for this book.(2) These companies can provide services such as extending GCC to generate code for new CPUs or fixing bugs in the compiler. ---------- Footnotes ---------- (1) `http://www.gnu.org/prep/service.html' (2) `http://www.network-theory.co.uk/gcc/intro/'
If you are new to programming with GCC you will also want to learn how to use the GNU Debugger GDB, and how to compile large programs easily with GNU Make. These tools are described in the following manuals: `Debugging with GDB: The GNU Source-Level Debugger' by Richard M. Stallman, Roland Pesch, Stan Shebs, et al. (Published by GNU Press, ISBN 1-882114-88-4) `GNU Make: A Program for Directing Recompilation' by Richard M. Stallman and Roland McGrath (Published by GNU Press, ISBN 1-882114-82-5) For effective C programming it is also essential to have a good knowledge of the C standard library. The following manual documents all the functions in the GNU C Library: `The GNU C Library Reference Manual' by Sandra Loosemore with Richard M. Stallman, et al (2 vols) (Published by GNU Press, ISBN 1-882114-22-1 and 1-882114-24-8) Be sure to check the website `http://www.gnupress.org/' for the latest printed editions of manuals published by GNU Press. The manuals can be purchased online using a credit card at the FSF website(1) in addition to being available for order through most bookstores using the ISBNs. Manuals published by GNU Press raise funds for the Free Software Foundation and the GNU Project. Information about shell commands, environment variables and shell-quoting rules can be found in the following book: `The GNU Bash Reference Manual' by Chet Ramey and Brian Fox (Published by Network Theory Ltd, ISBN 0-9541617-7-7) Other GNU Manuals mentioned in this book (such as `GNU gprof--The GNU Profiler' and `The GNU Binutils Manual') were not available in print at the time this book went to press. Links to online copies can be found at the publisher's webpage for this book.(2) The official GNU Project webpages for GCC can be found at `http://www.gnu.org/software/gcc/'. These include a list of frequently asked questions, as well as the GCC bug tracking database and a lot of other useful information about GCC. There are many books about the C and C++ languages themselves. 
Two of the standard references are: `The C Programming Language' (ANSI edition) Brian W. Kernighan, Dennis Ritchie (ISBN 0-13110362-8) `The C++ Programming Language' (3rd edition) Bjarne Stroustrup (ISBN 0-20188954-4) Anyone using the C and C++ languages in a professional context should obtain a copy of the official language standards. The official C standard number is ISO/IEC 9899:1990, for the original C standard published in 1990 and implemented by GCC. A revised C standard ISO/IEC 9899:1999 (known as C99) was published in 1999, and this is mostly (but not yet fully) supported by GCC. The C++ standard is ISO/IEC 14882. The IEEE floating-point arithmetic standard (IEEE-754) is also important for any programs involving numerical computations. These standards documents are available commercially from the relevant standards bodies. The C and C++ standards are also available as printed books: `The C Standard: Incorporating Technical Corrigendum 1' (Published by Wiley, ISBN 0-470-84573-2) `The C++ Standard' (Published by Wiley, ISBN 0-470-84674-7) For ongoing learning, anyone using GCC might consider joining the Association of C and C++ Users (ACCU). The ACCU is a non-profit organization devoted to professionalism in programming at all levels, and is recognized as an authority in the field. More information is available from the ACCU website `http://www.accu.org/'. The ACCU publishes two journals about programming in C and C++ and organizes regular conferences. The annual membership fee represents a good investment for individuals, or for companies who want to encourage a higher standard of professional development among their staff. 
---------- Footnotes ---------- (1) `http://order.fsf.org/' (2) `http://www.network-theory.co.uk/gcc/intro/'  File: gccintro.info, Node: Acknowledgements, Next: Index, Prev: Further reading, Up: Top Acknowledgements **************** Many people have contributed to this book, and it is important to record their names here: Thanks to Gerald Pfeifer, for his careful reviewing and numerous suggestions for improving the book. Thanks to Andreas Jaeger, for information on AMD64 and multi-architecture support, and many helpful comments. Thanks to David Edelsohn, for information on the POWER/PowerPC series of processors. Thanks to Jamie Lokier, for research. Thanks to Stephen Compall, for helpful corrections. Thanks to Gerard Jungman, for useful comments. Thanks to Steven Rubin, for generating the chip layout for the cover with Electric. And most importantly, thanks to Richard Stallman, founder of the GNU Project, for writing GCC and making it free software.  File: gccintro.info, Node: Index, Prev: Acknowledgements, Up: Top Index ***** * Menu: * #define, preprocessor directive: Defining macros. * #if, preprocessor directive: Warning options in -Wall. * #ifdef, preprocessor directive: Defining macros. * #include, preprocessor directive: Compiling multiple source files. * $, shell prompt: Conventions used in this manual. * --help option, display command-line options: Help for command-line options. * --version option, display version number: Version numbers. * -ansi option, disable language extensions: C language standards. * -ansi option, used with g++: Compiling a simple C++ program. * -c option, compile to object file: Creating object files from source files. * -D option, define macro: Defining macros. * -dM option, list predefined macros: Defining macros. * -E option, preprocess source files: Preprocessing source files. * -fno-implicit-templates option, disable implicit instantiation: Explicit template instantiation. 
* -fprofile-arcs option, instrument branches: Coverage testing with gcov.
* -ftest-coverage option, record coverage: Coverage testing with gcov.
* -funroll-loops option, optimization by loop unrolling: Optimization levels.
* -g option, enable debugging: Compiling for debugging.
* -I option, include path: Setting search paths.
* -L option, library search path: Setting search paths.
* -l option, linking with libraries: Linking with external libraries.
* -lm option, link with math library: Linking with external libraries.
* -m option, platform-specific settings: Platform-specific options.
* -m32 and -m64 options, compile for 32 or 64-bit environment: SPARC options.
* -maltivec option, enables use of Altivec processor on PowerPC: POWER/PowerPC options.
* -march option, compile for specific CPU: Intel and AMD x86 options.
* -mcmodel option, for AMD64: Intel and AMD x86 options.
* -mcpu option, compile for specific CPU: SPARC options.
* -mieee option, floating-point support on DEC Alpha: DEC Alpha options.
* -mminimal-toc option, on AIX: POWER/PowerPC options.
* -mno-fused-madd option, on PowerPC: POWER/PowerPC options.
* -mxl-call option, compatibility with IBM XL compilers on AIX: POWER/PowerPC options.
* -o option, set output filename: Compiling a simple C program.
* -O0 option, optimization level zero: Optimization levels.
* -O1 option, optimization level one: Optimization levels.
* -O2 option, optimization level two: Optimization levels.
* -O3 option, optimization level three: Optimization levels.
* -Os option, optimization for size: Optimization levels.
* -pedantic option, conform to the ANSI standard (with -ansi): C language standards.
* -pg option, enable profiling: Using the profiler gprof.
* -pthread option, on AIX: POWER/PowerPC options.
* -rpath option, set run-time shared library search path: Shared libraries and static libraries.
* -S option, create assembly code: The compiler.
* -save-temps option, keeps intermediate files: Preprocessing source files.
* -static option, force static linking: Shared libraries and static libraries.
* -std option, select specific language standard <1>: Selecting specific standards.
* -std option, select specific language standard: C language standards.
* -v option, verbose compilation: Help for command-line options.
* -W option, enable additional warnings: Additional warning options.
* -Wall option, enable common warnings: Compiling a simple C program.
* -Wcast-qual option, warn about casts removing qualifiers: Additional warning options.
* -Wcomment option, warn about nested comments: Warning options in -Wall.
* -Wconversion option, warn about type conversions: Additional warning options.
* -Werror option, convert warnings to errors: Additional warning options.
* -Wformat option, warn about incorrect format strings: Warning options in -Wall.
* -Wimplicit option, warn about missing declarations: Warning options in -Wall.
* -Wreturn-type option, warn about incorrect return types: Warning options in -Wall.
* -Wshadow option, warn about shadowed variables: Additional warning options.
* -Wtraditional option, warn about traditional C: Additional warning options.
* -Wuninitialized option, warn about uninitialized variables: Optimization and compiler warnings.
* -Wunused option, unused variable warning: Warning options in -Wall.
* -Wwrite-strings option, warning for modified string constants: Additional warning options.
* .a, archive file extension: Linking with external libraries.
* .c, C source file extension: Compiling a simple C program.
* .cc, C++ file extension: Compiling a simple C++ program.
* .cpp, C++ file extension: Compiling a simple C++ program.
* .cxx, C++ file extension: Compiling a simple C++ program.
* .h, header file extension: Compiling multiple source files.
* .i, preprocessed file extension for C: The preprocessor.
* .ii, preprocessed file extension for C++: The preprocessor.
* .o, object file extension: Compiling files independently.
* .s, assembly file extension: The compiler.
* .so, shared object file extension: Shared libraries and static libraries.
* /tmp directory, temporary files: Linking with external libraries.
* 64-bit platforms, additional library directories: Setting search paths.
* 64-bit processor specific options, AMD64 and Intel: Intel and AMD x86 options.
* __gxx_personality_v0, undefined reference error: Compiling a simple C++ program.
* _GNU_SOURCE macro, enables extensions to GNU C Library: ANSI/ISO.
* a, archive file extension: Linking with external libraries.
* a.out, default executable filename: Compiling a simple C program.
* ACCU, Association of C and C++ users: Further reading.
* ADA, gnat compiler: A brief history of GCC.
* additional warning options: Additional warning options.
* AIX, compatibility with IBM XL compilers: POWER/PowerPC options.
* AIX, platform-specific options: POWER/PowerPC options.
* AIX, TOC overflow error: POWER/PowerPC options.
* Alpha, platform-specific options: DEC Alpha options.
* Altivec, on PowerPC: POWER/PowerPC options.
* AMD x86, platform-specific options: Intel and AMD x86 options.
* AMD64, 64-bit processor specific options: Intel and AMD x86 options.
* ansi option, disable language extensions: C language standards.
* ansi option, used with g++: Compiling a simple C++ program.
* ANSI standards for C/C++ languages, available as books: Further reading.
* ANSI/ISO C, compared with GNU C extensions: C language standards.
* ANSI/ISO C, controlled with -ansi option: ANSI/ISO.
* ANSI/ISO C, pedantic diagnostics option: Strict ANSI/ISO.
* ar, GNU archiver <1>: Creating a library with the GNU archiver.
* ar, GNU archiver: Linking with external libraries.
* archive file, .a extension: Linking with external libraries.
* archive file, explanation of: Linking with external libraries.
* archiver, ar: How the compiler works.
* ARM, multi-architecture support: Multi-architecture support.
* arrays, variable-size in GNU C: Strict ANSI/ISO.
* asm, GNU C extension keyword: ANSI/ISO.
* assembler, as: How the compiler works.
* assembler, converting assembly language to machine code: The assembler.
* Association of C and C++ users (ACCU): Further reading.
* Athlon, platform-specific options: Intel and AMD x86 options.
* backtrace, debugger command: Displaying a backtrace.
* backtrace, displaying: Displaying a backtrace.
* bash profile file, login settings <1>: Examining core files.
* bash profile file, login settings <2>: Shared libraries and static libraries.
* bash profile file, login settings: Environment variables.
* benchmarking, with time command: Optimization examples.
* big-endian, word-ordering: Identifying files.
* binary file, also called executable file: Compiling a C program.
* Binutils, GNU Binary Tools: Examining the symbol table.
* bits, 32 vs 64 on UltraSPARC: SPARC options.
* books, further reading: Further reading.
* branches, instrumenting for coverage testing: Coverage testing with gcov.
* BSD extensions, GNU C Library: ANSI/ISO.
* buffer, template example: Providing your own templates.
* bug, example of <1>: Examining core files.
* bug, example of <2>: Using library header files.
* bug, example of: Finding errors in a simple program.
* C language, dialects of: C language standards.
* C language, further reading: Further reading.
* C library, standard: Linking with external libraries.
* C math library: Linking with external libraries.
* c option, compile to object file: Creating object files from source files.
* C programs, recompiling after modification: Recompiling and relinking.
* C source file, .c extension: Compiling a simple C program.
* C standard library: Linking with external libraries.
* C++, compiling a simple program with g++: Compiling a simple C++ program.
* C++, creating libraries with explicit instantiation: Explicit template instantiation.
* C++, file extensions: Compiling a simple C++ program.
* C++, g++ as a true compiler: Compiling a C++ program.
* C++, g++ compiler: A brief history of GCC.
* C++, instantiation of templates: Providing your own templates.
* C++, namespace std: Using the C++ standard library.
* C++, standard library: Using the C++ standard library.
* C++, standard library libstdc++: Using C++ standard library templates.
* C++, standard library templates: Using C++ standard library templates.
* C++, templates: Templates.
* c, C source file extension: Compiling a simple C program.
* C, compiling with gcc: Compiling a simple C program.
* C, gcc compiler: A brief history of GCC.
* C/C++ languages, standards in printed form: Further reading.
* C/C++, risks of using: Programming in C and C++.
* C/C++, risks of using, example <1>: Optimization and compiler warnings.
* C/C++, risks of using, example <2>: Using library header files.
* C/C++, risks of using, example: Finding errors in a simple program.
* c89/c99, selected with -std: Selecting specific standards.
* C_INCLUDE_PATH: Environment variables.
* cannot find -lLIBRARY error, example of: Search path example.
* cannot find LIBRARY, linker error: Setting search paths.
* cannot open shared object file error: Shared libraries and static libraries.
* casts, used to avoid conversion warnings: Additional warning options.
* cc, C++ file extension: Compiling a simple C++ program.
* circular buffer, template example: Providing your own templates.
* COFF format: Identifying files.
* Collatz sequence: Using the profiler gprof.
* combined multiply and add instruction: POWER/PowerPC options.
* command-line help option: Troubleshooting.
* comment warning option, warn about nested comments: Warning options in -Wall.
* comments, nested: Warning options in -Wall.
* common errors, not included with -Wall: Additional warning options.
* common subexpression elimination, optimization: Source-level optimization.
* comparison of ... expression always true/false warning, example of: Additional warning options.
* compilation, for debugging: Compiling for debugging.
* compilation, internal stages of: How the compiler works.
* compilation, model for templates: Providing your own templates.
* compilation, options: Compilation options.
* compilation, stopping on warning: Additional warning options.
* compile to object file, -c option: Creating object files from source files.
* compiled files, examining: Examining compiled files.
* compiler, converting source code to assembly code: The compiler.
* compiler, how it works internally: How the compiler works.
* compiler-related tools: Compiler-related tools.
* compiling C programs with gcc: Compiling a C program.
* compiling C++ programs with g++: Compiling a simple C++ program.
* compiling files independently: Compiling files independently.
* compiling multiple files: Compiling multiple source files.
* compiling with optimization: Compiling with optimization.
* configuration files for GCC: Version numbers.
* const, warning about overriding by casts: Additional warning options.
* constant strings, compile-time warnings: Additional warning options.
* consultants, providing commercial support: Getting help.
* conventions, used in manual: Conventions used in this manual.
* conversions between types, warning of: Additional warning options.
* core file, debugging with gdb: Examining core files.
* core file, examining from program crash: Examining core files.
* core file, not produced: Examining core files.
* coverage testing, with gcov: Coverage testing with gcov.
* CPLUS_INCLUDE_PATH: Environment variables.
* cpp, C preprocessor: Using the preprocessor.
* cpp, C++ file extension: Compiling a simple C++ program.
* cr option, create/replace archive files: Creating a library with the GNU archiver.
* crashes, saved in core file: Examining core files.
* creating executable files from object files: Creating executables from object files.
* creating object files from source files: Creating object files from source files.
* cxx, C++ file extension: Compiling a simple C++ program.
* D option, define macro: Defining macros.
* data-flow analysis: Optimization and compiler warnings.
* DBM file, created with gdbm: Search path example.
* debugging, compilation flags: Compiling for debugging.
* debugging, with gdb: Compiling for debugging.
* debugging, with optimization: Optimization and debugging.
* DEC Alpha, platform-specific options: DEC Alpha options.
* declaration, in header file: Compiling multiple source files.
* declaration, missing: Using library header files.
* default directories, linking and header files: Setting search paths.
* default executable filename, a.out: Compiling a simple C program.
* default value, of macro defined with -D: Macros with values.
* defining macros: Defining macros.
* denormalized numbers, on DEC Alpha: DEC Alpha options.
* dependencies, of shared libraries: Finding dynamically linked libraries.
* deployment, options for <1>: Optimization and debugging.
* deployment, options for <2>: Optimization levels.
* deployment, options for: Examining core files.
* dereferencing, null pointer: Examining core files.
* dialects of C language: C language standards.
* different type arg, format warning: Finding errors in a simple program.
* disk space, reduced usage by shared libraries: Shared libraries and static libraries.
* displaying a backtrace: Displaying a backtrace.
* division by zero: DEC Alpha options.
* DLL (dynamically linked library), see shared libraries: Shared libraries and static libraries.
* dM option, list predefined macros: Defining macros.
* dollar sign $, shell prompt: Conventions used in this manual.
* dynamic loader: Shared libraries and static libraries.
* dynamically linked libraries, examining with ldd: Finding dynamically linked libraries.
* dynamically linked library, see shared libraries: Shared libraries and static libraries.
* E option, preprocess source files: Preprocessing source files.
* EGCS (Experimental GNU Compiler Suite): A brief history of GCC.
* ELF format: Identifying files.
* elimination, of common subexpressions: Source-level optimization.
* embedded systems, cross-compilation for: Major features of GCC.
* empty macro, compared with undefined macro: Macros with values.
* empty return, incorrect use of: Warning options in -Wall.
* enable profiling, -pg option: Using the profiler gprof.
* endianness, word-ordering: Identifying files.
* enhancements, to GCC: Getting help.
* environment variables <1>: Shared libraries and static libraries.
* environment variables: Conventions used in this manual.
* environment variables, extending an existing path: Shared libraries and static libraries.
* environment variables, for default search paths: Environment variables.
* environment variables, setting permanently: Shared libraries and static libraries.
* error, undefined reference due to library link order: Link order of libraries.
* error, undefined reference due to order of object files: Link order of object files.
* error, while loading shared libraries: Shared libraries and static libraries.
* examining compiled files: Examining compiled files.
* examining core files: Examining core files.
* examples, conventions used in: Conventions used in this manual.
* examples, of optimization: Optimization examples.
* executable file: Compiling a C program.
* executable, creating from object files by linking: Creating executables from object files.
* executable, default filename a.out: Compiling a simple C program.
* executable, examining with file command: Identifying files.
* executable, running: Compiling a simple C program.
* executable, symbol table stored in: Compiling for debugging.
* explicit instantiation of templates: Explicit template instantiation.
* export keyword, not supported in GCC: The export keyword.
* extended search paths, for include and link directories: Extended search paths.
* extension, .a archive file: Linking with external libraries.
* extension, .c source file: Compiling a simple C program.
* extension, .C, C++ file: Compiling a simple C++ program.
* extension, .cc, C++ file: Compiling a simple C++ program.
* extension, .cpp, C++ file: Compiling a simple C++ program.
* extension, .cxx, C++ file: Compiling a simple C++ program.
* extension, .h header file: Compiling multiple source files.
* extension, .i preprocessed file: The preprocessor.
* extension, .ii preprocessed file: The preprocessor.
* extension, .o object file: Compiling files independently.
* extension, .s assembly file: The compiler.
* extension, .so shared object file: Shared libraries and static libraries.
* external libraries, linking with: Linking with external libraries.
* feature test macros, GNU C Library: ANSI/ISO.
* features, of GCC: Major features of GCC.
* file command, for identifying files: Identifying files.
* file extension, .a archive file: Linking with external libraries.
* file extension, .c source file: Compiling a simple C program.
* file extension, .C, C++ file: Compiling a simple C++ program.
* file extension, .cc, C++ file: Compiling a simple C++ program.
* file extension, .cpp, C++ file: Compiling a simple C++ program.
* file extension, .cxx, C++ file: Compiling a simple C++ program.
* file extension, .h header file: Compiling multiple source files.
* file extension, .i preprocessed file: The preprocessor.
* file extension, .ii preprocessed file: The preprocessor.
* file extension, .o object file: Compiling files independently.
* file extension, .s assembly file: The compiler.
* file extension, .so shared object file: Shared libraries and static libraries.
* Floating point exception, on DEC Alpha: DEC Alpha options.
* fno-implicit-templates option, disable implicit instantiation: Explicit template instantiation.
* format strings, incorrect usage warning: Warning options in -Wall.
* format, different type arg warning: Finding errors in a simple program.
* Fortran, g77 compiler: A brief history of GCC.
* fprofile-arcs option, instrument branches: Coverage testing with gcov.
* Free Software Foundation (FSF): A brief history of GCC.
* ftest-coverage option, record coverage: Coverage testing with gcov.
* function inlining, example of optimization: Source-level optimization.
* function-call overhead: Source-level optimization.
* funroll-loops option, optimization by loop unrolling: Optimization levels.
* fused multiply and add instruction: POWER/PowerPC options.
* g option, enable debugging: Compiling for debugging.
* g++, compiling C++ programs: Compiling a simple C++ program.
* g++, GNU C++ Compiler: A brief history of GCC.
* g77, Fortran compiler: A brief history of GCC.
* gcc, GNU C Compiler: A brief history of GCC.
* gcc, simple example: Compiling a simple C program.
* gcc, used inconsistently with g++: Compiling a simple C++ program.
* gcj, GNU Compiler for Java: A brief history of GCC.
* gcov, GNU coverage testing tool: Coverage testing with gcov.
* gdb, debugging core file with: Examining core files.
* gdb, GNU debugger: Compiling for debugging.
* gdbm, GNU DBM library: Search path example.
* generic programming, in C++: Templates.
* getting help: Getting help.
* gmon.out, data file for gprof: Using the profiler gprof.
* gnat, GNU ADA compiler: A brief history of GCC.
* GNU archiver, ar: Linking with external libraries.
* GNU C extensions, compared with ANSI/ISO C: C language standards.
* GNU C Library Reference Manual: Further reading.
* GNU C Library, feature test macros: ANSI/ISO.
* GNU Compilers, major features: Major features of GCC.
* GNU Compilers, Reference Manual: Further reading.
* GNU GDB Manual: Further reading.
* GNU Make Manual: Further reading.
* GNU Press, manuals: Further reading.
* GNU Project, history of: A brief history of GCC.
* gnu89/gnu99, selected with -std: Selecting specific standards.
* GNU_SOURCE macro (_GNU_SOURCE), enables extensions to GNU C Library: ANSI/ISO.
* gprof, GNU Profiler: Using the profiler gprof.
* gradual underflow, on DEC Alpha: DEC Alpha options.
* gxx_personality_v0, undefined reference error: Compiling a simple C++ program.
* h, header file extension: Compiling multiple source files.
* header file, .h extension: Compiling multiple source files.
* header file, declarations in: Compiling multiple source files.
* header file, default directories: Setting search paths.
* header file, include path--extending with -I: Setting search paths.
* header file, missing: Using library header files.
* header file, missing header causes implicit declaration: Using library header files.
* header file, not compiled: Creating object files from source files.
* header file, not found--compilation error no such file or directory: Setting search paths.
* header file, with include guards: Providing your own templates.
* header file, without .h extension for C++: Using the C++ standard library.
* Hello World program, in C: Compiling a simple C program.
* Hello World program, in C++: Compiling a simple C++ program.
* help option, display command-line options: Help for command-line options.
* help options: Troubleshooting.
* history, of GCC: A brief history of GCC.
* I option, include path: Setting search paths.
* i, preprocessed file extension for C: The preprocessor.
* IBM XL compilers, compatibility on AIX: POWER/PowerPC options.
* identifying files, with file command: Identifying files.
* IEEE arithmetic standard, printed form: Further reading.
* IEEE options, on DEC Alpha: DEC Alpha options.
* ii, preprocessed file extension for C++: The preprocessor.
* implicit declaration of function warning, due to missing header file: Using library header files.
* implicit declaration warning: Warning options in -Wall.
* include guards, in header file: Providing your own templates.
* include path, extending with -I: Setting search paths.
* include path, setting with environment variables: Environment variables.
* inclusion compilation model, in C++: Providing your own templates.
* independent compilation of files: Compiling files independently.
* Inf, infinity, on DEC Alpha: DEC Alpha options.
* inlining, example of optimization: Source-level optimization.
* instantiation, explicit vs implicit in C++: Explicit template instantiation.
* instantiation, of templates in C++: Providing your own templates.
* instruction scheduling, optimization: Scheduling.
* instrumented executable, for coverage testing: Coverage testing with gcov.
* instrumented executable, for profiling: Using the profiler gprof.
* Intel x86, platform-specific options: Intel and AMD x86 options.
* intermediate files, keeping: Preprocessing source files.
* ISO C++, controlled with -ansi option: Compiling a simple C++ program.
* ISO C, compared with GNU C extensions: C language standards.
* ISO C, controlled with -ansi option: ANSI/ISO.
* ISO standards for C/C++ languages, available as books: Further reading.
* iso9899:1990/iso9899:1999, selected with -std: Selecting specific standards.
* Itanium, multi-architecture support: Multi-architecture support.
* Java, compared with C/C++: Programming in C and C++.
* Java, gcj compiler: A brief history of GCC.
* journals, about C and C++ programming: Further reading.
* K&R dialect of C, warnings of different behavior: Additional warning options.
* kernel mode, on AMD64: Intel and AMD x86 options.
* Kernighan and Ritchie, `The C Programming Language': Further reading.
* key-value pairs, stored with GDBM: Search path example.
* keywords, additional in GNU C: ANSI/ISO.
* L option, library search path: Setting search paths.
* l option, linking with libraries: Linking with external libraries.
* language standards, selecting with -std: Selecting specific standards.
* ld.so.conf, loader configuration file: Shared libraries and static libraries.
* ld: cannot find library error: Setting search paths.
* LD_LIBRARY_PATH, shared library load path: Shared libraries and static libraries.
* ldd, dynamic loader: Finding dynamically linked libraries.
* levels of optimization: Optimization levels.
* libraries, creating with ar: Creating a library with the GNU archiver.
* libraries, creating with explicit instantiation in C++: Explicit template instantiation.
* libraries, error while loading shared library: Shared libraries and static libraries.
* libraries, extending search path with -L: Setting search paths.
* libraries, finding shared library dependencies: Finding dynamically linked libraries.
* libraries, link error due to undefined reference: Linking with external libraries.
* libraries, link order: Link order of libraries.
* libraries, linking with: Linking with external libraries.
* libraries, linking with using -l: Linking with external libraries.
* libraries, on 64-bit platforms: Setting search paths.
* libraries, stored in archive files: Linking with external libraries.
* library header files, using: Using library header files.
* library, C math library: Linking with external libraries.
* library, C standard library: Linking with external libraries.
* library, C++ standard library: Using the C++ standard library.
* libstdc++, C++ standard library: Using C++ standard library templates.
* line numbers, recorded in preprocessed files: Preprocessing source files.
* link error, cannot find library: Setting search paths.
* link order, from left to right <1>: Link order of libraries.
* link order, from left to right: Link order of object files.
* link order, of libraries: Link order of libraries.
* link order, of object files: Link order of object files.
* link path, setting with environment variable: Environment variables.
* linker error, cannot find LIBRARY: Setting search paths.
* linker, GNU compared with other linkers: Providing your own templates.
* linker, initial description: Creating executables from object files.
* linker, ld <1>: The linker.
* linker, ld: How the compiler works.
* linking, creating executable files from object files: Creating executables from object files.
* linking, default directories: Setting search paths.
* linking, dynamic (shared libraries): Shared libraries and static libraries.
* linking, explanation of: Compiling files independently.
* linking, undefined reference error due to library link order: Link order of libraries.
* linking, updated object files: Recompiling and relinking.
* linking, with external libraries: Linking with external libraries.
* linking, with library using -l: Linking with external libraries.
* Lisp, compared with C/C++: Programming in C and C++.
* little-endian, word-ordering: Identifying files.
* loader configuration file, ld.so.conf: Shared libraries and static libraries.
* loader function: Shared libraries and static libraries.
* login file, setting environment variables in: Shared libraries and static libraries.
* loop unrolling, optimization <1>: Optimization levels.
* loop unrolling, optimization: Speed-space tradeoffs.
* LSB, least significant byte: Identifying files.
* m option, platform-specific settings: Platform-specific options.
* m32 and m64 options, compile for 32 or 64-bit environment: SPARC options.
* machine code: Compiling a C program.
* machine-specific options: Platform-specific options.
* macros, default value of: Macros with values.
* macros, defined with value: Macros with values.
* macros, defining in preprocessor: Defining macros.
* macros, predefined: Defining macros.
* major features, of GCC: Major features of GCC.
* major version number, of GCC: Version numbers.
* maltivec option, enables use of Altivec processor on PowerPC: POWER/PowerPC options.
* manuals, for GNU software: Further reading.
* march option, compile for specific CPU: Intel and AMD x86 options.
* math library: Linking with external libraries.
* math library, linking with -lm: Linking with external libraries.
* mcmodel option, for AMD64: Intel and AMD x86 options.
* mcpu option, compile for specific CPU: SPARC options.
* mieee option, floating-point support on DEC Alpha: DEC Alpha options.
* minor version number, of GCC: Version numbers.
* MIPS64, multi-architecture support: Multi-architecture support.
* missing header file, causes implicit declaration: Using library header files.
* missing header files: Using library header files.
* missing prototypes warning: Warning options in -Wall.
* mminimal-toc option, on AIX: POWER/PowerPC options.
* mno-fused-madd option, on PowerPC: POWER/PowerPC options.
* modified source files, recompiling: Recompiling and relinking.
* Motorola 680x0: Identifying files.
* MSB, most significant byte: Identifying files.
* multi-architecture support, discussion of: Multi-architecture support.
* multiple directories, on include and link paths: Extended search paths.
* multiple files, compiling: Compiling multiple source files.
* multiply and add instruction: POWER/PowerPC options.
* multiply defined symbol error, with C++: Providing your own templates.
* mxl-call option, compatibility with IBM XL compilers on AIX: POWER/PowerPC options.
* Namespace std in C++: Using the C++ standard library.
* namespace, reserved prefix for preprocessor: Defining macros.
* NaN, not a number, on DEC Alpha: DEC Alpha options.
* nested comments, warning of: Warning options in -Wall.
* nm command: Examining the symbol table.
* No such file or directory, header file not found <1>: Search path example.
* No such file or directory, header file not found: Setting search paths.
* null pointer, attempt to dereference: Examining core files.
* O option, optimization level: Optimization levels.
* o option, set output filename: Compiling a simple C program.
* o, object file extension: Compiling files independently.
* object file, .o extension: Compiling files independently.
* object file, creating from source using option -c: Creating object files from source files.
* object file, examining with file command: Identifying files.
* object file, explanation of: Compiling files independently.
* object files, link order: Link order of object files.
* object files, linking to create executable file: Creating executables from object files.
* object files, relinking: Recompiling and relinking.
* object files, temporary: Linking with external libraries.
* Objective-C: A brief history of GCC.
* optimization for size, -Os: Optimization levels.
* optimization, and compiler warnings: Optimization and compiler warnings.
* optimization, common subexpression elimination: Source-level optimization.
* optimization, compiling with -O: Optimization levels.
* optimization, example of: Optimization examples.
* optimization, explanation of: Compiling with optimization.
* optimization, levels of: Optimization levels.
* optimization, loop unrolling <1>: Optimization levels.
* optimization, loop unrolling: Speed-space tradeoffs.
* optimization, speed-space tradeoffs: Speed-space tradeoffs.
* optimization, with debugging: Optimization and debugging.
* options, compilation: Compilation options.
* options, platform-specific: Platform-specific options.
* order, of object files in linking: Link order of object files.
* ordering of libraries: Link order of libraries.
* output file option, -o: Compiling a simple C program.
* overflow error, for TOC on AIX: POWER/PowerPC options.
* overhead, from function call: Source-level optimization.
* parse error, due to language extensions: ANSI/ISO.
* patch level, of GCC: Version numbers.
* paths, extending an existing path in an environment variable: Shared libraries and static libraries.
* paths, search: Setting search paths.
* pedantic option, ANSI/ISO C: Strict ANSI/ISO.
* pedantic option, conform to the ANSI standard (with -ansi): C language standards.
* Pentium, platform-specific options: Intel and AMD x86 options.
* pg option, enable profiling: Using the profiler gprof.
* pipelining, explanation of: Scheduling.
* platform-specific options: Platform-specific options.
* POSIX extensions, GNU C Library: ANSI/ISO.
* PowerPC and POWER, platform-specific options: POWER/PowerPC options.
* PowerPC64, multi-architecture support: Multi-architecture support.
* precedence, when using preprocessor: Macros with values.
* predefined macros: Defining macros.
* preprocessed files, keeping: Preprocessing source files.
* preprocessing, source files: Preprocessing source files.
* preprocessor macros, default value of: Macros with values.
* preprocessor, cpp: How the compiler works.
* preprocessor, first stage of compilation: The preprocessor.
* preprocessor, using: Using the preprocessor.
* print debugger command: Examining core files.
* printf, example of error in format: Finding errors in a simple program.
* printf, incorrect usage warning: Warning options in -Wall.
* profile file, setting environment variables in: Shared libraries and static libraries.
* profiling, with gprof: Using the profiler gprof.
* program crashes, saved in core file: Examining core files.
* prototypes, missing: Warning options in -Wall.
* pthread option, on AIX: POWER/PowerPC options.
* qualifiers, warning about overriding by casts: Additional warning options.
* quotes, for defining empty macro: Macros with values.
* recompiling: Recompiling and relinking.
* recompiling modified source files: Recompiling and relinking.
* red-zone, on AMD64: Intel and AMD x86 options.
* reference books, C language: Further reading.
* reference, undefined due to missing library: Linking with external libraries.
* relinking: Recompiling and relinking.
* relinking, updated object files: Recompiling and relinking.
* return type, invalid: Warning options in -Wall.
* Richard Stallman, principal author of GCC: A brief history of GCC.
* risks, example of corrupted output: Finding errors in a simple program.
* risks, when using C/C++: Programming in C and C++.
* rpath option, set run-time shared library search path: Shared libraries and static libraries.
* run-time, measuring with time command: Optimization examples.
* running an executable file, C: Compiling a simple C program.
* running an executable file, C++: Compiling a simple C++ program.
* S option, create assembly code: The compiler.
* s, assembly file extension: The compiler.
* save-temps option, keeps intermediate files: Preprocessing source files.
* scanf, incorrect usage warning: Warning options in -Wall.
* scheduling, stage of optimization: Scheduling.
* Scheme, compared with C/C++: Programming in C and C++.
* search paths: Setting search paths.
* search paths, example: Search path example.
* search paths, extended: Extended search paths.
* segmentation fault, error message: Examining core files.
* selecting specific language standards, with -std: Selecting specific standards.
* shadowing of variables: Additional warning options.
* shared libraries: Shared libraries and static libraries.
* shared libraries, advantages of: Shared libraries and static libraries.
* shared libraries, dependencies: Finding dynamically linked libraries.
* shared libraries, error while loading: Shared libraries and static libraries.
* shared libraries, setting load path: Shared libraries and static libraries.
* shared object file, .so extension: Shared libraries and static libraries.
* shell prompt: Conventions used in this manual.
* shell quoting <1>: Further reading.
* shell quoting: Macros with values.
* shell variables <1>: Shared libraries and static libraries.
* shell variables <2>: Environment variables.
* shell variables: Conventions used in this manual.
* shell variables, setting permanently: Shared libraries and static libraries.
* signed integer, casting: Additional warning options.
* signed variable converted to unsigned, warning of: Additional warning options.
* simple C program, compiling: Compiling a simple C program.
* simple C++ program, compiling with g++: Compiling a simple C++ program.
* size, optimization for, -Os: Optimization levels.
* Smalltalk, compared with C/C++: Programming in C and C++.
* so, shared object file extension: Shared libraries and static libraries.
* soft underflow, on DEC Alpha: DEC Alpha options.
* source code: Compiling a C program.
* source files, recompiling: Recompiling and relinking.
* source-level optimization: Source-level optimization.
* space vs speed, tradeoff in optimization: Speed-space tradeoffs.
* SPARC, platform-specific options: SPARC options.
* Sparc64, multi-architecture support: Multi-architecture support.
* specs directory, compiler configuration files: Version numbers.
* speed-space tradeoffs, in optimization: Speed-space tradeoffs.
* sqrt, example of linking with: Linking with external libraries.
* stack backtrace, displaying: Displaying a backtrace.
* stages of compilation, used internally: How the compiler works.
* standard library, C: Linking with external libraries.
* standard library, C++: Using the C++ standard library.
* Standard Template Library (STL): Using C++ standard library templates.
* standards, C, C++ and IEEE arithmetic: Further reading.
* static libraries: Shared libraries and static libraries.
* static linking, forcing with -static: Shared libraries and static libraries.
* static option, force static linking: Shared libraries and static libraries.
* std namespace in C++: Using the C++ standard library.
* std option, select specific language standard <1>: Selecting specific standards.
* std option, select specific language standard: C language standards.
* strict ANSI/ISO C, -pedantic option: Strict ANSI/ISO.
* strip command: Identifying files.
* subexpression elimination, optimization: Source-level optimization.
* Sun SPARC, platform-specific options: SPARC options.
* support, commercial: Getting help.
* SVID extensions, GNU C Library: ANSI/ISO.
* symbol table: Compiling for debugging.
* symbol table, examining with nm: Examining the symbol table.
* system libraries: Linking with external libraries.
* system libraries, location of <1>: Multi-architecture support.
* system libraries, location of <2>: Setting search paths.
* system libraries, location of: Linking with external libraries.
* system-specific predefined macros: Defining macros.
* SYSV, System V executable format: Identifying files.
* t option, archive table of contents: Creating a library with the GNU archiver.
* table of contents, in ar archive: Creating a library with the GNU archiver.
* table of contents, overflow error on AIX: POWER/PowerPC options.
* tcsh, limit command: Examining core files.
* templates, explicit instantiation: Explicit template instantiation.
* templates, export keyword: The export keyword.
* templates, in C++: Templates.
* templates, in C++ standard library: Using C++ standard library templates.
* templates, inclusion compilation model: Providing your own templates.
* temporary files, keeping: Preprocessing source files.
* temporary files, written to /tmp: Linking with external libraries.
* termination, abnormal (core dumped): Examining core files.
* threads, on AIX: POWER/PowerPC options.
* Thumb, alternative code format on ARM: Multi-architecture support.
* time command, measuring run-time: Optimization examples.
* TOC overflow error, on AIX: POWER/PowerPC options.
* tools, compiler-related: Compiler-related tools.
* tradeoffs, between speed and space in optimization: Speed-space tradeoffs.
* Traditional C (K&R), warnings of different behavior: Additional warning options.
* translators, from C++ to C, compared with g++: Compiling a C++ program.
* troubleshooting options: Troubleshooting.
* type conversions, warning of: Additional warning options.
* typeof, GNU C extension keyword: ANSI/ISO.
* ulimit command: Examining core files.
* UltraSPARC, 32-bit mode vs 64-bit mode: SPARC options.
* undeclared identifier error for C library, when using -ansi option: ANSI/ISO.
* undefined macro, compared with empty macro: Macros with values.
* undefined reference error, __gxx_personality_v0: Compiling a simple C++ program.
* undefined reference error, due to library link order: Link order of libraries.
* undefined reference error, due to order of object files: Link order of object files.
* undefined reference to C++ function, due to linking with gcc: Compiling a simple C++ program.
* undefined reference, due to missing library: Linking with external libraries.
* underflow, on DEC Alpha: DEC Alpha options.
* uninitialized variable, warning of: Optimization and compiler warnings.
* unix, GNU C extension keyword: ANSI/ISO.
* unoptimized code (-O0): Optimization levels.
* unrolling, of loops (optimization) <1>: Optimization levels.
* unrolling, of loops (optimization): Speed-space tradeoffs.
* unsigned integer, casting: Additional warning options.
* unsigned variable converted to signed, warning of: Additional warning options.
* unused variable warning, -Wunused: Warning options in -Wall.
* updated object files, relinking: Recompiling and relinking.
* updated source files, recompiling: Recompiling and relinking.
* Using GCC (Reference Manual): Further reading.
* v option, verbose compilation: Help for command-line options.
* value, of macro: Macros with values.
* variable shadowing: Additional warning options.
* variable, warning of uninitialized use: Optimization and compiler warnings.
* variable-size array, forbidden in ANSI/ISO C: Strict ANSI/ISO.
* variable-size arrays in GNU C: Strict ANSI/ISO.
* vax, GNU C extension keyword: ANSI/ISO.
* verbose compilation, -v option: Verbose compilation.
* verbose help option: Help for command-line options.
* version number of GCC, displaying: Version numbers.
* version option, display version number: Version numbers.
* void return, incorrect use of: Warning options in -Wall.
* W option, enable additional warnings: Additional warning options.
* Wall option, enable common warnings: Compiling a simple C program.
* warning option, -W additional warnings: Additional warning options.
* warning options, -Wall: Compiling a simple C program.
* warning options, additional: Additional warning options.
* warning options, in detail: Warning options in -Wall.
* warning, format with different type arg: Finding errors in a simple program.
* warnings, additional with -W: Additional warning options.
* warnings, and optimization: Optimization and compiler warnings.
* warnings, implicit declaration of function: Using library header files.
* warnings, promoting to errors: Additional warning options.
* Wcast-qual option, warn about casts removing qualifiers: Additional warning options.
* Wcomment option, warn about nested comments: Warning options in -Wall.
* Wconversion option, warn about type conversions: Additional warning options.
* Werror option, convert warnings to errors: Additional warning options.
* Wimplicit option, warn about missing declarations: Warning options in -Wall.
* word-ordering, endianness: Identifying files.
* word-size, determined from executable file: Identifying files.
* word-size, on UltraSPARC: SPARC options.
* Wreturn-type option, warn about incorrect return types: Warning options in -Wall.
* writable string constants, disabling: Additional warning options.
* Wshadow option, warn about shadowed variables: Additional warning options.
* Wtraditional option, warn about traditional C: Additional warning options.
* Wuninitialized option, warn about uninitialized variables: Optimization and compiler warnings.
* Wunused option, unused variable warning: Warning options in -Wall.
* Wwrite-strings option, warning for modified string constants: Additional warning options.
* x86, platform-specific options: Intel and AMD x86 options.
* XL compilers, compatibility on AIX: POWER/PowerPC options.
* XOPEN extensions, GNU C Library: ANSI/ISO.
* zero, division by: DEC Alpha options.
* zero, rounding to by underflow, on DEC Alpha: DEC Alpha options.
Tag Table:
Node: Top77
Node: Introduction887
Node: A brief history of GCC1899
Node: Major features of GCC3669
Ref: Major features of GCC-Footnote-15947
Node: Programming in C and C++6020
Node: Conventions used in this manual7152
Ref: Conventions used in this manual-Footnote-18996
Node: Compiling a C program9053
Node: Compiling a simple C program10023
Node: Finding errors in a simple program12063
Node: Compiling multiple source files14604
Node: Compiling files independently17560
Node: Creating object files from source files19059
Node: Creating executables from object files20395
Node: Link order of object files21859
Node: Recompiling and relinking23249
Ref: Recompiling and relinking-Footnote-125007
Node: Linking with external libraries25143
Ref: Linking with external libraries-Footnote-129337
Node: Link order of libraries29537
Node: Using library header files31218
Ref: Using library header files-Footnote-132953
Node: Compilation options33055
Node: Setting search paths33656
Ref: Setting search paths-Footnote-135529
Node: Search path example35782
Node: Environment variables38781
Node: Extended search paths40750
Ref: Extended search paths-Footnote-142343
Node: Shared libraries and static libraries42482
Ref: Shared libraries and static libraries-Footnote-148562
Node: C language standards48853
Node: ANSI/ISO49701
Ref: ANSI/ISO-Footnote-153370
Node: Strict ANSI/ISO53463
Node: Selecting specific standards54989
Node: Warning options in -Wall55997
Node: Additional warning options59236
Ref: Additional warning options-Footnote-164859
Node: Using the preprocessor65028
Ref: Using the preprocessor-Footnote-165567
Node: Defining macros65703
Node: Macros with values67889
Node: Preprocessing source files71074
Node: Compiling for debugging73726
Node: Examining core files75113
Ref: Examining core files-Footnote-179458
Ref: Examining core files-Footnote-279640
Ref: Examining core files-Footnote-379825
Node: Displaying a backtrace80114
Node: Compiling with optimization81037
Node: Source-level optimization82535
Ref: Source-level optimization-Footnote-187052
Ref: Source-level optimization-Footnote-287289
Node: Speed-space tradeoffs87424
Node: Scheduling90275
Node: Optimization levels91122
Node: Optimization examples94743
Node: Optimization and debugging97913
Node: Optimization and compiler warnings98892
Node: Compiling a C++ program101126
Node: Compiling a simple C++ program101929
Node: Using the C++ standard library104800
Node: Templates105896
Node: Using C++ standard library templates106537
Node: Providing your own templates108184
Node: Explicit template instantiation111464
Node: The export keyword114675
Node: Platform-specific options115348
Node: Intel and AMD x86 options116393
Ref: Intel and AMD x86 options-Footnote-1119453
Ref: Intel and AMD x86 options-Footnote-2119515
Node: DEC Alpha options119627
Node: SPARC options121698
Node: POWER/PowerPC options122567
Node: Multi-architecture support124250
Ref: Multi-architecture support-Footnote-1125674
Node: Troubleshooting125735
Node: Help for command-line options126171
Node: Version numbers126928
Node: Verbose compilation127940
Node: Compiler-related tools129990
Node: Creating a library with the GNU archiver130490
Ref: Creating a library with the GNU archiver-Footnote-1133966
Node: Using the profiler gprof134036
Ref: Using the profiler gprof-Footnote-1138339
Node: Coverage testing with gcov138401
Node: How the compiler works141416
Node: An overview of the compilation process142087
Node: The preprocessor143365
Ref: The preprocessor-Footnote-1144092
Node: The compiler144302
Node: The assembler145509
Node: The linker146237
Node: Examining compiled files147408
Node: Identifying files147791
Ref: Identifying files-Footnote-1149938
Ref: Identifying files-Footnote-2150111
Node: Examining the symbol table150165
Node: Finding dynamically linked libraries151556
Node: Getting help153254
Ref: Getting help-Footnote-1154852
Ref: Getting help-Footnote-2154899
Node: Further reading154952
Ref: Further reading-Footnote-1159404
Ref: Further reading-Footnote-2159436
Node: Acknowledgements159489
Node: Index160384

End Tag Table