ABOUT-NLS

Notes on the Free Translation Project
*************************************

   Free software is going international!  The Free Translation Project
is a way to get maintainers of free software, translators, and users
all together, so that they will gradually become able to speak many
languages.  A few packages already provide translations for their
messages.

   If you found this `ABOUT-NLS' file inside a distribution, you may
assume that the distributed package does use GNU `gettext' internally,
itself available at your nearest GNU archive site.  But you do _not_
need to install GNU `gettext' prior to configuring, installing or using
this package with messages translated.

   Installers will find here some useful hints.  These notes also
explain how users should proceed in order to get the programs to use
the available translations.  They tell how people wanting to contribute
and work on translations should contact the appropriate team.

   When reporting bugs in the `intl/' directory or bugs which may be
related to internationalization, you should report the version of
`gettext' which is used.  The information can be found in the
`intl/VERSION' file, in internationalized packages.

Quick configuration advice
==========================

   If you want to exploit the full power of internationalization, you
should configure it using

     ./configure --with-included-gettext

to force usage of the internationalizing routines provided within this
package, despite the existence of internationalizing capabilities in
the operating system where this package is being installed.  So far,
only the `gettext' implementation in the GNU C library version 2
provides as many features (such as locale alias, message inheritance,
automatic charset conversion or plural form handling) as the
implementation here.  It is also not possible to offer this additional
functionality on top of a `catgets' implementation.  Future versions of
GNU `gettext' will very likely convey even more functionality.  So it
might be a good idea to change to GNU `gettext' as soon as possible.

   So you need _not_ provide this option if you are using GNU libc 2 or
you have installed a recent copy of the GNU gettext package with the
included `libintl'.

INSTALL Matters
===============

   Some packages are "localizable" when properly installed; the
programs they contain can be made to speak your own native language.
Most such packages use GNU `gettext'.  Other packages have their own
approaches to internationalization, predating GNU `gettext'.

   By default, this package will be installed to allow translation of
messages.  It will automatically detect whether the system already
provides the GNU `gettext' functions.  If not, GNU `gettext''s own
library will be used.  This library is wholly contained within this
package, usually in the `intl/' subdirectory, so prior installation of
the GNU `gettext' package is _not_ required.  Installers may use
special options at configuration time for changing the default
behaviour.  The commands:

     ./configure --with-included-gettext
     ./configure --disable-nls

will, respectively, bypass any pre-existing `gettext' in favour of the
internationalizing routines provided within this package, or else
_totally_ disable translation of messages.
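   For example, an installer who wants to be sure the included
`libintl' is used might run something along the following lines (this
is only a sketch; the `--prefix' value is an illustration, so
substitute whatever is appropriate for your site):

     ./configure --with-included-gettext --prefix=/usr/local
     make
     make install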
   When you already have GNU `gettext' installed on your system and run
`configure' without an option for your new package, `configure' will
probably detect the previously built and installed `libintl.a' file and
will decide to use it.  This might not be what you want; you should use
the more recent version of the GNU `gettext' library.  That is, if the
file `intl/VERSION' shows that the library which comes with this
package is more recent, you should use

     ./configure --with-included-gettext

to prevent auto-detection.

   The configuration process will not test for the `catgets' function
and therefore it will not be used.  The reason is that even an
emulation of `gettext' on top of `catgets' could not provide all the
extensions of the GNU `gettext' library.

   Internationalized packages usually have many `po/LL.po' files, where
LL gives an ISO 639 two-letter code identifying the language.  Unless
translations have been forbidden at `configure' time by using the
`--disable-nls' switch, all available translations are installed
together with the package.  However, the environment variable `LINGUAS'
may be set, prior to configuration, to limit the installed set.
`LINGUAS' should then contain a space-separated list of two-letter
codes, stating which languages are allowed.

Using This Package
==================

   As a user, if your language has been installed for this package, you
only have to set the `LANG' environment variable to the appropriate
`LL_CC' combination.  Here `LL' is an ISO 639 two-letter language code,
and `CC' is an ISO 3166 two-letter country code.  For example, let's
suppose that you speak German and live in Germany.  At the shell
prompt, merely execute `setenv LANG de_DE' (in `csh'),
`export LANG; LANG=de_DE' (in `sh') or `export LANG=de_DE' (in `bash').
This can be done from your `.login' or `.profile' file, once and for
all.

   You might think that the country code specification is redundant.
But in fact, some languages have dialects in different countries.  For
example, `de_AT' is used for Austria, and `pt_BR' for Brazil.  The
country code serves to distinguish the dialects.

   The locale naming convention of `LL_CC', with `LL' denoting the
language and `CC' denoting the country, is the one used on systems
based on GNU libc.  On other systems, some variations of this scheme
are used, such as `LL' or `LL_CC.ENCODING'.  You can get the list of
locales supported by your system for your country by running the
command `locale -a | grep '^LL''.

   Not all programs have translations for all languages.  By default,
an English message is shown in place of a nonexistent translation.  If
you understand other languages, you can set up a priority list of
languages.  This is done through a different environment variable,
called `LANGUAGE'.  GNU `gettext' gives preference to `LANGUAGE' over
`LANG' for the purpose of message handling, but you still need to have
`LANG' set to the primary language; this is required by other parts of
the system libraries.  For example, some Swedish users who would rather
read translations in German than in English when Swedish is not
available set `LANGUAGE' to `sv:de' while leaving `LANG' set to
`sv_SE'.

   Special advice for Norwegian users: the language code for Norwegian
bokmål changed from `no' to `nb' recently (in 2003).  During the
transition period, while some message catalogs for this language are
installed under `nb' and some older ones under `no', it's recommended
for Norwegian users to set `LANGUAGE' to `nb:no' so that both newer and
older translations are used.
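   Putting these pieces together (a sketch only, assuming a
Bourne-compatible shell; the Swedish/German choice simply reuses the
example above), the configure-time and run-time settings might look
like this:

     LINGUAS="sv de" ./configure    # install only the Swedish and German catalogs
     export LANG=sv_SE              # primary locale, needed by the system libraries
     export LANGUAGE=sv:de          # prefer Swedish, fall back to German, then English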
   In the `LANGUAGE' environment variable, but not in the `LANG'
environment variable, `LL_CC' combinations can be abbreviated as `LL'
to denote the language's main dialect.  For example, `de' is equivalent
to `de_DE' (German as spoken in Germany), and `pt' to `pt_PT'
(Portuguese as spoken in Portugal) in this context.

Translating Teams
=================

   For the Free Translation Project to be a success, we need interested
people who like their own language and write it well, and who are also
able to synergize with other translators speaking the same language.
Each translation team has its own mailing list.  The up-to-date list of
teams can be found at the Free Translation Project's homepage,
`http://www.iro.umontreal.ca/contrib/po/HTML/', in the "National teams"
area.

   If you'd like to volunteer to _work_ at translating messages, you
should become a member of the translating team for your own language.
The subscribing address is _not_ the same as the list itself; it has
`-request' appended.  For example, speakers of Swedish can send a
message to `sv-request@li.org', with this message body:

     subscribe

   Keep in mind that team members are expected to participate
_actively_ in translations, and in solving translation difficulties,
rather than merely lurking around.  If your team does not exist yet and
you want to start one, or if you are unsure about what to do or how to
get started, please write to `translation@iro.umontreal.ca' to reach
the coordinator for all translator teams.

   The English team is special.  It works at improving and unifying the
terminology in use.  Proven linguistic skills are valued more highly
than programming skills here.

Available Packages
==================

   Languages are not equally supported in all packages.  The following
matrix shows the current state of internationalization, as of January
2004.  The matrix shows, for each package, the languages for which PO
files have been submitted to translation coordination, with a
translation percentage of at least 50%.
     [Package-by-language matrix: roughly 130 textual domains (a2ps,
     bash, bison, coreutils, gettext, make, tar, ..., xpad) in rows and
     63 language codes in columns, each entry marking a language for
     which a PO file has been submitted for that package, together with
     per-package and per-language totals.  The column alignment of the
     matrix does not survive in this copy; only the per-language totals
     of ready PO files are reproduced below.]

     af am ar az be bg bs ca cs da de el en en_GB eo es
      4  0  0  1  9  4  1 40 41 60 78 17  1     5 13 68

     et eu fa fi  fr ga gl he hr hu id is it ja ko lg
     22  2  1 26 106 28 24  8 10 41 33  1 26 33 12  0

     lt lv mk mn ms mt nb nl nn no nso pl pt pt_BR ro ru
      1  2  0  3 12  0 10 69  6  7   1 40 26    36 76 63

     sk sl sr sv ta th tr uk ven vi wa xh zh_CN zh_TW zu
     47 19 28 83  0  0 59 13   1  1 11  0    22    22  0

   In total: 63 teams, 131 textual domains, 1373 ready PO files.
   Some counters in the matrix are higher than the number of submitted
PO files would suggest.  This is because a few extra PO files are used
for implementing regional variants of languages, or language dialects.

   For a PO file in the matrix above to be effective, the package to
which it applies should also have been internationalized and
distributed as such by its maintainer.  There might be an observable
lag between the mere existence of a PO file and its wide availability
in a distribution.

   If January 2004 seems old to you, you may fetch a more recent copy
of this `ABOUT-NLS' file from most GNU archive sites.  The most
up-to-date matrix with full percentage details can be found at
`http://www.iro.umontreal.ca/contrib/po/HTML/matrix.html'.

Using `gettext' in new packages
===============================

   If you are writing a freely available program and want to
internationalize it, you are welcome to use GNU `gettext' in your
package.  Of course you have to respect the GNU Library General Public
License which covers the use of the GNU `gettext' library.  This means
in particular that even non-free programs can use `libintl' as a shared
library, whereas only free software can use `libintl' as a static
library or use modified versions of `libintl'.

   Once the sources are changed appropriately and the setup can handle
the use of `gettext', the only things missing are the translations.
The Free Translation Project is also available for packages which are
not developed inside the GNU project.  Therefore the information given
above also applies to every other free software project.  Contact
`translation@iro.umontreal.ca' to make the `.pot' files available to
the translation teams.

AUTHORS

-----------------------------------
GNU make development up to version 3.75 by:
    Roland McGrath

Development starting with GNU make 3.76 by:
    Paul D. Smith

Additional development starting with GNU make 3.81 by:
    Boris Kolpackov


GNU Make User's Manual
Written by:
    Richard M. Stallman

Edited by:
    Roland McGrath
    Bob Chassell
    Melissa Weisshaus
    Paul D. Smith

-----------------------------------
GNU make porting efforts:

  Port to VMS by:
    Klaus Kaempf
    Hartmut Becker
    Archive support/Bug fixes by:
      John W. Eaton
      Martin Zinser

  Port to Amiga by:
    Aaron Digulla

  Port to MS-DOS (DJGPP), OS/2, and MS-Windows (native/MinGW) by:
    DJ Delorie
    Rob Tulloh
    Eli Zaretskii
    Jonathan Grant
    Andreas Beuning
    Earnie Boyd

-----------------------------------
Other contributors:

    Janet Carson
    Howard Chu
    Paul Eggert
    Klaus Heinz
    Michael Joosten
    Jim Kelton
    David Lubbren
    Tim Magill
    Markus Mauhart
    Greg McGary
    Thomas Riedl
    Han-Wen Nienhuys
    Andreas Schwab
    Carl Staelin (Princeton University)
    Ian Stewartson (Data Logic Limited)

With suggestions/comments/bug reports from a cast of ... well ...
hundreds, anyway :)

-------------------------------------------------------------------------------
Copyright (C) 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006
Free Software Foundation, Inc.
This file is part of GNU Make.
GNU Make is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version. GNU Make is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with GNU Make; see the file COPYING. If not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. make-doc-non-dfsg-3.81.orig/COPYING0000644000175000017500000004317310416557457017130 0ustar srivastasrivasta GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc. 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. 
GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. 
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. 
Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. 
For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS Appendix: How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) 19yy This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) 19yy name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. 
You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License. make-doc-non-dfsg-3.81.orig/ChangeLog0000644000175000017500000031762610416557457017656 0ustar srivastasrivasta2006-04-01 Paul D. Smith Version 3.81 released. * NEWS: Updated for 3.81. * README.cvs: Mention that vpath builds are not supported out of CVS. Fixes Savannah bug #16236. Remove update of make.texi from the list of things to do; we use version.texi now. 2006-03-26 Paul D. Smith * doc/make.texi: Clean up licensing. Use @copying and version.texi support from automake, as described in the Texinfo manual. 2006-03-25 Eli Zaretskii * implicit.c (pattern_search) [HAVE_DOS_PATHS]: Don't compare b with lastslash, since the latter points to filename, not to target. * job.c (construct_command_argv_internal) [HAVE_DOS_PATHS]: Declare and define sh_chars_sh[]. 2006-03-23 Paul D. Smith * configure.in: Look for build.sh.in in $srcdir so it will be built for remote configurations as well. * Makefile.am: Make sure to clean up build.sh during distclean. Fixes Savannah bug #16166. * misc.c (log_access): Takes a const char *. * function.c (fold_newlines): Takes an unsigned int *. Both fixes for Savannah bug #16170. 2006-03-22 Boris Kolpackov * implicit.c (pattern_search): Call set_file_variables only if we have prerequisites that need second expansion. Fixes Savannah bug #16140. 2006-03-19 Paul D. Smith * remake.c (update_file): Add alloca(0) to clean up alloca'd memory on hosts that don't support it directly. * README.cvs: Add information on steps for making a release (to make sure I don't forget any). * main.c (clean_jobserver): Move jobserver cleanup code into a new function. (die): Cleanup code was removed from here; call the new function. (main): If we are re-execing, clean up the jobserver first so we don't leak file descriptors. Fix bug reported by Craig Fithian . 2006-03-17 Paul D. Smith * maintMakefile (do-po-update): Rewrite this rule to clean up and allow multiple concurrent runs. Patch from Joseph Myers 2006-03-17 Boris Kolpackov * dep.h (struct dep): Add the stem field. * misc.c (alloc_dep, free_dep): New functions. (copy_dep_chain): Copy stem. (free_dep_chain): Use free_dep. * read.c (record_files): Store stem in the dependency line. * file.c (expand_deps): Use stem stored in the dependency line. Use free_dep_chain instead of free_ns_chain. * implicit.c (pattern_search): Use alloc_dep and free_dep. * read.c (read_all_makefiles, eval_makefile, eval): Ditto. * main.c (main, handle_non_switch_argument): Ditto. * remake.c (check_dep): Ditto. * rule.c (convert_suffix_rule, freerule): Ditto. 2006-03-14 Paul D. Smith * expand.c (variable_append): Instead of appending everything then expanding the result, we expand (or not, if it's simple) each part as we add it. (allocated_variable_append): Don't expand the final result. Fixes Savannah bug #15913. 
2006-03-09 Paul Smith * remake.c (update_file_1): Revert the change of 3 Jan 2006 which listed non-existent files as changed. Turns out there's a bug in the Linux kernel builds which means that this change causes everything to rebuild every time. We will re-introduce this fix in the next release, to give them time to fix their build system. Fixes Savannah bug #16002. Introduces Savannah bug #16051. * implicit.c (pattern_search) [DOS_PATHS]: Look for DOS paths if we *don't* find UNIX "/". Reported by David Ergo 2006-03-04 Eli Zaretskii * variable.c (do_variable_definition) [WINDOWS32]: Call the shell locator function find_and_set_default_shell if SHELL came from the command line. 2006-02-20 Paul D. Smith * variable.c (merge_variable_set_lists): It's legal for *setlist0 to be null; don't core in that case. 2006-02-19 Paul D. Smith * commands.c (set_file_variables): Realloc, not malloc, the static string values to avoid memory leaks. * expand.c (recursively_expand_for_file): Only set reading_file to an initialized value. * implicit.c (pattern_search): We need to make a copy of the stem if we get it from an intermediate dep, since those get freed. * file.c (lookup_file) [VMS]: Don't lowercase special targets that begin with ".". (enter_file) [VMS]: Ditto. Patch provided by Hartmut Becker . 2006-02-24 Eli Zaretskii * job.c (construct_command_argv_internal): Fix last change. * w32/subproc/sub_proc.c (process_pipe_io): Make dwStdin, dwStdout, and dwStderr unsigned int: avoids compiler warnings in the calls to _beginthreadex. * expand.c (recursively_expand_for_file): Initialize `save' to prevent compiler warnings. 2006-02-18 Eli Zaretskii * job.c (construct_command_argv_internal): Don't create a temporary script/batch file if we are under -n. Call _setmode to switch the script file stream to text mode. 2006-02-17 Paul D. Smith * variable.c (merge_variable_set_lists): Don't try to merge the global_setlist. Not only is this useless, but it can lead to circularities in the linked list, if global_setlist->next in one list gets set to point to another list which also ends in global_setlist. Fixes Savannah bug #15757. 2006-02-15 Paul D. Smith Fix for Savannah bug #106. * expand.c (expanding_var): Keep track of which variable we're expanding. If no variable is being expanded, it's the same as reading_file. * make.h (expanding_var): Declare it. * expand.c (recursively_expand_for_file): Set expanding_var to the current variable we're expanding, unless there's no file info in it (could happen if it comes from the command line or a default variable). Restore it before we exit. * expand.c (variable_expand_string): Use the expanding_var file info instead of the reading_file info. * function.c (check_numeric): Ditto. (func_word): Ditto. (func_wordlist): Ditto. (func_error): Ditto. (expand_builtin_function): Ditto. (handle_function): Ditto. 2006-02-14 Paul D. Smith * read.c (eval): Even if the included filenames expands to the empty string we still need to free the allocated buffer. * implicit.c (pattern_search): If we allocated a variable set for an impossible file, free it. * variable.c (free_variable_set): New function. * variable.h: Declare it. * read.c (read_all_makefiles): Makefile names are kept in the strcache, so there's never any need to alloc/free them. (eval): Ditto. * main.c (main): Add "archives" to the .FEATURES variable if archive support is enabled. * doc/make.texi (Special Variables): Document it. 2006-02-13 Paul D. 
Smith * implicit.c (pattern_search): Add checking for DOS pathnames to the pattern rule target LASTSLASH manipulation. Fixes Savannah bug #11183. 2006-02-11 Paul D. Smith * (ALL FILES): Updated copyright and license notices. 2006-02-10 Paul D. Smith A new internal capability: the string cache is a read-only cache of strings, with a hash table interface for fast lookup. Nothing in the cache will ever be freed, so there's no need for reference counting, etc. This is the beginning of a full solution for Savannah bug #15182, but for now we only store makefile names here. * strcache.c: New file. Implement a read-only string cache. * make.h: Add prototypes for new functions. * main.c (initialize_global_hash_tables): Initialize the string cache. (print_data_base): Print string cache stats. * read.c (eval_makefile): Use the string cache to store makefile names. Rewrite the string allocation to be sure we free everything. 2006-02-10 Eli Zaretskii * dir.c (dir_contents_file_exists_p): Don't opendir if the directory time stamp didn't change, except on FAT filesystems. Suggested by J. David Bryan . 2006-02-09 Paul D. Smith * function.c (func_or): Implement a short-circuiting OR function. (func_and): Implement a short-circuiting AND function. (function_table_init): Update the table with the new functions. * doc/make.texi (Conditional Functions): Changed the "if" section to one on general conditional functions. Added documentation for $(and ...) and $(or ...) functions. * NEWS: Note new $(and ...) and $(or ...) functions. 2006-02-08 Boris Kolpackov * job.h (struct child): Add the dontcare bitfield. * job.c (new_job): Cache dontcare flag. * job.c (reap_children): Use cached dontcare flag instead of the one in struct file. Fixes Savannah bug #15641. 2006-02-06 Paul D. Smith * vpath.c (selective_vpath_search): If the file we find has a timestamp from -o or -W, use that instead of the real time. * remake.c (f_mtime): If the mtime is a special token from -o or -W, don't overwrite it with the real mtime. Fixes Savannah bug #15341. Updates from Markus Mauhart : * w32/subproc/sub_proc.c (process_begin): Remove no-op tests. (process_signal, process_last_err, process_exit_code): Manage invalid handle values. (process_{outbuf,errbuf,outcnt,errcnt,pipes}): Unused and don't manage invalid handles; remove them. * job.c (start_job_command) [WINDOWS32]: Jump out on error. * config.h.W32.template [WINDOWS32]: Set flags for Windows builds. * README.cvs: Updates for building from CVS. 2006-02-05 Paul D. Smith * file.c (enter_file): Keep track of the last double_colon entry, to avoid walking the list every time we want to add a new one. Fixes Savannah bug #15533. * filedef.h (struct file): Add a new LAST pointer. * dir.c (directory_contents_hash_cmp): Don't use subtraction to do the comparison. For 64-bits systems the result of the subtraction might not fit into an int. Use comparison instead. Fixes Savannah bug #15534. * doc/make.texi: Update the chapter on writing commands to reflect the changes made in 3.81 for backslash/newline and SHELL handling. 2006-02-01 Paul D. Smith * dir.c (dir_contents_file_exists_p) [WINDOWS32]: Make sure variable st is not used when it's not initialized. Patch from Eli Zaretskii . 2006-01-31 Paul D. Smith * README.W32.template: Applied patch #4785 from Markus Mauhart . * README.cvs: Applied patch #4786 from Markus Mauhart . * make_msvc_net2003.vcproj [WINDOWS32]: New version from J. Grant . * main.c: Update the copyright year in the version output. 
* prepare_w32.bat: Remove this file from the distribution. 2006-01-21 Eli Zaretskii * remake.c (update_goal_chain): Set g->changed instead of incrementing it, as it is only 8-bit wide, and could overflow if many commands got started in update_file. * w32/include/sub_proc.h: Add a prototype for process_used_slots. * w32/subproc/sub_proc.c: Change dimension of proc_array[] to MAXIMUM_WAIT_OBJECTS. (process_wait_for_any_private): Change dimension of handles[] array to MAXIMUM_WAIT_OBJECTS. (process_used_slots): New function. (process_register): Don't register more processes than the available number of slots. (process_easy): Don't start new processes if all slots are used up. * job.c (load_too_high, start_waiting_jobs) [WINDOWS32]: If there are already more children than sub_proc.c can handle, behave as if the load were too high. (start_job_command): Fix a typo in error message when process_easy fails. 2006-01-14 Eli Zaretskii * main.c (main) [WINDOWS32]: Don't refuse to run with -jN, even if the shell is not sh.exe. * job.c (create_batch_file): Renamed from create_batch_filename; all callers changed. Don't close the temporary file; return its file descriptor instead. New arg FD allows to return the file descriptor. (construct_command_argv_internal): Use _fdopen instead of fopen to open the batch file. 2006-01-04 Paul D. Smith * readme.vms: Updates for case-insensitive VMS file systems from Hartmut Becker . * dir.c (vms_hash): Ditto. * vmsify.c (copyto): Ditto. * vmsfunctions.c (readdir): Ditto. * make.1: Add a section on the exit codes for make. * doc/make.texi: A number of minor updates to the documentation. 2006-01-03 Paul D. Smith * remake.c (update_file_1): Mark a prerequisite changed if it doesn't exist. * read.c (eval): Be sure to strip off trailing whitespace from the prerequisites list properly. Also, initialize all fields in struct dep when creating a new one. 2005-12-28 Paul D. Smith * config.h.W32.template [WINDOWS32]: Add in some pragmas to disable warnings for MSC. Patch by Rob Tulloh . 2005-12-17 Eli Zaretskii * doc/make.texi (Execution): Add a footnote about changes in handling of backslash-newline sequences. Mention the differences on MS-DOS and MS-Windows. * NEWS: More details about building the MinGW port and a pointer to README.W32. Fix the section name that describes the new backward-incompatible processing of backslash-newline sequences. The special processing of SHELL set to "cmd" is only relevant to MS-Windows, not MS-DOS. 2005-12-17 Eli Zaretskii * main.c (handle_runtime_exceptions): Cast exrec->ExceptionAddress to DWORD, to avoid compiler warnings. * job.c (exec_command): Cast hWaitPID and hPID to DWORD, and use %ld in format, to avoid compiler warnings. * doc/make.texi (Special Targets): Fix a typo. (Appending): Fix cross-reference to Setting. (Special Variables, Secondary Expansion, File Name Functions) (Flavor Function, Pattern Match, Quick Reference): Ensure two periods after a sentence. (Execution): Add @: after "e.g.". (Environment): Fix punctuation. (Target-specific, Call Function, Quick Reference): Add @: after "etc." (Shell Function, Target-specific): Add @: after "vs." 2005-12-14 Boris Kolpackov * read.c (record_target_var): Initialize variable's export field with v_default instead of leaving it "initialized" by whatever garbage happened to be on the heap. 2005-12-12 Paul D. Smith * make.1: Fix some display errors and document all existing options. Patch provided by Mike Frysinger . 2005-12-11 Paul D. 
Smith * implicit.c (pattern_search): If 2nd expansion is not set for this implicit rule, replace the pattern with the stem directly, and don't re-expand the variable list. Along with the other .SECONDEXPANSION changes below, fixes bug #13781. 2005-12-09 Boris Kolpackov * implicit.c (pattern_search): Mark other files that this rule builds as targets so that they are not treated as intermediates by the pattern rule search algorithm. Fixes bug #13022. 2005-12-07 Boris Kolpackov * remake.c (notice_finished_file): Propagate the change of modification time to all the double-colon entries only if it is the last one to be updated. Fixes bug #14334. 2005-11-17 Boris Kolpackov * function.c (func_flavor): Implement the flavor function which returns the flavor of a variable. * doc/make.texi (Functions for Transforming Text): Document it. * NEWS: Add it to the list of new functions. 2005-11-14 Boris Kolpackov * read.c (construct_include_path): Set the .INCLUDE_DIRS special variable. * doc/make.texi (Special Variables): Document .INCLUDE_DIRS. * NEWS: Add .INCLUDE_DIRS to the list of new special variables. 2005-10-26 Paul Smith * read.c (record_files): Don't set deps flags if there are no deps. * maintMakefile: We only need to build the templates when we are creating a distribution, so don't do it for "all". 2005-10-24 Paul D. Smith Make secondary expansion optional: its enabled by declaring the special target .SECONDEXPANSION. * NEWS: Update information on second expansion capabilities. * doc/make.texi (Secondary Expansion): Document the .SECONDEXPANSION special target and its behavior. * dep.h (struct dep): Add a flag STATICPATTERN, set to true if the prerequisite list was found in a static pattern rule. (free_dep_chain): Declare a prototype. * file.c (parse_prereqs): New function: break out some complexity from expand_deps(). (expand_deps): If we aren't doing second expansion, replace % with the stem for static pattern rules. Call the new function. * filedef.h (parse_prereqs): Declare a prototype. * implicit.c (pattern_search): Initialize the new staticpattern field. * main.c (second_expansion): Declare a global variable to remember if the special target has been seen. Initialize the new staticpattern field for prerequisites. * make.h: Extern for second_expansion. * misc.c (free_dep_chain): New function: frees a struct dep list. * read.c (read_all_makefiles): Initialize the staticpattern field. (eval_makefile): Ditto. (record_files): Check for the .SECONDEXPANSION target and set second_expansion global if it's found. Use the new free_dep_chain() instead of doing it by hand. Set the staticpattern field for prereqs of static pattern targets. 2005-10-16 Paul D. Smith * main.c (main): Set CURDIR to be a file variable instead of a default, so that values of CURDIR inherited from the environment won't override the make value. 2005-09-26 Paul D. Smith * job.c (construct_command_argv_internal): If the line is empty remember to free the temporary argv strings. Fixes bug # 14527. 2005-09-16 Paul D. Smith * job.c (start_job_command): The noerror flag is a boolean (single bit); set it appropriately. Reported by Mark Eichin 2005-08-29 Paul D. Smith * function.c (func_error): On Windows, output from $(info ...) seems to come in the wrong order. Try to force it with fflush(). 2005-08-10 Boris Kolpackov * read.c (record_files): Move code that sets stem for static pattern rules out of the if (!two_colon) condition so it is also executed for two-colon rules. Fixes Savannah bug #13881. 2005-08-08 Paul D. 
Smith * make.h: Don't test that __STDC__ is non-0. Some compilers (Windows for example) set it to 0 to denote "ISO C + extensions". Fixes bug # 13594. 2005-08-07 Paul D. Smith * w32/pathstuff.c (getcwd_fs): Fix warning about assignment in a conditional (slightly different version of a fix from Eli). Fix a bug reported by Michael Matz : patch included. If make is running in parallel without -k and two jobs die in a row, but not too close to each other, then make will quit without waiting for the rest of the jobs to die. * main.c (die): Don't reset err before calling reap_children() the second time: we still want it to be in the error condition. * job.c (reap_children): Use a static variable, rather than err, to control whether or not the error message should be printed. 2005-08-06 Eli Zaretskii * w32/subproc/sub_proc.c: Include signal.h. (process_pipe_io, process_file_io): Pass a pointer to a local DWORD variable to GetExitCodeProcess. If the exit code is CONTROL_C_EXIT, put SIGINT into pproc->signal. * job.c [WINDOWS32]: Include windows.h. (main_thread) [WINDOWS32]: New global variable. (reap_children) [WINDOWS32]: Get the handle for the main thread and store it in main_thread. * commands.c [WINDOWS32]: Include windows.h and w32err.h. (fatal_error_signal) [WINDOWS32]: Suspend the main thread before doing anything else. When we are done, close the main thread handle and exit with status 130. 2005-07-30 Eli Zaretskii * w32/subproc/sub_proc.c (process_begin): Don't pass a NULL pointer to fprintf. * main.c (find_and_set_default_shell): If found a DOSish shell, set sh_found and the value of default_shell, and report the findings in debug mode. * job.c (construct_command_argv_internal): Check unixy_shell, not no_default_sh_exe, to decide whether to use Unixy or DOSish builtin commands. * README.W32: Update with info about the MinGW build. * build_w32.bat: Support MinGW. * w32/subproc/build.bat: Likewise. * w32/subproc/sub_proc.c (process_easy): Fix format strings for printing DWORD args. * function.c (windows32_openpipe): Fix format strings for printing DWORD args. * job.c (reap_children) [WINDOWS32]: Don't declare 'status' and 'reap_mode'. (start_job_command): Fix format string for printing the result of process_easy. (start_job_command) [WINDOWS32]: Do not define. (exec_command): Fix format string for printing HANDLE args. * main.c (handle_runtime_exceptions): Fix sprintf format strings to avoid compiler warnings. (open_tmpfile): Declare fd only if HAVE_FDOPEN is defined. (Note: some of these fixes were submitted independently by J. Grant) 2005-07-30 J. Grant * prepare_w32.bat: Copy config.h.w32 to config.h if not exist. * make_msvc_net2003.vcproj, make_msvc_net2003.sln: MSVC Project files. * Makefile.am (EXTRA_DIST): Add MSVC Project files. 2005-07-15 Paul Smith * job.c (construct_command_argv_internal) [DOS,WINDOWS32,OS/2]: If we don't have a POSIX shell, then revert to the old backslash-newline behavior (where they are stripped). Fixes bug #13665. 2005-07-08 Paul D. Smith * config.h.W32.template: Reorder to match the standard config.h, for easier comparisons. From J. Grant * maintMakefile: Remove .dep_segment before overwriting it, in case it's not writable or noclobber is set. * expand.c (variable_expand_string): Cast result of pointer arithmetic to avoid a warning. * main.c (switches): Add full-fledged final initializer. 2005-07-06 Paul D. Smith * configure.in: IRIX has _sys_siglist. Tru64 UNIX has __sys_siglist. 
* signame.c (strsignal): If we found _sys_siglist[] or __sys_siglist[] use those instead of sys_siglist[]. From Albert Chin 2005-07-04 Paul D. Smith * config.h-vms.template [VMS]: Latest VMS has its own glob() and globfree(); set up to use the GNU versions. From Martin Zinser 2005-07-03 Paul D. Smith From J. Grant : * README.W32.template: Update the Windows and tested MSVC versions. * NMakefile.template (CFLAGS_any): Change warning level from W3 to W4. * w32/subproc/NMakefile (CFLAGS_any): Ditto. * build_w32.bat: Ditto. * w32/subproc/build.bat: Ditto. 2005-06-28 Paul D. Smith * signame.c: HAVE_DECL_* macros are set to 0, not undef, if the declaration was checked but not present. 2005-06-27 Paul D. Smith * dir.c (find_directory): Change type of fs_serno/fs_flags/fs_len to unsigned long. Fixes Savannah bug #13550. * w32/subproc/sub_proc.c: Remove (HANDLE) casts on lvalues. (process_pipe_io): Initialize tStdin/tStdout/tStderr variables. Fixes Savannah bug #13551. 2005-06-26 Paul D. Smith * make.h: Fix bug in ANSI_STRING/strerror() handling; only define it if ANSI_STRING is not set. 2005-06-25 Paul D. Smith * read.c (eval): If no filenames are passed to any of the "include" variants, don't print an error. * doc/make.texi (Include): Document this. Fixes Savannah bug #1761. * job.c (construct_command_argv_internal): Sanitize handling of backslash/newline pairs according to POSIX: that is, keep the backslash-newline in the command script, but remove a following TAB character, if present. In the fast path, make sure that the behavior matches what the shell would do both inside and outside of quotes. In the slow path, quote the backslash and put a literal newline in the string. Fixes Savannah bug #1332. * doc/make.texi (Execution): Document the new behavior and give some examples. * NEWS: Make a note of the new behavior. * make.h [WINDOWS32]: #include . Fixes Savannah bug #13478. * remake.c (name_mtime): If the stat() of a file fails and the -L option was given and the file is a symlink, take the best mtime of the symlink we can get as the mtime of the file and don't fail. Fixes Savannah bug #13280. * read.c (find_char_unquote): Accept a new argument IGNOREVARS. If it's set, then don't stop on STOPCHARs or BLANKs if they're inside a variable reference. Make this function static as it's only used here. (eval): Call find_char_unquote() with IGNOREVARS set when we're parsing an unexpanded line looking for semicolons. Fixes Savannah bug #1454. * misc.c (remove_comments): Move this to read.c and make it static as it's only used there. Call find_char_unquote() with new arg. * make.h: Remove prototypes for find_char_unquote() and remove_comments() since they're static now. * main.c (main): If we see MAKE_RESTARTS in the environment, unset its export flag and obtain its value. When we need to re-exec, increment the value and add it into the environment. * doc/make.texi (Special Variables): Document MAKE_RESTARTS. * NEWS: Mention MAKE_RESTARTS. * main.c (always_make_set): New variable. Change the -B option to set this one instead. (main): When checking makefiles, only set always_make_flag if always_make_set is set AND the restarts flag is 0. When building normal targets, set it IFF always_make_set is set. (main): Avoid infinite recursion with -W, too: only set what-if files to NEW before we check makefiles if we've never restarted before. If we have restarted, set what-if files to NEW _after_ we check makefiles. Fixes Savannah bug #7566: 2005-06-17 Paul D. 
Smith * default.c: Change VMS implicit rules to use $$$$ instead of $$ in the prerequisites list. 2005-06-12 Paul D. Smith Fix Savannah bug # 1328. * configure.in: Check for atexit(). * misc.c (close_stdout): Test stdout to see if writes to it have failed. If so, be sure to exit with a non-0 error code. Based on code found in gnulib. * make.h: Prototype. * main.c (main): Install close_stdout() with atexit(). 2005-06-10 Paul D. Smith VMS build updates from Hartmut Becker : * vmsjobs.c [VMS]: Updates to compile on VMS: add some missing headers; make vmsWaitForChildren() static; extern vmsify(). * job.c [VMS]: Move vmsWaitForChildren() prototype to be global. Don't create child_execute_job() here (it's in vmsjobs.c). * makefile.vms (job.obj) [VMS]: Add vmsjobs.c as a prerequisite. 2005-06-09 Paul D. Smith * variable.c (push_new_variable_scope): File variables point directly to the global_setlist variable. So, inserting a new scope in front of that has no effect on those variables: they don't go through current_variable_set_list. If we're pushing a scope and the current scope is global, push it "the other way" so that the new setlist is in the global_setlist variable, and next points to a new setlist with the global variable set. (pop_variable_scope): Properly undo a push with the new semantics. Fixes Savannah bug #11913. 2005-05-31 Boris Kolpackov * job.c (reap_children): Don't die if the command failed but the dontcare flag is set. Fixes Savannah bug #13216. * implicit.c (pattern_search): When creating a target from an implicit rule match, lookup pattern target and set precious flag in a newly created target. Fixes Savannah bug #13218. 2005-05-13 Paul D. Smith Implement "if... else if... endif" syntax. * read.c (eval): Push all checks for conditional words ("ifeq", "else", etc.) down into the conditional_line() function. (conditional_line): Rework to allow "else if..." clause. New return value -2 for lines which are not conditionals. The ignoring flag can now also be 2, which means "already parsed a true branch". If that value is seen no other branch of this conditional can be considered true. In the else parsing if there is extra text after the else, invoke conditional_line() recursively to see if it's another conditional. If not, it's an error. If so, raise the conditional value to this level instead of creating a new conditional nesting level. Special check for "else" and "endif", which aren't allowed on the "else" line. * doc/make.texi (Conditional Syntax): Document the new syntax. 2005-05-09 Paul D. Smith * Makefile.am (EXTRA_make_SOURCES): Add vmsjobs.c (MAYBE_W32): Rework how SUBDIRS are handled so that "make dist" recurses to the w32 directory, even on non-Windows systems. Use the method suggested in the automake manual. * configure.in: Add w32/Makefile to AC_CONFIG_FILES. * maintMakefile (gnulib-url): They moved the texinfo.tex files. 2005-05-07 Paul D. Smith * main.c (die): If we're dying with a fatal error (not that a command has failed), write back any leftover tokens before we go. * job.c (set_child_handler_action_flags): If there are jobs waiting for the load to go down, set an alarm to go off in 1 second. This allows us to wake up from a potentially long-lasting read() and start a new job if the load has gone down. Turn it off after the read. (job_noop): Dummy signal handler function. (new_job): Invoke it with the new semantics. * doc/make.texi: Document secondary expansion. Various cleanups and random work. 2005-05-03 Paul D.
Smith Rename .DEFAULT_TARGET to .DEFAULT_GOAL: in GNU make terminology the targets which are to ultimately be made are called "goals"; see the GNU make manual. Also, MAKECMDGOALS, etc. * filedef.h, read.c, main.c: Change .DEFAULT_TARGET to .DEFAULT_GOAL, and default_target_name to default_goal_name. * doc/make.texi (Special Variables): Document .DEFAULT_GOAL. 2005-05-02 Paul D. Smith * job.c, vmsjobs.c (vmsWaitForChildren, vms_redirect, vms_handle_apos, vmsHandleChildTerm, reEnableAst, astHandler, tryToSetupYAst, child_execute_job) [VMS]: Move VMS-specific functions to vmsjobs.c. #include it into jobs.c. Grant Taylor reports that -j# can lose jobserver tokens. I found that this happens when an exported recursive variable contains a $(shell ...) function reference: in this situation we could "forget" to write back a token. * job.c, job.h: Add variable jobserver_tokens: counts the tokens we have. It's not reliable to depend on the number of children in our linked list so keep a separate count. (new_job): Check jobserver_tokens rather than children && waiting_jobs. Increment jobserver_tokens when we get one. (free_child): If jobserver_tokens is 0, internal error. If it's >1, write a token back to the jobserver pipe (we don't write a token for the "free" job). Decrement jobserver_tokens. * main.c: Add variable master_job_slots. (main): Set it to hold the number of jobs requested if we're the master process, when using the jobserver. (die): Sanity checks: first test jobserver_tokens to make sure this process isn't holding any tokens we didn't write back. Second, if master_job_slots is set count the tokens left in the jobserver pipe and ensure it's the same as master_job_slots (- 1). 2005-04-24 Paul D. Smith Grant Taylor reports that -j# in conjunction with -l# can lose jobserver tokens, because waiting jobs are not consulted properly when checking for the "free" token. * job.c (free_child): Count waiting_jobs as having tokens. * job.c (new_job): Ditto. Plus, call start_waiting_jobs() here to handle jobs waiting for the load to drop. 2005-04-23 Paul D. Smith * main.c (main): Be careful to not core if a variable setting in the environment doesn't contain an '='. This is illegal but can happen in broken setups. Reported by Joerg Schilling . 2005-04-12 Paul D. Smith The second expansion feature causes significant slowdown. Timing a complex makefile (GCC 4.1) shows a slowdown from .25s to just read the makefile before the feature, to 11+s to do the same operations after the feature. Additionally, memory usage increased drastically. To fix this I added some intelligence that avoids the overhead of the second expansion unless it's required. * dep.h: Add a new boolean field, need_2nd_expansion. * read.c (eval): When creating the struct dep for the target, check if the name contains a "$"; if so set need_2nd_expansion to 1. (record_files): If there's a "%" in a static pattern rule, it gets converted to "$*" so set need_2nd_expansion to 1. * file.c (expand_deps): Rework to be more efficient. Only perform initialize_file_variables(), set_file_variables(), and variable_expand_for_file() if the need_2nd_expansion is set. * implicit.c (pattern_search): Default need_2nd_expansion to 0. (pattern_search): Ditto. * main.c (handle_non_switch_argument): Ditto. (main): Ditto. * read.c (read_all_makefiles): Ditto. (eval_makefile): Ditto. 2005-04-07 Paul D. Smith * main.c (main) [WINDOWS32]: Export PATH to sub-shells, not Path. * variable.c (sync_Path_environment): Ditto. Patch by Alessandro Vesely. 
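For illustration, a minimal sketch of the "else if" conditional syntax added in the 2005-05-13 entry above; the compiler names and flags here are hypothetical:

    # "else" may now be followed directly by another conditional.
    ifeq ($(CC),gcc)
      CFLAGS += -Wall
    else ifeq ($(CC),icc)
      CFLAGS += -w2
    else
      CFLAGS += -O
    endif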
Fixes Savannah bug #12209. * main.c (main): Define the .FEATURES variable. * NEWS: Announce .FEATURES. * doc/make.texi (Special Variables): Document .FEATURES. * remake.c (check_dep): If a file is .PHONY, update it even if it's marked intermediate. Fixes Savannah bug #12331. 2005-03-15 Boris Kolpackov * file.c (expand_deps): Factor out the second expansion and prerequisite line parsing logic from snap_deps(). * file.c (snap_deps): Use expand_deps(). Expand and parse prerequisites of the .SUFFIXES special target first. Fixes Savannah bug #12320. 2005-03-13 Paul D. Smith * main.c (main) [MSDOS]: Export SHELL in MSDOS. Requested by Eli Zaretskii. 2005-03-11 Paul D. Smith * signame.c (strsignal): HAVE_DECL_SYS_SIGLIST is 0 when not available, not undefined (from Earnie Boyd). 2005-03-10 Boris Kolpackov * implicit.c (pattern_search): Mark an intermediate target as precious if it happened to be a prerequisite of some (other) target. Fixes Savannah bug #12267. 2005-03-09 Paul D. Smith * read.c (eval_makefile): Add alloca(0). (eval_buffer): Ditto. 2005-03-09 Boris Kolpackov * main.c (main): Use o_file instead of o_default when defining the .DEFAULT_TARGET special variable. * read.c (eval): Use define_variable_global() instead of define_variable() when setting new value for the .DEFAULT_TARGET special variable. Fixes Savannah bug #12266. 2005-03-04 Boris Kolpackov * imlicit.c (pattern_search): Mark files for which an implicit rule has been found as targets. Fixes Savannah bug #12202. 2005-03-04 Paul D. Smith * AUTHORS: Update. * doc/make.texi (Automatic Variables): Document $|. 2005-03-03 Boris Kolpackov * read.c (record_files): Instead of substituting % with actual stem value in dependency list replace it with $*. This fixes stem triple expansion bug. * implicit.c (pattern_search): Copy stem to a separate buffer and make it a properly terminated string. Assign this buffer instead of STEM (which is not terminated) to f->stem. Instead of substituting % with actual stem value in dependency list replace it with $*. This fixes stem triple expansion bug. 2005-03-01 Paul D. Smith * commands.c (fatal_error_signal) [WINDOWS32]: Don't call kill() on Windows, as it takes a handle not a pid. Just exit. Fix from patch #3679, provided by Alessandro Vesely. * configure.in: Update check for sys_siglist[] from autoconf manual. * signame.c (strsignal): Update to use the new autoconf macro. 2005-03-01 Boris Kolpackov * read.c (record_files): Add a check for the list of prerequisites of a static pattern rule being empty. Fixes Savannah bug #12180. 2005-02-28 Paul D. Smith * doc/make.texi (Text Functions): Update docs to allow the end ordinal for $(wordlist ...) to be 0. * function.c (func_wordlist): Fail if the start ordinal for $(wordlist ...) is <1. Matches documentation. Resolves Savannah support request #103195. * remake.c (update_goal_chain): Fix logic for stopping in -q: previously we were stopping when !-q, exactly the opposite. This has been wrong since version 1.34, in 1994! (update_file): If we got an error don't break out to run more double-colon rules: just return immediately. Fixes Savannah bug #7144. 2005-02-27 Paul D. Smith * misc.c (end_of_token): Make argument const. * make.h: Update prototype. * function.c (abspath, func_realpath, func_abspath): Use PATH_VAR() and GET_PATH_MAX instead of PATH_MAX. * dir.c (downcase): Use PATH_VAR() instead of PATH_MAX. * read.c (record_files): Ditto. * variable.c (do_variable_definition): Ditto. * function.c (func_error): Create a new function $(info ...) 
that simply prints the message to stdout with no extras. (function_table_init): Add new function to the table. * NEWS: Add $(info ...) reference. * doc/make.texi (Make Control Functions): Document it. New feature: if the system supports symbolic links, and the user provides the -L/--check-symlink-time flag, then use the latest mtime between the symlink(s) and the target file. * configure.in (MAKE_SYMLINKS): Check for lstat() and readlink(). If both are available, define MAKE_SYMLINKS. * main.c: New variable: check_symlink_flag. (usage): Add a line for -L/--check-symlink-times to the help string. (switches): Add -L/--check-symlink-times command line argument. (main): If MAKE_SYMLINKS is not defined but the user specified -L, print a warning and disable it again. * make.h: Declare check_symlink_flag. * remake.c (name_mtime): If MAKE_SYMLINKS and check_symlink_flag, if the file is a symlink then check each link in the chain and choose the NEWEST mtime we find as the mtime for the file. The newest mtime might be the file itself! * NEWS: Add information about this new feature. * doc/make.texi (Options Summary): Add -L/--check-symlink-times docs. Avoid core dumps described in Savannah bug # 12124: * file.c: New variable snapped_deps remember whether we've run snap_deps(). (snap_deps): Set it. * filedef.h: Extern it. * read.c (record_files): Check snapped_deps; if it's set then we're trying to eval a new target/prerequisite relationship from within a command script, which we don't support. Fatal. 2005-02-28 Boris Kolpackov Implementation of the .DEFAULT_TARGET special variable. * read.c (eval): If necessary, update default_target_name when reading rules. * read.c (record_files): Update default_target_file if default_target_name has changed. * main.c (default_target_name): Define. * main.c (main): Enter .DEFAULT_TARGET as make variable. If default_target_name is set use default_target_file as a root target to make. * filedef.h (default_target_name): Declare. * dep.h (free_dep_chain): * misc.c (free_dep_chain): Change to operate on struct nameseq and change name to free_ns_chain. * file.c (snap_deps): Update to use free_ns_chain. 2005-02-27 Boris Kolpackov Implementation of the second expansion in explicit rules, static pattern rules and implicit rules. * read.c (eval): Refrain from chopping up rule's dependencies. Store them in a struct dep as a single dependency line. Remove the code that implements SySV-style automatic variables. * read.c (record_files): Adjust the code that handles static pattern rules to expand all percents instead of only the first one. Reverse the order in which dependencies are stored so that when the second expansion reverses them again they appear in the makefile order (with some exceptions, see comments in the code). Remove the code that implements SySV-style automatic variables. * file.c (snap_deps): Implement the second expansion and chopping of dependency lines for explicit rules. * implicit.c (struct idep): Define an auxiliary data type to hold implicit rule's dependencies after stem substitution and expansion. * implicit.c (free_idep_chain): Implement. * implicit.c (get_next_word): Implement helper function for parsing implicit rule's dependency lines into words taking into account variable expansion requests. Used in the stem splitting code. * implicit.c (pattern_search): Implement the second expansion for implicit rules. Also fixes bug #12091. * commands.h (set_file_variables): Declare. * commands.c (set_file_variables): Remove static specifier. 
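A small sketch of the second-expansion behaviour implemented in the 2005-02-27 entry above (and later made optional via the .SECONDEXPANSION special target described in the 2005-10-24 entry); the file names are hypothetical:

    .SECONDEXPANSION:
    # Prerequisite lists are expanded a second time, after the target is
    # known, so escaped references such as $$@ become the automatic
    # variable $@ during that second pass.
    main.o: $$(patsubst %.o,%.c,$$@)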
* dep.h (free_dep_chain): Declare. * misc.c (free_dep_chain): Implement. * variable.h (variable_expand_for_file): Declare. * expand.c (variable_expand_for_file): Remove static specifier. * make.h (strip_whitespace): Declare. * function.c (strip_whitespace): Remove static specifier. 2005-02-26 Paul D. Smith * main.c (main): Check for ferror() when reading makefiles from stdin. Apparently some shells in Windows don't close pipes properly and require this check. 2005-02-24 Jonathan Grant * configure.in: Add MinGW configuration options, and extra w32 code directory. * Makefile.am: Add MinGW configuration options, and extra w32 code directory. * main.c: Determine correct program string (after last \ without .exe). * subproc/sub_proc.c: `GetExitCodeProcess' from incompatible pointer type fix x2 * w32/Makefile.am: Import to build win32 lib of sub_proc etc. * subproc/w32err.c: MSVC thread directive not applied to MinGW builds. * tests/run_make_tests.pl, tests/test_driver.pl: MSYS testing environment support. 2004-04-16 Dmitry V. Levin * function.c (func_shell): When initializing error_prefix, check that reading file name is not null. This fixes long-standing segfault in cases like "make 'a1=$(shell :)' 'a2:=$(a1)'". 2005-02-09 Paul D. Smith * maintMakefile: Update the CVS download URL to simplify them. Also, the ftp://ftp.gnu.org/GNUinfo site was removed so I'm downloading the .texi files from Savannah now. Fixed these issues reported by Markus Mauhart : * main.c (handle_non_switch_argument): Only add variables to command_variables if they're not already there: duplicate settings waste space and can be confusing to read. * w32/include/sub_proc.h: Remove WINDOWS32. It's not needed since this header is never included by non-WINDOWS32 code, and it requires to define which isn't always included first. * dir.c (read_dirstream) [MINGW]: Use proper macro names when testing MINGW32 versions. * main.c (log_working_directory): flush stdout to be sure the WD change is printed before any stderr messages show up. 2005-02-01 Paul D. Smith * maintMakefile (po_repo): Update the GNU translation site URL. 2004-12-01 Paul D. Smith * main.c (main): Change char* env_shell to struct variable shell_var. * variable.c (target_environment): Use new shell_var. 2004-11-30 Paul D. Smith * configure.in: The old way we avoided creating build.sh from build.sh.in before build.sh.in exists doesn't work anymore; we have to use raw M4 (thanks to Andreas Schwab for the help!). This also keeps automake from complaining. * Makefile.am (README): Add a dummy target so automake won't complain that this file doesn't exist when we checkout from CVS. * maintMakefile (.dep_segment): Rewrite this rule since newer versions of automake don't provide DEP_FILES. 2004-11-30 Boris Kolpackov Implementation of `realpath' and `abspath' built-in functions. * configure.in: Check for realpath. * function.c (abspath): Return an absolute file name that does not contain any `.' or `..' components, nor repeated `/'. * function.c (func_abspath): For each name call abspath. * function.c (func_realpath): For each name call realpath from libc or delegate to abspath if realpath is not available. * doc/make.texi (Functions for File Names): Document new functions. * doc/make.texi (Quick Reference): Ditto. 2004-11-28 Paul D. Smith * main.c (main) [WINDOWS32]: Remove any trailing slashes from -C arguments. Fixes bug #10252. Fix for bug #1276: Handle SHELL according to POSIX requirements. * main.c (main): Set SHELL to v_noexport by default. 
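The 2004-11-30 entry above adds the `realpath' and `abspath' built-ins; a brief illustrative sketch of the difference (the paths are only examples):

    # abspath removes `.' and `..' components and repeated `/' purely
    # textually; realpath additionally resolves symlinks and returns the
    # empty string for files that do not exist.
    SRC_DIR := $(abspath ./src/../src)
    PASSWD  := $(realpath /etc/passwd)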
Remember the original environment setting of SHELL in the env_shell variable. * main.h: Export new env_shell variable. * variable.c (target_environment): If we find a v_noexport variable for SHELL, add a SHELL variable with the env_shell value. * doc/make.texi (Quick Reference): Document the POSIX behavior. * doc/make.texi (Variables/Recursion): Ditto. 2004-11-28 Paul D. Smith * main.c (find_and_set_default_shell) [WINDOWS32]: check for equality of "cmd"/"cmd.exe", not inequality. Fixes bug #11155. Patch by Alessandro Vesely. 2004-11-12 Paul D. Smith * job.c (child_execute_job) [VMS]: Don't treat "#" as a comment on the command line if it's inside a string. Patch by: Hartmut Becker 2004-10-21 Boris Kolpackov * function.c (func_lastword): New function: return last word from the list of words. * doc/make.texi: Document $(lastword ). Fix broken links in Quick Reference section. 2004-10-06 Paul D. Smith Apply patch from Alessandro Vesely, provided with bug # 9748. Fix use of tmpnam() to work with Borland C. * job.c (construct_command_argv_internal) [WINDOWS32]: Remove construction of a temporary filename, and call new function create_batch_filename(). (create_batch_filename) [WINDOWS32]: New function to create a temporary filename. 2004-10-05 Boris Kolpackov * read.c (record_target_var): Expand simple pattern-specific variable. * variable.c (initialize_file_variables): Do not expand simple pattern-specific variable. 2004-09-28 Boris Kolpackov * remake.c (update_file_1): When rebuilding makefiles inherit dontcare flag from a target that triggered update. 2004-09-27 Boris Kolpackov * variable.c (initialize_file_variables): Mark pattern-specific variable as a per-target and copy export status. 2004-09-21 Boris Kolpackov * file.c (snap_deps): Mark .PHONY prerequisites as targets. * implicit.c (pattern_search): When considering an implicit rule's prerequisite check that it is actually a target rather then just an entry in the file hashtable. 2004-09-21 Paul D. Smith * read.c (readstring): Fix some logic errors in backslash handling. (eval): Remove some unnecessary processing in buffer handling. (record_target_var): Assert that parse_variable_definition() succeeded. Reported by: Markus Mauhart . * misc.c: Removed the sindex() function. All instances of this function were trivially replaceable by the standard strstr() function, and that function will always have better (or certainly no worse) performance than the very simple-minded algorithm sindex() used. This can matter with complex makefiles. * make.h: Remove the prototype for sindex(). * function.c (subst_expand): Convert sindex() call to strstr(). This means we no longer need to track the TLEN value so remove that. (func_findstring): Convert sindex() to strstr(). * commands.c (chop_commands): Convert sindex() calls to strstr(). Suggested by: Markus Mauhart . * main.c (find_and_set_default_shell) [WINDOWS32]: Implement the idea behind Savannah Patch #3144 from david.baird@homemail.com. If SHELL is set to CMD.EXE then assume it's batch-mode and non-unixy. I wrote the code differently from the patch, though, to make it safer. This also resolves bug #9174. 2004-09-20 Paul D. Smith * expand.c (variable_expand_string): Modify to invoke patsubst_expand() instead of subst_expand(); the latter didn't handle suffix patterns correctly. * function.c (subst_expand): Remove the SUFFIX_ONLY parameter; it was used only from variable_expand_string() and is no longer used there. (func_subst): Ditto, on call to subst_expand(). 
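A tiny illustration of the $(lastword ...) function from the 2004-10-21 entry above (variable names are hypothetical):

    WORDS := alpha beta gamma
    # $(firstword $(WORDS)) is `alpha'; $(lastword $(WORDS)) is `gamma'.
    LAST := $(lastword $(WORDS))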
(patsubst_expand): Require the percent pointers to point to the character after the %, not to the % itself. * read.c (record_files): New call criteria for patsubst_expand(). * variable.h: Remove SUFFIX_ONLY from subst_expand() prototype. This is to fix a bug reported by Markus Mauhart . 2004-09-19 Paul D. Smith * function.c (subst_expand): Fix a check in by_word: look for a previous blank if we're beyond the beginning of the string, not the beginning of the word. Bugs reported by Markus Mauhart . 2004-05-16 Paul D. Smith * remake.c (update_goal_chain): Change the argument specifying whether we're rebuilding makefiles to be a global variable, REBUILDING_MAKEFILES. (complain): Extract the code that complains about no rules to make a target into a separate function. (update_file_1): If we tried to rebuild a file during the makefile rebuild phase and it was dontcare, then no message was printed. If we then try to build the same file during the normal build, print a message this time. (remake_file): Don't complain about un-remake-able files when we're rebuilding makefiles. 2004-05-11 Paul D. Smith * job.c (construct_command_argv_internal): OS/2 patches from Andreas Buening . 2004-05-10 Paul D. Smith * remake.c (update_file): Don't walk the double-colon chain unless this is a double-colon rule. Fix suggested by Boris Kolpackov . * makefile.vms (CFLAGS): Remove glob/globfree (see readme.vms docs) * readme.vms: New section describing OpenVMS support and issues. * default.c (default_variables): Add support for IA64. * job.c (tryToSetupYAst) [VMS]: On VMS running make in batch mode without some privilege aborts make with the error %SYSTEM-F-NOPRIV. It happens when setting up a handler for pressing Ctrl+Y and the input device is no terminal. The change catches this error and just continues. Patches by Hartmut Becker 2004-04-25 Paul D. Smith * commands.c (set_file_variables): Set $< properly in the face of order-only prerequisites. Patch from Boris Kolpackov 2004-04-21 Bob Byrnes * main.c (main): Notice failures to remake makefiles. 2004-03-28 Paul D. Smith Patches for Acorn RISC OS by Peter Naulls * job.c: No default shell for RISC OS. (load_too_high): Hard-code the return to 1. (construct_command_argv_internal): No sh_chars or sh_cmds. * getloadavg.c: Don't set LOAD_AVE_TYPE on RISC OS. 2004-03-20 Paul D. Smith * variable.c (do_variable_definition): Don't append from the global set if a previous non-appending target-specific variable definition exists. Reported by Oliver Schmidt (with fix). * expand.c (reference_variable): Don't give up on variables with no value that have the target-specific append flag set: they might have a value after all. Reported by Oliver Schmidt (with fix) and also by Maksim A. Nikulin . * rule.c (count_implicit_rule_limits): Don't delete patterns which refer to absolute pathnames in directories that don't exist: some portion of the makefile could create those directories before we match the pattern. Fixes bugs #775 and #108. Fixes from Jonathan R. Grant : * main.c (main): Free makefile_mtimes if we have any. * README.W32.template: Update documentation for the current status of the MS-Windows port. * NMakefile.template (MAKE): Add "MAKE = nmake". A conflicting environment variable is sometimes already defined which causes the build to fail. * main.c (debug_signal_handler): Only define this function if SIGUSR1 is available. Fixes for OS/2 from Andreas Beuning : * configure.in [OS/2]: Relocate setting of HAVE_SA_RESTART for OS/2. * README.OS2.template: Documentation updates. 
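An illustrative sketch of the target-specific append behaviour fixed in the 2004-03-20 entry above; the flags and file name are hypothetical:

    CFLAGS := -O2
    # The plain target-specific definition shadows the global value, so the
    # later += appends to `-g' rather than to the global `-O2'.
    foo.o: CFLAGS = -g
    foo.o: CFLAGS += -DDEBUG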
* build.template: Add LIBINTL into LOADLIBES. Add $CFLAGS to the link line for safety. * maintMakefile (build.sh.in): Remove an extraneous ")". * job.c (child_execute_job): Close saved FDs. * job.c (exec_command) [OS/2]: exec_command(): If the command can't be exec'ed and if the shell is not Unix-sh, then try again with argv = { "cmd", "/c", ... }. Normally, this code is never reached for the cmd shell unless the command really doesn't exist. (construct_command_argv_internal) [OS/2]: The code for cmd handling now uses new_argv = { "cmd", "/c", "original line", NULL}. The CMD builtin commands are case insensitive so use strcasecmp(). 2004-03-19 Paul D. Smith * read.c (do_define): Re-order line counter increment so the count is accurate (we were losing one line per define). Reported by Dave Yost . 2004-03-06 Paul D. Smith * configure.in (HAVE_ANSI_COMPILER): Define if we have an ANSI/ISO compiler. * make.h: Convert uses of __STDC__ to HAVE_ANSI_COMPILER. * misc.c (message,error,fatal): Ditto. * configh.dos.template: Define HAVE_ANSI_COMPILER. * config.h.W32.template: Ditto. * config.h-vms.template: Ditto. * config.ami.template: Ditto. 2004-03-04 Paul D. Smith * README.template: Add a note about broken /bin/sh on SunOS 4.1.3_U1 & 4.1.4. Fix up Savannah links. * misc.c (message, error, fatal): Don't use "..." if we're using varargs. ansi2knr should handle this but it doesn't work: it translates "..." to va_dcl etc. but _AFTER_ the preprocessor is done. On many systems (SunOS for example) va_dcl is a #define. So, force the use of the non-"..." version on pre-ANSI compilers. * maintMakefile (sign-dist): Create some rules to help automate the new GNU ftp upload method. 2004-02-24 Paul D. Smith * config.h.W32.template: Add HAVE_STDARG_H * config.h-vms.template: Ditto. * config.ami.template: Ditto. 2004-02-23 Jonathan Grant * README.W32.template: Add a notation about -j with BATCH_MODE_ONLY. * build_w32.bat: Remove extra "+". 2004-02-23 Paul D. Smith * make.h: Create an UNUSED macro to mark unused parameters. * (many): Clean up warnings by applying UNUSED, fixing signed/unsigned incompatibilities, etc. * acinclude.m4 (AC_STRUCT_ST_MTIM_NSEC): Add quoting to silence autoconf warnings. * filedef.h: Name the command_state enumeration. * file.c (set_command_state): Use the enumeration in the function argument. * configure.in: Explicitly set SET_MAKE to empty, to disable MAKE=make even when no make already exists. Fix bug #3823. 2004-02-22 Paul D. Smith * maintMakefile: Perl script to clean up all non-CVS files. Use it on all the subdirectories for the cvs-clean target. * main.c (decode_switches): Require non-empty strings for all our string command-line options. Fixes Debian bug # 164165. * configure.in: Check for stdarg.h and varargs.h. * make.h (USE_VARIADIC): Set this if we can use variadic functions for printing messages. * misc.c: Check USE_VARIADIC instead of (obsolete) HAVE_STDVARARGS. (message): Ditto. (error): Ditto. (fatal): Ditto. A number of patches for OS/2 support from Andreas Buening : * job.c (child_handler) [OS/2]: Allow this on OS/2 but we have to disable the SIGCHLD handler. (reap_children) [OS/2]: Remove special handling of job_rfd. (set_child_handler_action_flags) [OS/2]: Use this function in OS/2. (new_job) [OS/2]: Disable the SIGCHLD handler on OS/2. * main.c (main) [OS/2]: Special handling for paths in OS/2. * configure.in [OS/2]: Force SA_RESTART for OS/2. * Makefile.am (check-regression): Use $(EXEEXT) for Windows-type systems. 2004-02-21 Paul D. 
Smith * w32/subproc/sub_proc.c (process_easy) [W32]: Christoph Schulz reports that if process_begin() fails we don't handle the error condition correctly in all cases. * w32/subproc/w32err.c (map_windows32_error_to_string): Make sure to have a newline on the message. * job.c (construct_command_argv_internal): Add "test" to UNIX sh_cmds[]. Fixes Savannah bug # 7606. 2004-02-04 Paul D. Smith * job.c (vms_handle_apos) [VMS]: Fix various string handling situations in VMS DCL. Fixes Savannah bug #5533. Fix provided by Hartmut Becker . 2004-01-21 Paul D. Smith * job.c (load_too_high): Implement an algorithm to control the "thundering herd" problem when using -l to control job creation via the load average. The system only recomputes the load once a second but we can start many jobs in a second. To solve this we keep track of the number of jobs started in the last second and apply a weight to try to guess what a correct load would be. The algorithm was provided by Thomas Riedl . Also fixes bug #4693. (reap_children): Decrease the job count for this second. (start_job_command): Increase the job count for this second. * read.c (conditional_line): Expand the text after ifn?def before checking to see if it's a single word. Fixes bug #7257. 2004-01-09 Paul D. Smith * file.c (print_file): Recurse to print all targets in double-colon rules. Fixes bug #4518, reported (with patch) by Andrew Chatham . 2004-01-07 Paul D. Smith * acinclude.m4: Remove make_FUNC_SETVBUF_REVERSED. * configure.in: Change make_FUNC_SETVBUF_REVERSED to AC_FUNC_SETVBUF_REVERSED. * doc/make.texi (Target-specific): Fix Savannah bug #1772. (MAKE Variable): Fix Savannah bug #4898. * job.c (construct_command_argv_internal): Add "!" to the list of shell escape chars. POSIX sh allows it to appear before a command, to negate the exit code. Fixes bug #6404. * implicit.c (pattern_search): When matching an implicit rule, remember which dependencies have the ignore_mtime flag set. Original fix provided in Savannah patch #2349, by Benoit Poulot-Cazajous . 2003-11-22 Paul D. Smith * README.W32.template (Outputs): Clarification on -j with BATCH_MODE_ONLY_SEHLL suggested by Jonathan R. Grant . 2003-11-02 Paul D. Smith * function.c (func_if): Strip all the trailing whitespace from the condition, then don't expand it. Fixed bug # 5798. * expand.c (recursively_expand_for_file): If we're expanding a variable with no file context, then use the variable's context. Fixes bug # 6195. 2003-10-21 Paul D. Smith * main.c (log_working_directory): Add newlines to printf()s. * README.cvs: Add a note to ignore warnings during autoreconf. * maintMakefile (po_repo): Set a new URL for PO file updates. (get-config/config.guess get-config/config.sub): Get these files from the Savannah config project instead of ftp.gnu.org. 2003-10-05 Paul Eggert * main.c (main): Avoid potential subscript error if environ has short strings. 2003-08-22 Paul D. Smith * misc.c (xmalloc, xrealloc): Add one to 0 sizes, to cater to systems which don't yet implement the C89 standard :-/. 2003-07-18 Paul D. Smith * dir.c (directory_contents_hash_1, directory_contents_hash_1) [WINDOWS32]: Initialize hash. 2003-06-19 Earnie Boyd * dir.c (read_dirstream): Provide a workaround for broken versions of the MinGW dirent structure. 2003-05-30 Earnie Boyd * w32/include/dirent.h: Add __MINGW32__ filter. 2003-05-30 Earnie Boyd * make.h: Add global declaration of *make_host. * main.c (print_usage): Remove local declaration of *make_host. (print_version): Display "This program built for ..." 
after Copyright notice. 2003-05-30 Earnie Boyd * doc/make.texi: Change "ifinfo" to "ifnottex" as suggested by the execution of "makeinfo --html make.texi". 2003-04-30 Paul D. Smith * build.template: Make some changes to maybe allow this script to work on DOS/Windows/OS2 systems. Suggested by Andreas Buening. * README.OS2.template: New file for OS/2 support. Original contributed by Andreas Buening. * configure.in: Invoke new pds_AC_DOS_PATHS macro to test for DOS-style paths. 2003-04-19 Paul D. Smith Fix bug #1405: allow a target to match multiple pattern-specific variables. * rule.c (create_pattern_var, lookup_pattern_var): Move these to variable.c, where they've always belonged. * rule.h: Move the prototypes and struct pattern_var as well. * variable.c (initialize_file_variables): Invoke lookup_pattern_var() in a loop, until no more matches are found. If a match is found, create a new variable set for the target's pattern variables. Then merge the contents of each matching pattern variable set into the target's pattern variable set. (lookup_pattern_var): Change this function to be usable in a loop. It takes a starting position: if NULL, start at the beginning; if non-NULL, start with the pattern variable after that position, and return the next matching pattern. (create_pattern_var): Create a unique instance of pattern-specific variables for every definition in the makefile. Don't combine the same pattern together. This allows us to process the variable handling properly even when the same pattern is used multiple times. (parse_variable_definition): New function: break out the parsing of a variable definition line from try_variable_definition. (try_variable_definition): Call parse_variable_definition to parse. (print_variable_data_base): Print out pattern-specific variables. * variable.h (struct variable): Remember when a variable is conditional. Also remember its flavor. (struct pattern_var): Instead of keeping a variable set, we just keep a single variable for each pattern. * read.c (record_target_var): Each pattern variable contains only a single variable, not a set, so create it properly. * doc/make.texi (Pattern-specific): Document the new behavior. 2003-04-17 Paul D. Smith * dir.c (file_exists_p) [VMS]: Patch provided with Bug #3018 by Jean-Pierre Portier . I don't understand the file/directory naming rules for VMS so I can't tell whether this is correct or not. 2003-04-09 Paul D. Smith * configure.in (HAVE_DOS_PATHS): Define this on systems that need DOS-style pathnames: backslash separators and drive specifiers. 2003-03-28 Paul D. Smith * file.c (snap_deps): If .SECONDARY with no targets is given, set the intermediate flag on all targets. Fixes bug #2515. 2003-03-24 Paul D. Smith * configure.in, Makefile.am, glob/Makefile.am, doc/Makefile.am: Upgrade to autoconf 2.57 and automake 1.7.3. * job.c: More OS/2 changes from Andreas Buening. * file.c (print_file): Fix variable initialization. Fixes bug #2892. * remake.c (notice_finished_file): * make.h (ENULLLOOP): Set errno = 0 before invoking the command; some calls (like readdir()) return NULL in valid situations without resetting errno. Fixes bug #2846. 2003-02-25 Paul D. Smith Port to OS/2 (__EMX__) by Andreas Buening . * job.c (_is_unixy_shell) [OS/2]: New function. Set default shell to /bin/sh. (reap_children): Close the job_rfd pipe here since we don't use a SIGCHLD handler. (set_child_handler_action_flags): define this to empty on OS/2. 
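A minimal sketch of the fix for bug #1405 described in the 2003-04-19 entry above, where a target may now match several pattern-specific variable definitions; the directory layout is hypothetical:

    # Both pattern-specific definitions apply to a file such as build/foo.o.
    %.o:       CFLAGS += -O2
    build/%.o: CFLAGS += -DIN_BUILD_TREE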
(start_job_command): Close the jobserver pipe and use child_execute_job() instead of fork/exec. (child_execute_job): Rewrite to handle stdin/stdout FDs and spawn rather than exec'ing, then reconfigure stdin/stdout. (exec_command): Rewrite to use spawn instead of exec. Return the PID of the child. * main.c (main) [OS/2]: Call initialize_main(). Handle argv[0] as in DOS. Handle the TEMP environment variable as in DOS. Don't use a SIGCHLD handler on OS/2. Choose a shell as in DOS. Don't use -j in DOS mode. Use child_execute_job() instead of exec_command(). * function.c (func_shell) [OS/2]: Can't use fork/exec on OS/2: use spawn() instead. * job.h [OS/2]: Move CLOSE_ON_EXEC here from job.c. Add prototypes that return values. * remake.c (f_mtime) [OS/2]: Handle FAT timestamp offsets for OS/2. * read.c (readline) [OS/2]: Don't handle CRLF specially on OS/2. * default.c (default_suffixes) [OS/2]: Set proper default suffixes for OS/2. * vpath.c (construct_vpath_list) [OS/2]: Handle OS/2 paths like DOS paths. 2003-02-24 Paul D. Smith * default.c [VMS]: New default rules for .cxx -> .obj compiles. * job.c (child_execute_job) [VMS]: New code for handling spawn(). (child_execute_job) [VMS]: Handle error status properly. Patches provided by Hartmut Becker . * function.c (func_shell): Use EINTRLOOP() while reading from the subshell pipe (Fixes bug #2502). * job.c (free_child): Use EINTRLOOP() while writing tokens to the jobserver pipe. * main.c (main): Ditto. 2003-01-30 Paul D. Smith * read.c (eval): eval() was not fully reentrant, because the collapsed buffer was static. Change it to be an automatic variable so that eval() can be invoked recursively. Fixes bug # 2238. (eval): Apply patch # 1022: fix memory reference error on long target-specific variable lines. Patch provided by Steve Brown . * function.c (check_numeric): Combine the is_numeric() function into this function, since it's only called from one place. Constify this function. Have it print the incorrect string in the error message. Fixes bug #2407. (strip_whitespace): Constify. (func_if): Constify. * expand.c (expand_argument): Constify. 2003-01-29 Paul D. Smith Fix bug # 2169, also reported by other people on various systems. * make.h: Some systems, such as Solaris and PTX, do not fully implement POSIX-compliant SA_RESTART functionality; important system calls like stat() and readdir() can still fail with EINTR even if SA_RESTART has been set on the signal handler. So, introduce macros EINTRLOOP() and ENULLLOOP() which can loop on EINTR for system calls which return -1 or 0 (NULL), respectively, on error. Also, remove the old atomic_stat()/atomic_readdir() and HAVE_BROKEN_RESTART handling. * configure.in: Remove setting of HAVE_BROKEN_RESTART. * arscan.c (ar_member_touch): Use EINTRLOOP() to wrap fstat(). * remake.c (touch_file): Ditto. * commands.c (delete_target): Use EINTRLOOP() to wrap stat(). * read.c (construct_include_path): Ditto. * remake.c (name_mtime): Ditto. * vpath.c (selective_vpath_search): Ditto. * dir.c (find_directory): Ditto. (local_stat): Ditto. (find_directory): Use ENULLLOOP() to wrap opendir(). (dir_contents_file_exists_p): Use ENULLLOOP() to wrap readdir(). * misc.c: Remove HAVE_BROKEN_RESTART, atomic_stat(), and atomic_readdir() handling. 2003-01-22 Paul D. Smith * function.c (func_call): Fix Bug #1744. If we're inside a recursive invocation of $(call ...), mask any of the outer invocation's arguments that aren't used by this one, so that this invocation doesn't "inherit" them accidentally. 2002-12-05 Paul D. 
Smith * function.c (subst_expand): Valery Khamenia reported a pathological performance hit when doing substitutions on very large values with lots of words: turns out we were invoking strlen() a ridiculous number of times. Instead of having each call to sindex() call strlen() again, keep track of how much of the text we've seen and pass the length to sindex(). 2002-11-19 Paul D. Smith * README.cvs, configure.in: Upgrade to require autoconf 2.56. 2002-11-16 Paul D. Smith * NMakefile.template (OBJS): Add hash.c object file. * SMakefile.template (srcs): Ditto. * Makefile.ami (objs): Ditto. * build_w32.bat: Ditto. * Makefile.DOS.template: Remove extra dependencies. 2002-10-25 Paul D. Smith * expand.c (install_variable_buffer): New function. Install a new variable_buffer context and return the previous one. (restore_variable_buffer): New function. Free the current variable_buffer context and put a previously saved one back. * variable.h: Prototypes for {install,restore}_variable_buffer. * function.c (func_eval): Push a new variable_buffer context before we eval, then restore the old one when we're done. Fixes Bug #1517. * read.c (install_conditionals): New function. Install a new conditional context and return the previous one. (restore_conditionals): New function. Free the current conditional context and put a previously saved one back. (eval): Use the {install,restore}_conditionals for "include" handling. (eval_buffer): Use {install,restore}_conditionals to preserve the present conditional state before we evaluate the buffer. Fixes Bug #1516. * doc/make.texi (Quick Reference): Add references to $(eval ...) and $(value ...). (Recursion): Add a variable index entry for CURDIR. * README.cvs: Update to appropriate versions. * Makefile.am (nodist_loadavg_SOURCES): automake gurus point out I don't need to copy loadavg.c: automake is smart enough to create it for me. Still have a bug in automake related to ansi2knr tho. 2002-10-14 Paul D. Smith * remake.c (notice_finished_file): Only touch targets if they have at least one command (as per POSIX). Resolve Bug #1418. * *.c: Convert to using ANSI C-style function definitions. * Makefile.am: Enable the ansi2knr feature of automake. * configure.in: ditto. 2002-10-13 Paul D. Smith * commands.c (set_file_variables): Bug #1379: Don't use alloca() for automatic variable values like $^, etc. In the case of very large lists of prerequisites this causes problems. Instead reuse a static buffer (resizeable) for each variable. * read.c (eval): Fix Bug #1391: allow "export" keyword in target-specific variable definitions. Check for it and set an "exported" flag. (record_target_var): Set the export field to v_export if the "exported" flag is set. * doc/make.texi (Target-specific): Document the ability to use "export". * doc/make.texi: Change the name of the section on automatic variables from "Automatic" to "Automatic Variables". Added text clarifying the scope of automatic variables. 2002-10-04 Paul D. Smith * read.c (eval): Allow SysV $$@ variables to use {} braces as well as () braces. (record_files): Ditto. * expand.c (variable_expand_string): In $(A:x=y) expansion limit the search for the '=' to only within the enclosing parens. 2002-10-03 Paul D. Smith Version 3.80 released. * dir.c: Change hash functions to use K&R function definition style. * function.c: Ditto. * read.c: Ditto. * variable.c: Ditto. Update to automake 1.7. * Makefile.am (AUTOMAKE_OPTIONS): Update to require 1.7. (pdf): Remove this target as automake now provides one. 
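A small sketch of the "export" keyword in target-specific variable definitions allowed by the 2002-10-13 entry above; the program name is hypothetical:

    # DEBUG_LEVEL is exported only into the environment of this target's
    # commands, not into the environment of every command make runs.
    run: export DEBUG_LEVEL := 3
    run: ; ./app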
* configure.in: Change AM_CONFIG_HEADER to AC_CONFIG_HEADERS. 2002-09-30 Martin P.J. Zinser * makefile.com: Updates for GNU make 3.80. * makefile.vms: Ditto. 2002-09-23 Paul D. Smith * read.c (enum make_word_type): Remove w_comment. (get_next_mword): Don't treat comment characters as special; where this function is used we will never see a comment (it's stripped before we get here) and treating comments specially means that targets like "foo\#bar" aren't handled properly. 2002-09-18 Paul D. Smith * doc/make.texi (Bugs): Update with some info on Savannah, etc. * read.c (eval): Expansion of arguments to export/unexport was ignoring all arguments after the first one. Change the algorithm to expand the whole line once, then parse the results. 2002-09-17 Paul D. Smith Fix Bug #940 (plus another bug I found while looking at this): * read.c (record_target_var): enter_file() will add a new entry if it's a double-colon target: we don't want to do that in this situation. Invoke lookup_file() and only enter_file() if it does not already exist. If the file we get back is a double-colon then add this variable to the "root" double-colon target. * variable.c (initialize_file_variables): If this file is a double-colon target but is not the "root" target, then initialize the root and make the root's variable list the parent of our variable list. 2002-09-13 Paul D. Smith * doc/make.texi (MAKE Variable): Add some indexing for "+". * hash.c (round_up_2): Get rid of a warning. 2002-09-12 Paul D. Smith * Makefile.am (loadavg_SOURCES, loadavg.c): Tiptoe around automake so it doesn't complain about getloadavg.c. * commands.c (set_file_variables): Make sure we always alloca() at least 1 character for the value of $? (for '\0'). 2002-09-11 Paul D. Smith * hash.h (STRING_COMPARE, ISTRING_COMPARE, STRING_N_COMPARE): Fix macro to use RESULT instead of the incorrect _RESULT_. * make.h (HAVE_BROKEN_RESTART): Add prototypes for atomic_stat() and atomic_readdir(). We need to #include dirent.h to get this to work. * misc.c (atomic_readdir): Fix typos. 2002-09-10 Paul D. Smith * read.c (eval): Expand variable lists given to export and unexport, so that "export $(LIST_OF_VARIABLES)" (etc.) works. (conditional_line): Ditto for "ifdef". Fixes bug #103. * doc/make.texi (Variables/Recursion): Document this. (Conditional Syntax): And here. 2002-09-09 Paul D. Smith * configure.in: Check for memmove(). 2002-09-07 Paul D. Smith * configure.in (HAVE_BROKEN_RESTART): Define this on PTX systems; Michael Sterrett reports that while it has SA_RESTART, it does not work properly. * misc.c (atomic_stat): If HAVE_BROKEN_RESTART, create a function that invokes stat() and loops to do it again if it returns EINTR. (atomic_readdir): Ditto, with readdir(). * make.h (stat, readdir): If HAVE_BROKEN_RESTART, alias stat() and readdir() to atomic_stat() and atomic_readdir(). 2002-09-04 Paul D. Smith * implicit.c (pattern_search): Daniel reports that GNU make sometimes doesn't recognize that targets can be made, when directories can be created as prerequisites. He reports that changing the order of predicates in the DEP->changed flag test so that lookup_file() is always performed, solves this problem. 2002-08-08 Paul D. Smith * configure.in: Require a newer version of gettext. * misc.c (perror_with_name): Translate the format string (for right-to-left language support). (pfatal_with_name): Ditto. * main.c: Create a static array of strings to store the usage text. This is done to facilitate translations. 
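The 2002-09-10 entry above makes export/unexport and ifdef expand their arguments; an illustrative sketch (variable names are hypothetical):

    TOOL_VARS := CC CFLAGS LDFLAGS
    # The argument list is expanded first, so all three variables are exported.
    export $(TOOL_VARS)

    NAME := CFLAGS
    # ifdef likewise expands its argument before testing it.
    ifdef $(NAME)
    $(info $(NAME) is defined)
    endif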
(struct command_switch): Remove argdesc and description fields. (switches): Remove values for obsolete fields. (print_usage): Print each element of the usage array. * hash.c: Change function definitions to be K&R style. 2002-08-02 Paul D. Smith * NEWS: Remove the mention of .TARGETS; we aren't going to publish this one because it's too hard to get right. We'll look at it for a future release. * main.c (main): Don't create the .TARGETS variable. * variable.c (handle_special_var): Don't handle .TARGETS. 2002-08-01 Paul D. Smith * main.c (switches): Add a new option, -B (--always-make). If specified, make will rebuild all targets that it encounters even if they don't appear to be out of date. (always_make_flag): New flag. * make.h: Extern always_make_flag. * remake.c (update_file_1): Check always_make_flag; if it's set we will always rebuild any target we can, even if none of its prerequisites are newer. * NEWS: Mention it. * doc/make.texi (Shell Function): Make it clear that make variables marked as "export" are not passed to instances of the shell function. Add new introspection variables .VARIABLES and .TARGETS. * variable.c (handle_special_var): New function. If the variable reference passed in is "special" (.VARIABLES or .TARGETS), calculate the new value if necessary. .VARIABLES is handled here: walk through the hash of defined variables and construct a value which is a list of the names. .TARGETS is handled by build_target_list(). (lookup_variable): Invoke handle_special_var(). * file.c (build_target_list): Walk through the hash of known files and construct a list of the names of all the ones marked as targets. * main.c (main): Initialize them to empty (and as simple variables). * doc/make.texi (Special Variables): Document them. * NEWS: Mention them. * variable.h (struct variable): Add a new flag "exportable" which is true if the variable name is valid for export. * variable.c (define_variable_in_set): Set "exportable" when a new variable is defined. (target_environment): Use the "exportable" flag instead of re-checking the name here... an efficiency improvement. 2002-07-31 Paul D. Smith * config.h-vms.template: Updates to build on VMS. Thanks to Brian_Benning@aksteel.com for helping verify the build. * makefile.com: Build the new hash.c file. * hash.h: Use strcmpi(), not stricmp(), in the HAVE_CASE_INSENSITIVE_FS case. 2002-07-30 Paul D. Smith * hash.h (ISTRING_COMPARE, return_ISTRING_COMPARE): Add missing backslashes to the HAVE_CASE_INSENSITIVE_FS case. Reported by . 2002-07-10 Paul D. Smith * variable.c (pop_variable_scope): Remove variable made unused by new hash infrastructure. * read.c (dep_hash_cmp): Rewrite this to handle ignore_mtime comparisons as well as name comparisons. * variable.h: Add a prototype for new hash_init_function_table(). * file.c (lookup_file): Remove variables made unused by new hash infrastructure. * dir.c (directory_contents_hash_2): Missing return of hash value. (dir_contents_file_exists_p): Remove variables made unused by new hash infrastructure. Installed Greg McGary's integration of the hash functions from the GNU id-utils package: 2002-07-10 Greg McGary * scripts/functions/filter-out: Add literals to the pattern space in order to add complexity, and trigger use of an internal hash table. Fix documentation strings. * scripts/targets/INTERMEDIATE: Reverse order of files passed to expected `rm' command. 2002-07-10 Greg McGary * Makefile.am (SRCS): Add hash.c (noinst_HEADERS): Add hash.h * hash.c: New file, taken from id-utils.
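An illustrative use of the .VARIABLES introspection variable from the 2002-08-01 entry above (.TARGETS was later withdrawn, per the 2002-08-02 entry); the MY_ prefix and target name are hypothetical:

    MY_FOO := 1
    MY_BAR := 2
    # .VARIABLES expands to the names of all variables defined so far.
    list-vars: ; @echo $(filter MY_%,$(.VARIABLES))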
* hash.h: New file, taken from id-utils. * make.h (HASH, HASHI): Remove macros. (find_char_unquote): Change arglist in decl. (hash_init_directories): New function decl. * variable.h (hash.h): New #include. (MAKELEVEL_NAME, MAKELEVEL_LENGTH): New constants. * filedef.h (hash.h): New #include. (struct file) [next]: Remove member. (file_hash_enter): Remove function decl. (init_hash_files): New function decl. * ar.c (ar_name): Delay call to strlen until needed. * main.c (initialize_global_hash_tables): New function. (main): Call it. Use MAKELEVEL_NAME & MAKELEVEL_LENGTH. * misc.c (remove_comments): Pass char constants to find_char_unquote. * remake.c (notice_finished_file): Update last_mtime on `prev' chain. * dir.c (hash.h): New #include. (struct directory_contents) [next, files]: Remove members. [ctime]: Add member for VMS. [dirfiles]: Add hash-table member. (directory_contents_hash_1, directory_contents_hash_2, directory_contents_hash_cmp): New functions. (directories_contents): Change type to `struct hash_table'. (struct directory) [next]: Remove member. (directory_hash_1, directory_hash_2, directory_hash_cmp): New funcs. (directory): Change type to `struct hash_table'. (struct dirfile) [next]: Remove member. [length]: Add member. [impossible]: widen type to fill alignment gap. (dirfile_hash_1, dirfile_hash_2, dirfile_hash_cmp): New functions. (find_directory): Use new hash table package. (dir_contents_file_exists_p): Likewise. (file_impossible): Likewise. (file_impossible_p): Likewise. (print_dir_data_base): Likewise. (open_dirstream): Likewise. (read_dirstream): Likewise. (hash_init_directories): New function. * file.c (hash.h): New #include. (file_hash_1, file_hash_2, file_hash_cmp): New functions. (files): Change type to `struct hash_table'. (lookup_file): Use new hash table package. (enter_file): Likewise. (remove_intermediates): Likewise. (snap_deps): Likewise. (print_file_data_base): Likewise. * function.c (function_table_entry_hash_1, function_table_entry_hash_2, function_table_entry_hash_cmp): New functions. (lookup_function): Remove `table' argument. Use new hash table package. (struct a_word) [chain, length]: New members. (a_word_hash_1, a_word_hash_2, a_word_hash_cmp): New functions. (struct a_pattern): New struct. (func_filter_filterout): Pass through patterns noting boundaries and '%', if present. Note a_word length. Use a hash table if arglists are large enough to justify cost. (function_table_init): Renamed from function_table. (function_table): Declare as `struct hash_table'. (FUNCTION_TABLE_ENTRIES): New constant. (hash_init_function_table): New function. * read.c (hash.h): New #include. (read_makefile): Pass char constants to find_char_unquote. (dep_hash_1, dep_hash_2, dep_hash_cmp): New functions. (uniquize_deps): Use hash table to efficiently identify duplicates. (find_char_unquote): Accept two char-constant stop chars, rather than a string constant, avoiding zillions of calls to strchr. Tighten inner search loops to test only for desired delimiters. * variable.c (variable_hash_1, variable_hash_2, variable_hash_cmp): New functions. (variable_table): Declare as `struct hash_table'. (global_variable_set): Remove initialization. (init_hash_global_variable_set): New function. (define_variable_in_set): Use new hash table package. (lookup_variable): Likewise. (lookup_variable_in_set): Likewise. (initialize_file_variables): Likewise. (pop_variable_scope): Likewise. (create_new_variable_set): Likewise. (merge_variable_sets): Likewise. (define_automatic_variables): Likewise. 
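The filter/filter-out changes above are internal: sufficiently large word lists may now be matched through a hash table, while the visible behaviour stays the same, e.g. (file names hypothetical):

    SRCS      := main.c util.c test_main.c test_util.c
    # Yields "main.c util.c"; only the matching machinery changed.
    PROD_SRCS := $(filter-out test_%.c,$(SRCS))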
(target_environment): Likewise. (print_variable_set): Likewise. 2002-07-10 Paul D. Smith Implement the SysV make syntax $$@, $$(@D), and $$(@F) in the prerequisite list. A real SysV make will expand the entire prerequisites list _twice_: we don't do that as it's a big backward-compatibility problem. We only replace those specific variables. * read.c (record_files): Replace any $@, $(@D), and $(@F) variable references left in the list of prerequisites. Check for .POSIX as we record targets, so we can disable non-POSIX behavior while reading makefiles as well as running them. (eval): Check the prerequisite list to see if we have anything that looks like a SysV prerequisite variable reference. 2002-07-09 Paul D. Smith * doc/make.texi (Prerequisite Types): Add a new section describing order-only prerequisites. * read.c (uniquize_deps): If we have the same file as both a normal and order-only prereq, get rid of the order-only prereq, since the normal one supersedes it. 2002-07-08 Paul D. Smith * AUTHORS: Added Greg McGary to the AUTHORS file. * NEWS: Blurbed order-only prerequisites. * file.c (print_file): Show order-only deps properly when printing the database. * maintMakefile: Add "update" targets for wget'ing the latest versions of various external files. Taken from Makefile.maint in autoconf, etc. * dosbuild.bat: Somehow we got _double_ ^M's. Remove them. Reported by Eli Zaretskii . 2002-07-07 Paul D. Smith * po/*.po: Remove. We'll use wget to retrieve them at release time. * variable.c (do_variable_definition) [W32]: On W32 using cmd rather than a shell you get an exception. Make sure we look up the variable. Patch provided by Eli Zaretskii . * remake.c (notice_finished_file): Fix handling of -t flag. Patch provided by Henning Makholm . * implicit.c (pattern_search): Some systems apparently run short of stack space, and using alloca() in this function caused an overrun. I modified it to use xmalloc() on the two variables which seemed like they might get large. Fixes Bug #476. * main.c (print_version): Update copyright notice to conform with GNU standards. (print_usage): Update help output. * function.c (func_eval): Create a new make function, $(eval ...). Expand the arguments, put them into a buffer, then invoke eval_buffer() on the resulting string. (func_quote): Create a new function, $(quote VARNAME). Inserts the value of the variable VARNAME without expanding it any further. * read.c (struct ebuffer): Change the linebuffer structure to an "eval buffer", which can be either a file or a buffer. (eval_makefile): Move the code in the old read_makefile() which located a makefile into here: create a struct ebuffer with that information. Have it invoke the new function eval() with that ebuffer. (eval_buffer): Create a new function that creates a struct ebuffer that holds a string buffer instead of a file. Have it invoke eval() with that ebuffer. (eval): New function that contains the guts of the old read_makefile() function: this function parses makefiles. Obtains data to parse from the provided ebuffer. Some modifications to make the flow of the function cleaner and clearer. Still could use some work here... (do_define): Takes a struct ebuffer instead of a FILE*. Read the contents of the define/endef variable from the ebuffer. (readstring): Read the next line from a string-style ebuffer. (readline): Read the next line from an ebuffer. If it's a string ebuffer, invoke readstring(). If it's a FILE* ebuffer, read it from the file. 
* dep.h (eval_buffer): Prototype eval_buffer(); * variable.c (do_variable_definition): Make sure that all non-target-specific variables are registered in the global set. If we're invoked from an $(eval ...) we might be inside a $(call ...) or other function which has pushed a variable scope; we still want to define our variables from evaluated makefile code in the global scope. 2002-07-03 Greg McGary * dep.h (struct dep) [ignore_mtime]: New member. [changed]: convert to a bitfield. * implicit.c (pattern_search): Zero ignore_mtime. * main.c (main, handle_non_switch_argument): Likewise. * rule.c (convert_suffix_rule): Likewise. * read.c (read_all_makefiles, read_makefile, multi_glob): Likewise. (read_makefile): Parse '|' in prerequisite list. (uniquize_deps): Consider ignore_mtime when comparing deps. * remake.c (update_file_1, check_dep): Don't force remake for dependencies that have d->ignore_mtime. * commands.c (FILE_LIST_SEPARATOR): New constant. (set_file_variables): Don't include a prerequisite in $+, $^ or $? if d->ignore_mtime. Define $|. 2002-06-18 Paul D. Smith * make.texinfo: Updates for next revision. New date/rev/etc. Recreate all Info menus. Change license on the manual to the GNU Free Documentation License. A number of typos. (Variables Simplify): Don't use "-" before it's defined. (Automatic Prerequisites): Rewrite the target example to work properly if the compile fails. Remove incorrect comments about how "set -e" behaves. (Text Functions): Move the "word", "wordlist", "words", and "firstword" functions here, from "File Name Functions". * make-stds.texi: Update from latest GNU version. * fdl.texi: (created) Import the latest GNU version. 2002-06-06 Paul D. Smith * variable.c (do_variable_definition): New function: extract the part of try_variable_definition() that actually sets the value into a separate function. (try_variable_definition): Call do_variable_definition() after parsing the variable definition string. (define_variable_in_set): Make the name argument const. * variable.h (enum variable_flavor): Make public. (do_variable_definition): Create prototype. * read.c (read_all_makefiles): Create a new built-in variable, MAKEFILE_LIST. (read_makefile): Add each makefile read in to this variable value. 2002-05-18 Eli Zaretskii * Makefile.DOS.template: Tweak according to changes in the distribution. Add back the dependencies of *.o files. * configh.dos.template: Synchronize with config.h.in. 2002-05-09 Paul D. Smith * file.c (file_timestamp_now): Use K&R function declaration. * getloadavg.c (getloadavg): Merge setlocale() fix from sh-utils getloadavg.c. Autoconf thinks QNX is SVR4-like, but it isn't, so #undef it. Remove predefined setup of NLIST_STRUCT. Decide whether to include nlist.h based on HAVE_NLIST_H. Change obsolete NLIST_NAME_UNION to new HAVE_STRUCT_NLIST_N_UN_N_NAME. * configure.in (NLIST_STRUCT): Define this if we have nlist.h and nlist.n_name is a pointer rather than an array. * acinclude.m4 (make_FUNC_SETVBUF_REVERSED): Grab the latest version of AC_FUNC_SETVBUF_REVERSED from autoconf CVS. * configure.in: Use it instead of the old version. * main.c (main): Prefer setvbuf() to setlinebuf(). 2002-05-08 Paul D. Smith * Makefile.am (make_LDADD): Add GETLOADAVG_LIBS. (loadavg_LDADD): Ditto. 2002-04-29 Paul D. Smith * expand.c (recursively_expand_for_file): Rename recursively_expand() to recursively_expand_for_file() and provide an extra argument, struct file. If the argument is provided, set the variable scope to that of the file before expanding. 
* variable.h (recursively_expand): Make this a macro that invokes recursively_expand_for_file() with a NULL file pointer. * variable.c (target_environment): Call the renamed function and provide the current file context. Fixes Debian bug #144306. 2002-04-28 Paul D. Smith Allow $(call ...) user-defined variables to be self-referencing without throwing an error. Allows implementation of transitive closures, among other possibly useful things. Requested by: Philip Guenther * variable.h (struct variable): Add a new field: exp_count, and new macros to hold its size and maximum value. (warn_undefined): Make this a macro. * variable.c (define_variable_in_set): Initialize it. * expand.c (recursively_expand): If we detect recursive expansion of a variable, check the exp_count field. If it's greater than 0 allow the recursion and decrement the count. (warn_undefined): Remove this (now a macro in variable.h). * function.c (func_call): Before we expand the user-defined function, modify its exp_count field to contain the maximum number of recursive calls we'll allow. After the call, reset it to 0. 2002-04-21 Paul D. Smith Modified to use latest autoconf (2.53), automake (1.6.1), and gettext (0.11.1). We're using gettext's new "external" support, to avoid including libintl source with GNU make. * README.cvs: New file. Explain how to build GNU make from CVS. * configure.in: Modify checking for the system glob library. Use AC_EGREP_CPP instead of AC_TRY_CPP. Remove the setting of GLOBDIR (we will always put "glob" in SUBDIRS, so automake etc. will manage it correctly). Set an automake conditional USE_LOCAL_GLOB to decide whether to compile the glob library. * getloadavg.c (main): Include make.h in the "TEST" program to avoid warnings. * Makefile.am: Remove special rules for loadavg. Replace them with Automake capabilities for building extra programs. * signame.c: This file does nothing if the system provides strsignal(). If not, it implements strsignal(). If the system doesn't define sys_siglist, then we make our own; otherwise we use the system version. * signame.h: Removed. * main.c (main): No need to invoke signame_init(). Update copyright. * ABOUT-NLS: Removed. * gettext.c: Removed. * gettext.h: Get a simplified copy from the gettext package. * po/*: Created. * i18n/*.po: Moved to po/. * i18n/: Removed. * config/*: Created. Contains package configuration helper files. * config.guess, config.sub: Moved to config directory. * configure.in (AC_CONFIG_FILES): Add po/Makefile.in, config/Makefile. Rework to use new-style autoconf features. Use the "external" mode for gettext. Make the build.sh config file conditional on whether build.sh.in exists, to avoid autoconf errors. * acinclude.m4: Removed almost all macros as being obsolete. Rewrote remaining macros to use AC_DEFINE. * acconfig.h: Removed. * Makefile.am (EXTRA_DIST): Add config/config.rpath. Use a conditional to handle customs support. Remove special handling for i18n features. 2002-04-20 Paul D. Smith * function.c (func_call): Don't mark the argument variables $1, etc. as recursive. They've already been fully expanded so there's no need to do it again, and doing so strips escaped $'s. Reported by Sebastian Glita . * remake.c (notice_finished_file): Walk through double-colon entries via the prev field, not the next field! Reported by Greg McGary . * main.c (main): If the user specifies -q and asks for a specific target which is a makefile, we got an assert. In that case it turns out we should continue normally instead.
* i18n/de.po, i18n/fr.po: Installed an updated translation. * i18n/he.po: Installed a new translation. 2002-01-07 Paul D. Smith * i18n/es.po, i18n/ru.po: Installed an updated translation. 2001-12-04 Paul D. Smith * i18n/ja.po: Installed an updated translation. 2001-09-06 Paul Eggert * configure.in (AC_CHECK_HEADERS): Add sys/resource.h. (AC_CHECK_FUNCS): Add getrlimit, setrlimit. * main.c: Include <sys/resource.h> if it, getrlimit, and setrlimit are available. (main): Get rid of any avoidable limit on stack size. 2001-09-04 Paul D. Smith * i18n/da.po: Installed an updated translation. 2001-08-03 Paul D. Smith * i18n/fr.po: Installed an updated translation. Resolves Debian bug #106720. 2001-06-13 Paul D. Smith * i18n/da.po, configure.in (ALL_LINGUAS): Installed a new translation. 2001-06-11 Paul D. Smith * i18n/ko.po: Installed a new translation. 2001-05-06 Paul D. Smith Modify the EINTR handling. * job.c (new_job): Reorganize the jobserver algorithm. Reorder the way in which we manage the file descriptor/signal handler race trap to be more efficient. 2001-05-06 Paul Eggert Restart almost all system calls that are interrupted, instead of worrying about EINTR. The lone exception is the read() for job tokens. * configure.in (HAVE_SA_RESTART): New macro. (MAKE_JOBSERVER): Define to 1 only if HAVE_SA_RESTART. * main.c (main): Use SA_RESTART instead of the old, nonstandard SA_INTERRUPT. * configure.in (AC_CHECK_FUNCS): Add bsd_signal. * main.c (bsd_signal): New function or macro, if the implementation doesn't supply it. (The bsd_signal function will be in POSIX 1003.1-200x.) (HANDLESIG): Remove. (main, FATAL_SIG): Use bsd_signal instead of signal or HANDLESIG. * make.h (EINTR_SET): Remove. (SA_RESTART): New macro. * arscan.c (ar_member_touch): Don't worry about EINTR. * function.c (func_shell): Likewise. * job.c (reap_children, free_child, new_job): Likewise. * main.c (main): Likewise. * remake.c (touch_file, name_mtime): Likewise. * arscan.c (ar_member_touch): Fix bug uncovered by EINTR removal; if fstat failed with errno!=EINTR, the error was ignored. * job.c (set_child_handler_action_flags): New function. (new_job): Use it to temporarily clear the SIGCHLD action flags while reading the token. 2001-05-02 Paul D. Smith * job.c (start_job_command): Don't add define/endef per-line flags to the top-level flags setting. 2001-04-03 Paul D. Smith * arscan.c (VMS_get_member_info,ar_scan) [VMS]: VMS sets the low bit on error, so check for odd return values, not non-0 return values. (VMS_get_member_info): Calculate the timezone differences correctly. Reported by John Fowler . 2001-03-14 Paul D. Smith * variable.c (lookup_variable) [VMS]: Null-terminate the variable value before invoking define_variable(). Reported by John Fowler . 2001-02-07 Paul D. Smith * read.c (record_target_var): If we reset the variable due to a command-line variable setting overriding it, turn off the "append" flag. 2001-01-17 Paul D. Smith * variable.c (lookup_variable) [VMS]: When getting values from the environment, allocate enough space for the _value_ plus escapes, not enough space for the name plus escapes :-/. Reported by John Fowler . * remake.c (f_mtime): Removed the "***" prefix from the mod time warnings that make generates, so it doesn't look like an error. Reported by Karl Berry . Fix for PR/2020: Rework appended target-specific variables. I'm fairly confident this algorithm is finally correct. * expand.c (allocated_variable_append): Rewrite.
Instead of expanding each appended variable then adding all the expanded strings together, we append all the unexpanded values going up through the variable set contexts, then expand the final result. This behaves just like non-target-specific appended variable values, while the old way didn't in various corner cases. (variable_append): New function: recursively append the unexpanded value of a variable, walking from the outermost variable scope to the innermost. * variable.c (lookup_variable): Remove the code that looked up the variable set list if the found variable was "append". We don't need this anymore. (lookup_variable_in_set): Make this non-static so we can use it elsewhere. (try_variable_definition): Use lookup_variable_in_set() rather than faking out current_variable_set_list by hand (cleanup). * variable.h: Add a prototype for the now non-static lookup_variable_in_set(). 2000-11-17 Paul D. Smith * remake.c (f_mtime) [WINDOWS32]: On various advice, I changed the WINDOWS32 port to assume timestamps can be up to 3 seconds away before throwing a fit. 2000-11-17 Paul D. Smith * read.c (readline): CRLF calculations had a hole, if you hit the buffer grow scenario just right. Reworked the algorithm to avoid the need for len or lastlen at all. Problem description with sample code changes provided by Chris Faylor . 2000-10-24 Paul D. Smith * gettext.c (SWAP): Declare this with the prototype, otherwise some systems don't work (non-32-bit? Reported for Cray T3E). Reported by Thorstein Thorsteinsson . 2000-10-05 Paul D. Smith * acinclude.m4 (AM_LC_MESSAGES): Remove undefined macro AM_LC_MESSAGES; it doesn't seem to do anything anyway?? * i18n/gl.po, configure.in (ALL_LINGUAS): New Galician translation. 2000-09-22 Paul D. Smith * gettext.c: Don't #define _GETTEXT_H here; we only include some parts of the real gettext.h here, and we expect to really include the real gettext.h later. If we keep this #define, it's ignored. 2000-09-21 Paul D. Smith * main.c (log_working_directory): Rework the text to use complete sentences, to make life simpler for the translators. 2000-08-29 Paul D. Smith * file.c (remove_intermediates): Print a debug message before we remove intermediate files, so the user (if she uses -d) knows what's going on. 2000-08-21 Paul D. Smith * variable.c (try_variable_definition): Change how we handle target-specific append variable defns: instead of just setting the value, expand it as an append _but_ only within the current target's context. Otherwise we lose all but the last value if the variable is appended more than once within the current target context. Fixes PR/1831. 2000-08-16 Paul D. Smith * function.c (func_shell): Nul-terminate the buffer before printing an exec error message (just in case it's not!). Fixes PR/1860, reported by Joey Hess . 2000-07-25 Paul D. Smith * job.c (construct_command_argv_internal): Add "~" to the list of sh_chars[] which disallow optimizing out the shell call. 2000-07-23 Paul Eggert * NEWS, make.texinfo: Document .LOW_RESOLUTION_TIME, which supersedes --disable-nsec-timestamps. * make.texinfo: Consistently use "time stamp" instead of "timestamp". * README: Remove --disable-nsec-timestamps. * filedef.h (struct file.low_resolution_time): New member. * file.c (snap_deps): Add support for .LOW_RESOLUTION_TIME. * remake.c (update_file_1): Avoid spurious rebuilds due to low resolution time stamps, generalizing the earlier code that applied only to archive members. (f_mtime): Archive members always have low resolution time stamps.
* configure.in: Remove --disable-nsec-timestamps, as this has been superseded by .LOW_RESOLUTION_TIME. 2000-07-23 Paul Eggert * configure.in (enable_nsec_timestamps): Renamed from make_cv_nsec_timestamps, since enable/disable options shouldn't be cached. 2000-07-23 Bruno Haible and Paul Eggert * file.c (file_timestamp_now): Use preprocessor-time check for FILE_TIMESTAMP_HI_RES so that clock_gettime is not linked unless needed. * filedef.h (FILE_TIMESTAMP_HI_RES): Remove definition; "configure" now does this. * configure.in (jm_AC_TYPE_UINTMAX_T): Move up, to before high resolution file timestamp check, since that check now uses uintmax_t. (FILE_TIMESTAMP_HI_RES): Define to nonzero if the code should use high resolution file timestamps. (HAVE_CLOCK_GETTIME): Do not define if !FILE_TIMESTAMP_HI_RES, so that we don't link in clock_gettime unnecessarily. 2000-07-17 Paul D. Smith * i18n/ja.po: New version of the translation file. 2000-07-07 Paul D. Smith * remake.c (f_mtime): If NO_FLOAT is defined, don't bother with the offset calculation. (name_mtime): Replace EINTR test with EINTR_SET macro. 2000-07-07 Paul Eggert Fix for PR/1811: * remake.c (update_file_1): Avoid spurious rebuilds of archive members due to their timestamp resolution being only one second. (f_mtime): Avoid spurious warnings of timestamps in the future due to the clock's resolution being lower than file timestamps'. When warning about future timestamps, report only the discrepancy, not the absolute value of the timestamp and the current time. * file.c (file_timestamp_now): New arg RESOLUTION. * filedef.h (file_timestamp_now): Likewise. (FILE_TIMESTAMP_NS): Now returns int. All uses changed. 2000-07-05 Paul D. Smith * variable.c (lookup_variable) [VMS]: Remove vestigial references to listp. Fixes PR/1793. 2000-06-26 Paul Eggert * Makefile.am (MAINTAINERCLEANFILES): New macro, with stamp-pot in it. * dir.c (vms_hash): Ensure ctype macro args are nonnegative. * remake.c (f_mtime): Remove unused var memtime. 2000-06-25 Martin Buchholz * make.texinfo, NEWS, TODO.private: Minor spelling corrections. Ran spell-check on make.texinfo. 2000-06-23 Paul D. Smith * main.c (main): Replace EXIT_SUCCESS, EXIT_FAILURE, and EXIT_TROUBLE with MAKE_SUCCESS, MAKE_FAILURE, and MAKE_TROUBLE. * make.h: Define these macros. * Version 3.79.1 released. * configure.in: Add a new option, --disable-nsec-timestamps, to avoid using sub-second timestamps on systems that support it. It can lead to problems, e.g. if your makefile relies on "cp -p". * README.template: Document the issue with "cp -p". * config.guess, config.sub: Updated. See ChangeLog.2, available in the CVS repository at: http://savannah.gnu.org/cvs/?group=make for earlier changes. Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. This file is part of GNU Make. GNU Make is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version. GNU Make is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with GNU Make; see the file COPYING. If not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 
make-doc-non-dfsg-3.81.orig/INSTALL0000644000175000017500000002243210416557457017121 0ustar srivastasrivastaInstallation Instructions ************************* Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004, 2005 Free Software Foundation, Inc. This file is free documentation; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. Basic Installation ================== These are generic installation instructions. The `configure' shell script attempts to guess correct values for various system-dependent variables used during compilation. It uses those values to create a `Makefile' in each directory of the package. It may also create one or more `.h' files containing system-dependent definitions. Finally, it creates a shell script `config.status' that you can run in the future to recreate the current configuration, and a file `config.log' containing compiler output (useful mainly for debugging `configure'). It can also use an optional file (typically called `config.cache' and enabled with `--cache-file=config.cache' or simply `-C') that saves the results of its tests to speed up reconfiguring. (Caching is disabled by default to prevent problems with accidental use of stale cache files.) If you need to do unusual things to compile the package, please try to figure out how `configure' could check whether to do them, and mail diffs or instructions to the address given in the `README' so they can be considered for the next release. If you are using the cache, and at some point `config.cache' contains results you don't want to keep, you may remove or edit it. The file `configure.ac' (or `configure.in') is used to create `configure' by a program called `autoconf'. You only need `configure.ac' if you want to change it or regenerate `configure' using a newer version of `autoconf'. The simplest way to compile this package is: 1. `cd' to the directory containing the package's source code and type `./configure' to configure the package for your system. If you're using `csh' on an old version of System V, you might need to type `sh ./configure' instead to prevent `csh' from trying to execute `configure' itself. Running `configure' takes awhile. While running, it prints some messages telling which features it is checking for. 2. Type `make' to compile the package. 3. Optionally, type `make check' to run any self-tests that come with the package. 4. Type `make install' to install the programs and any data files and documentation. 5. You can remove the program binaries and object files from the source code directory by typing `make clean'. To also remove the files that `configure' created (so you can compile the package for a different kind of computer), type `make distclean'. There is also a `make maintainer-clean' target, but that is intended mainly for the package's developers. If you use it, you may have to get all sorts of other programs in order to regenerate files that came with the distribution. Compilers and Options ===================== Some systems require unusual options for compilation or linking that the `configure' script does not know about. Run `./configure --help' for details on some of the pertinent environment variables. You can give `configure' initial values for configuration parameters by setting variables in the command line or in the environment. Here is an example: ./configure CC=c89 CFLAGS=-O2 LIBS=-lposix *Note Defining Variables::, for more details. 
Compiling For Multiple Architectures ==================================== You can compile the package for more than one kind of computer at the same time, by placing the object files for each architecture in their own directory. To do this, you must use a version of `make' that supports the `VPATH' variable, such as GNU `make'. `cd' to the directory where you want the object files and executables to go and run the `configure' script. `configure' automatically checks for the source code in the directory that `configure' is in and in `..'. If you have to use a `make' that does not support the `VPATH' variable, you have to compile the package for one architecture at a time in the source code directory. After you have installed the package for one architecture, use `make distclean' before reconfiguring for another architecture. Installation Names ================== By default, `make install' installs the package's commands under `/usr/local/bin', include files under `/usr/local/include', etc. You can specify an installation prefix other than `/usr/local' by giving `configure' the option `--prefix=PREFIX'. You can specify separate installation prefixes for architecture-specific files and architecture-independent files. If you pass the option `--exec-prefix=PREFIX' to `configure', the package uses PREFIX as the prefix for installing programs and libraries. Documentation and other data files still use the regular prefix. In addition, if you use an unusual directory layout you can give options like `--bindir=DIR' to specify different values for particular kinds of files. Run `configure --help' for a list of the directories you can set and what kinds of files go in them. If the package supports it, you can cause programs to be installed with an extra prefix or suffix on their names by giving `configure' the option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'. Optional Features ================= Some packages pay attention to `--enable-FEATURE' options to `configure', where FEATURE indicates an optional part of the package. They may also pay attention to `--with-PACKAGE' options, where PACKAGE is something like `gnu-as' or `x' (for the X Window System). The `README' should mention any `--enable-' and `--with-' options that the package recognizes. For packages that use the X Window System, `configure' can usually find the X include and library files automatically, but if it doesn't, you can use the `configure' options `--x-includes=DIR' and `--x-libraries=DIR' to specify their locations. Specifying the System Type ========================== There may be some features `configure' cannot figure out automatically, but needs to determine by the type of machine the package will run on. Usually, assuming the package is built to be run on the _same_ architectures, `configure' can figure that out, but if it prints a message saying it cannot guess the machine type, give it the `--build=TYPE' option. TYPE can either be a short name for the system type, such as `sun4', or a canonical name which has the form: CPU-COMPANY-SYSTEM where SYSTEM can have one of these forms: OS KERNEL-OS See the file `config.sub' for the possible values of each field. If `config.sub' isn't included in this package, then this package doesn't need to know the machine type. If you are _building_ compiler tools for cross-compiling, you should use the option `--target=TYPE' to select the type of system they will produce code for. 
If you want to _use_ a cross compiler, that generates code for a platform different from the build platform, you should specify the "host" platform (i.e., that on which the generated programs will eventually be run) with `--host=TYPE'. Sharing Defaults ================ If you want to set default values for `configure' scripts to share, you can create a site shell script called `config.site' that gives default values for variables like `CC', `cache_file', and `prefix'. `configure' looks for `PREFIX/share/config.site' if it exists, then `PREFIX/etc/config.site' if it exists. Or, you can set the `CONFIG_SITE' environment variable to the location of the site script. A warning: not all `configure' scripts look for a site script. Defining Variables ================== Variables not defined in a site shell script can be set in the environment passed to `configure'. However, some packages may run configure again during the build, and the customized values of these variables may be lost. In order to avoid this problem, you should set them in the `configure' command line, using `VAR=value'. For example: ./configure CC=/usr/local2/bin/gcc causes the specified `gcc' to be used as the C compiler (unless it is overridden in the site shell script). Here is another example: /bin/bash ./configure CONFIG_SHELL=/bin/bash Here the `CONFIG_SHELL=/bin/bash' operand causes subsequent configuration-related scripts to be executed by `/bin/bash'. `configure' Invocation ====================== `configure' recognizes the following options to control how it operates. `--help' `-h' Print a summary of the options to `configure', and exit. `--version' `-V' Print the version of Autoconf used to generate the `configure' script, and exit. `--cache-file=FILE' Enable the cache: use and save the results of the tests in FILE, traditionally `config.cache'. FILE defaults to `/dev/null' to disable caching. `--config-cache' `-C' Alias for `--cache-file=config.cache'. `--quiet' `--silent' `-q' Do not print messages saying which checks are being made. To suppress all normal output, redirect it to `/dev/null' (any error messages will still be shown). `--srcdir=DIR' Look for the package's source code in directory DIR. Usually `configure' can determine that directory automatically. `configure' also accepts some other, not widely useful, options. Run `configure --help' for more details. make-doc-non-dfsg-3.81.orig/Makefile.am0000644000175000017500000000246510416557457020130 0ustar srivastasrivasta# This is a -*-Makefile-*-, or close enough # # Copyright (C) 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 # Free Software Foundation, Inc. # This file is part of GNU Make. # # GNU Make is free software; you can redistribute it and/or modify it under the # terms of the GNU General Public License as published by the Free Software # Foundation; either version 2, or (at your option) any later version. # # GNU Make is distributed in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR # A PARTICULAR PURPOSE. See the GNU General Public License for more details. # # You should have received a copy of the GNU General Public License along with # GNU Make; see the file COPYING. If not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. SUBDIRS = doc # Extra stuff to include in the distribution.
EXTRA_DIST = README # Forward targets html: cd doc && $(MAKE) $(AM_MAKEFLAGS) $@ .PHONY: html # --------------- Internationalization Section localedir = $(datadir)/locale # --------------- Maintainer's Section # Tell automake that I haven't forgotten about this file and it will be # created before we build a distribution (see maintMakefile in the CVS # distribution). README: make-doc-non-dfsg-3.81.orig/Makefile.in0000644000175000017500000004466010416557457020144 0ustar srivastasrivasta# Makefile.in generated by automake 1.9.6 from Makefile.am. # @configure_input@ # Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, # 2003, 2004, 2005 Free Software Foundation, Inc. # This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ # This is a -*-Makefile-*-, or close enough # # Copyright (C) 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 # Free Software Foundation, Inc. # This file is part of GNU Make. # # GNU Make is free software; you can redistribute it and/or modify it under the # terms of the GNU General Public License as published by the Free Software # Foundation; either version 2, or (at your option) any later version. # # GNU Make is distributed in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR # A PARTICULAR PURPOSE. See the GNU General Public License for more details. # # You should have received a copy of the GNU General Public License along with # GNU Make; see the file COPYING. If not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. srcdir = @srcdir@ top_srcdir = @top_srcdir@ VPATH = @srcdir@ pkgdatadir = $(datadir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ top_builddir = . am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd INSTALL = @INSTALL@ install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : subdir = . 
DIST_COMMON = README $(am__configure_deps) $(srcdir)/Makefile.am \ $(srcdir)/Makefile.in $(top_srcdir)/configure ABOUT-NLS \ AUTHORS COPYING ChangeLog INSTALL NEWS install-sh missing ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/configure.in am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \ configure.lineno configure.status.lineno mkinstalldirs = $(install_sh) -d CONFIG_CLEAN_FILES = SOURCES = DIST_SOURCES = RECURSIVE_TARGETS = all-recursive check-recursive dvi-recursive \ html-recursive info-recursive install-data-recursive \ install-exec-recursive install-info-recursive \ install-recursive installcheck-recursive installdirs-recursive \ pdf-recursive ps-recursive uninstall-info-recursive \ uninstall-recursive ETAGS = etags CTAGS = ctags DIST_SUBDIRS = $(SUBDIRS) DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) distdir = $(PACKAGE)-$(VERSION) top_distdir = $(distdir) am__remove_distdir = \ { test ! -d $(distdir) \ || { find $(distdir) -type d ! -perm -200 -exec chmod u+w {} ';' \ && rm -fr $(distdir); }; } DIST_ARCHIVES = $(distdir).tar.gz GZIP_ENV = --best distuninstallcheck_listfiles = find . -type f -print distcleancheck_listfiles = find . -type f -print ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LTLIBOBJS = @LTLIBOBJS@ MAINT = @MAINT@ MAINTAINER_MODE_FALSE = @MAINTAINER_MODE_FALSE@ MAINTAINER_MODE_TRUE = @MAINTAINER_MODE_TRUE@ MAKEINFO = @MAKEINFO@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PERL = @PERL@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ ac_ct_STRIP = @ac_ct_STRIP@ am__leading_dot = @am__leading_dot@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build_alias = @build_alias@ datadir = @datadir@ exec_prefix = @exec_prefix@ host_alias = @host_alias@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ prefix = @prefix@ program_transform_name = @program_transform_name@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ SUBDIRS = doc # Extra stuff to include in the distribution. # Note we need all the glob stuff here, rather than in glob/Makefile.am, # because often that directory isn't built on the systems used by the # maintainers. 
EXTRA_DIST = README # This is built during configure, but behind configure's back DISTCLEANFILES = build.sh # --------------- Internationalization Section localedir = $(datadir)/locale all: all-recursive .SUFFIXES: am--refresh: @: $(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ echo ' cd $(srcdir) && $(AUTOMAKE) --gnu '; \ cd $(srcdir) && $(AUTOMAKE) --gnu \ && exit 0; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu Makefile'; \ cd $(top_srcdir) && \ $(AUTOMAKE) --gnu Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ echo ' $(SHELL) ./config.status'; \ $(SHELL) ./config.status;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) $(SHELL) ./config.status --recheck $(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) cd $(srcdir) && $(AUTOCONF) $(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) cd $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS) uninstall-info-am: # This directory's subdirectories are mostly independent; you can cd # into them and run `make' without going through this Makefile. # To change the values of `make' variables: instead of editing Makefiles, # (1) if the variable is set in `config.status', edit `config.status' # (which will cause the Makefiles to be regenerated when you run `make'); # (2) otherwise, pass the desired values on the `make' command line. $(RECURSIVE_TARGETS): @failcom='exit 1'; \ for f in x $$MAKEFLAGS; do \ case $$f in \ *=* | --[!k]*);; \ *k*) failcom='fail=yes';; \ esac; \ done; \ dot_seen=no; \ target=`echo $@ | sed s/-recursive//`; \ list='$(SUBDIRS)'; for subdir in $$list; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ dot_seen=yes; \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done; \ if test "$$dot_seen" = "no"; then \ $(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \ fi; test -z "$$fail" mostlyclean-recursive clean-recursive distclean-recursive \ maintainer-clean-recursive: @failcom='exit 1'; \ for f in x $$MAKEFLAGS; do \ case $$f in \ *=* | --[!k]*);; \ *k*) failcom='fail=yes';; \ esac; \ done; \ dot_seen=no; \ case "$@" in \ distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \ *) list='$(SUBDIRS)' ;; \ esac; \ rev=''; for subdir in $$list; do \ if test "$$subdir" = "."; then :; else \ rev="$$subdir $$rev"; \ fi; \ done; \ rev="$$rev ."; \ target=`echo $@ | sed s/-recursive//`; \ for subdir in $$rev; do \ echo "Making $$target in $$subdir"; \ if test "$$subdir" = "."; then \ local_target="$$target-am"; \ else \ local_target="$$target"; \ fi; \ (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \ || eval $$failcom; \ done && test -z "$$fail" tags-recursive: list='$(SUBDIRS)'; for subdir in $$list; do \ test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \ done ctags-recursive: list='$(SUBDIRS)'; for subdir in $$list; do \ test "$$subdir" = . 
|| (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) ctags); \ done ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES) list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | \ $(AWK) ' { files[$$0] = 1; } \ END { for (i in files) print i; }'`; \ mkid -fID $$unique tags: TAGS TAGS: tags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ $(TAGS_FILES) $(LISP) tags=; \ here=`pwd`; \ if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \ include_option=--etags-include; \ empty_fix=.; \ else \ include_option=--include; \ empty_fix=; \ fi; \ list='$(SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test ! -f $$subdir/TAGS || \ tags="$$tags $$include_option=$$here/$$subdir/TAGS"; \ fi; \ done; \ list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | \ $(AWK) ' { files[$$0] = 1; } \ END { for (i in files) print i; }'`; \ if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$tags $$unique; \ fi ctags: CTAGS CTAGS: ctags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ $(TAGS_FILES) $(LISP) tags=; \ here=`pwd`; \ list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | \ $(AWK) ' { files[$$0] = 1; } \ END { for (i in files) print i; }'`; \ test -z "$(CTAGS_ARGS)$$tags$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$tags $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && cd $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) $$here distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags distdir: $(DISTFILES) $(am__remove_distdir) mkdir $(distdir) @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \ list='$(DISTFILES)'; for file in $$list; do \ case $$file in \ $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \ $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \ esac; \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \ if test "$$dir" != "$$file" && test "$$dir" != "."; then \ dir="/$$dir"; \ $(mkdir_p) "$(distdir)$$dir"; \ else \ dir=''; \ fi; \ if test -d $$d/$$file; then \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \ fi; \ cp -pR $$d/$$file $(distdir)$$dir || exit 1; \ else \ test -f $(distdir)/$$file \ || cp -p $$d/$$file $(distdir)/$$file \ || exit 1; \ fi; \ done list='$(DIST_SUBDIRS)'; for subdir in $$list; do \ if test "$$subdir" = .; then :; else \ test -d "$(distdir)/$$subdir" \ || $(mkdir_p) "$(distdir)/$$subdir" \ || exit 1; \ distdir=`$(am__cd) $(distdir) && pwd`; \ top_distdir=`$(am__cd) $(top_distdir) && pwd`; \ (cd $$subdir && \ $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$$top_distdir" \ distdir="$$distdir/$$subdir" \ distdir) \ || exit 1; \ fi; \ done -find $(distdir) -type d ! -perm -777 -exec chmod a+rwx {} \; -o \ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \ ! -type d ! 
-perm -444 -exec $(SHELL) $(install_sh) -c -m a+r {} {} \; \ || chmod -R a+r $(distdir) dist-gzip: distdir tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz $(am__remove_distdir) dist-bzip2: distdir tardir=$(distdir) && $(am__tar) | bzip2 -9 -c >$(distdir).tar.bz2 $(am__remove_distdir) dist-tarZ: distdir tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z $(am__remove_distdir) dist-shar: distdir shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz $(am__remove_distdir) dist-zip: distdir -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) $(am__remove_distdir) dist dist-all: distdir tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz $(am__remove_distdir) # This target untars the dist file and tries a VPATH configuration. Then # it guarantees that the distribution is self-contained by making another # tarfile. distcheck: dist case '$(DIST_ARCHIVES)' in \ *.tar.gz*) \ GZIP=$(GZIP_ENV) gunzip -c $(distdir).tar.gz | $(am__untar) ;;\ *.tar.bz2*) \ bunzip2 -c $(distdir).tar.bz2 | $(am__untar) ;;\ *.tar.Z*) \ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\ *.shar.gz*) \ GZIP=$(GZIP_ENV) gunzip -c $(distdir).shar.gz | unshar ;;\ *.zip*) \ unzip $(distdir).zip ;;\ esac chmod -R a-w $(distdir); chmod a+w $(distdir) mkdir $(distdir)/_build mkdir $(distdir)/_inst chmod a-w $(distdir) dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \ && cd $(distdir)/_build \ && ../configure --srcdir=.. --prefix="$$dc_install_base" \ $(DISTCHECK_CONFIGURE_FLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) dvi \ && $(MAKE) $(AM_MAKEFLAGS) check \ && $(MAKE) $(AM_MAKEFLAGS) install \ && $(MAKE) $(AM_MAKEFLAGS) installcheck \ && $(MAKE) $(AM_MAKEFLAGS) uninstall \ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \ distuninstallcheck \ && chmod -R a-w "$$dc_install_base" \ && ({ \ (cd ../.. && umask 077 && mkdir "$$dc_destdir") \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \ } || { rm -rf "$$dc_destdir"; exit 1; }) \ && rm -rf "$$dc_destdir" \ && $(MAKE) $(AM_MAKEFLAGS) dist \ && rm -rf $(DIST_ARCHIVES) \ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck $(am__remove_distdir) @(echo "$(distdir) archives ready for distribution: "; \ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \ sed -e '1{h;s/./=/g;p;x;}' -e '$${p;x;}' distuninstallcheck: @cd $(distuninstallcheck_dir) \ && test `$(distuninstallcheck_listfiles) | wc -l` -le 1 \ || { echo "ERROR: files left after uninstall:" ; \ if test -n "$(DESTDIR)"; then \ echo " (check DESTDIR support)"; \ fi ; \ $(distuninstallcheck_listfiles) ; \ exit 1; } >&2 distcleancheck: distclean @if test '$(srcdir)' = . 
; then \ echo "ERROR: distcleancheck can only run from a VPATH build" ; \ exit 1 ; \ fi @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left in build directory after distclean:" ; \ $(distcleancheck_listfiles) ; \ exit 1; } >&2 check-am: all-am check: check-recursive all-am: Makefile installdirs: installdirs-recursive installdirs-am: install: install-recursive install-exec: install-exec-recursive install-data: install-data-recursive uninstall: uninstall-recursive install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-recursive install-strip: $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ `test -z '$(STRIP)' || \ echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install mostlyclean-generic: clean-generic: distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test -z "$(DISTCLEANFILES)" || rm -f $(DISTCLEANFILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-recursive clean-am: clean-generic mostlyclean-am distclean: distclean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -f Makefile distclean-am: clean-am distclean-generic distclean-tags dvi: dvi-recursive dvi-am: info: info-recursive info-am: install-data-am: install-exec-am: install-info: install-info-recursive install-man: installcheck-am: maintainer-clean: maintainer-clean-recursive -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf $(top_srcdir)/autom4te.cache -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic mostlyclean: mostlyclean-recursive mostlyclean-am: mostlyclean-generic pdf: pdf-recursive pdf-am: ps: ps-recursive ps-am: uninstall-am: uninstall-info-am uninstall-info: uninstall-info-recursive .PHONY: $(RECURSIVE_TARGETS) CTAGS GTAGS all all-am am--refresh check \ check-am clean clean-generic clean-recursive ctags \ ctags-recursive dist dist-all dist-bzip2 dist-gzip dist-shar \ dist-tarZ dist-zip distcheck distclean distclean-generic \ distclean-recursive distclean-tags distcleancheck distdir \ distuninstallcheck dvi dvi-am html html-am info info-am \ install install-am install-data install-data-am install-exec \ install-exec-am install-info install-info-am install-man \ install-strip installcheck installcheck-am installdirs \ installdirs-am maintainer-clean maintainer-clean-generic \ maintainer-clean-recursive mostlyclean mostlyclean-generic \ mostlyclean-recursive pdf pdf-am ps ps-am tags tags-recursive \ uninstall uninstall-am uninstall-info-am # Forward targets html: cd doc && $(MAKE) $(AM_MAKEFLAGS) $@ .PHONY: html # --------------- Maintainer's Section # Tell automake that I haven't forgotten about this file and it will be # created before we build a distribution (see maintMakefile in the CVS # distribution). README: @MAINT_MAKEFILE@ # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: make-doc-non-dfsg-3.81.orig/NEWS0000644000175000017500000012360110416557457016567 0ustar srivastasrivastaGNU make NEWS -*-indented-text-*- History of user-visible changes. 1 April 2006 See the end of this file for copyrights and conditions. All changes mentioned here are more fully described in the GNU make manual, which is contained in this distribution as the file doc/make.texi. 
See the README file and the GNU make manual for instructions for reporting bugs. Version 3.81 * GNU make is ported to OS/2. * GNU make is ported to MinGW. The MinGW build is only supported by the build_w32.bat batch file; see the file README.W32 for more details. * WARNING: Future backward-incompatibility! Up to and including this release, the '$?' variable does not contain any prerequisite that does not exist, even though that prerequisite might have caused the target to rebuild. Starting with the _next_ release of GNU make, '$?' will contain all prerequisites that caused the target to be considered out of date. See this Savannah bug: http://savannah.gnu.org/bugs/index.php?func=detailitem&item_id=16051 * WARNING: Backward-incompatibility! GNU make now implements a generic "second expansion" feature on the prerequisites of both explicit and implicit (pattern) rules. In order to enable this feature, the special target '.SECONDEXPANSION' must be defined before the first target which takes advantage of it. If this feature is enabled then after all rules have been parsed the prerequisites are expanded again, this time with all the automatic variables in scope. This means that in addition to using standard SysV $$@ in prerequisites lists, you can also use complex functions such as $$(notdir $$@) etc. This behavior applies to implicit rules, as well, where the second expansion occurs when the rule is matched. However, this means that when '.SECONDEXPANSION' is enabled you must double-quote any "$" in your filenames; instead of "foo: boo$$bar" you now must write "foo: boo$$$$bar". Note that the SysV $$@ etc. feature, which used to be available by default, is now ONLY available when the .SECONDEXPANSION target is defined. If your makefiles take advantage of this SysV feature you will need to update them. * WARNING: Backward-incompatibility! In order to comply with POSIX, the way in which GNU make processes backslash-newline sequences in command strings has changed. If your makefiles use backslash-newline sequences inside of single-quoted strings in command scripts you will be impacted by this change. See the GNU make manual subsection "Splitting Command Lines" (node "Splitting Lines"), in section "Command Syntax", chapter "Writing the Commands in Rules", for details. * WARNING: Backward-incompatibility! Some previous versions of GNU make had a bug where "#" in a function invocation such as $(shell ...) was treated as a make comment. A workaround was to escape these with backslashes. This bug has been fixed: if your makefile uses "\#" in a function invocation the backslash is now preserved, so you'll need to remove it. * New command-line option: -L (--check-symlink-times). On systems that support symbolic links, if this option is given then GNU make will use the most recent modification time of any symbolic links that are used to resolve target files. The default behavior remains as it always has: use the modification time of the actual target file only. * The "else" conditional line can now be followed by any other valid conditional on the same line: this does not increase the depth of the conditional nesting, so only one "endif" is required to close the conditional. * All pattern-specific variables that match a given target are now used (previously only the first match was used). * Target-specific variables can be marked as exportable using the "export" keyword. * In a recursive $(call ...) context, any extra arguments from the outer call are now masked in the context of the inner call.
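  For illustration, here is a minimal sketch of the masking behavior just
  described (the variable names "inner" and "outer" are hypothetical,
  invented for this example):

      inner = inner saw '$(1)' and '$(2)'
      outer = $(call inner,$(1))

      all: ; @echo $(call outer,one,two)

  With this release the reference to $(2) inside "inner" expands to
  empty, because the extra argument "two" given to "outer" is masked;
  in earlier releases it leaked through and "inner" saw "two" as its
  second argument.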
* Implemented a solution for the "thundering herd" problem with "-j -l". This version of GNU make uses an algorithm suggested by Thomas Riedl to track the number of jobs started in the last second and artificially adjust GNU make's view of the system's load average accordingly. * New special variables available in this release: - .INCLUDE_DIRS: Expands to a list of directories that make searches for included makefiles. - .FEATURES: Contains a list of special features available in this version of GNU make. - .DEFAULT_GOAL: Set the name of the default goal make will use if no goals are provided on the command line. - MAKE_RESTARTS: If set, then this is the number of times this instance of make has been restarted (see "How Makefiles Are Remade" in the manual). - New automatic variable: $| (added in 3.80, actually): contains all the order-only prerequisites defined for the target. * New functions available in this release: - $(lastword ...) returns the last word in the list. This gives identical results as $(word $(words ...) ...), but is much faster. - $(abspath ...) returns the absolute path (all "." and ".." directories resolved, and any duplicate "/" characters removed) for each path provided. - $(realpath ...) returns the canonical pathname for each path provided. The canonical pathname is the absolute pathname, with all symbolic links resolved as well. - $(info ...) prints its arguments to stdout. No makefile name or line number info, etc. is printed. - $(flavor ...) returns the flavor of a variable. - $(or ...) provides a short-circuiting OR conditional: each argument is expanded. The first true (non-empty) argument is returned; no further arguments are expanded. Expands to empty if there are no true arguments. - $(and ...) provides a short-circuiting AND conditional: each argument is expanded. The first false (empty) argument is returned; no further arguments are expanded. Expands to the last argument if all arguments are true. * Changes made for POSIX compatibility: - Only touch targets (under -t) if they have at least one command. - Setting the SHELL make variable does NOT change the value of the SHELL environment variable given to programs invoked by make. As an enhancement to POSIX, if you export the make variable SHELL then it will be set in the environment, just as before. * On MS Windows systems, explicitly setting SHELL to a pathname ending in "cmd" or "cmd.exe" (case-insensitive) will force GNU make to use the DOS command interpreter in batch mode even if a UNIX-like shell could be found on the system. * On VMS there is now support for case-sensitive filesystems such as ODS5. See the readme.vms file for information. * Parallel builds (-jN) no longer require a working Bourne shell on Windows platforms. They work even with the stock Windows shells, such as cmd.exe and command.com. * Updated to autoconf 2.59, automake 1.9.5, and gettext 0.14.1. Users should not be impacted. * New translations for Swedish, Chinese (simplified), Ukrainian, Belarusian, Finnish, Kinyarwandan, and Irish. Many updated translations. A complete list of bugs fixed in this version is available here: http://savannah.gnu.org/bugs/index.php?group=make&report_id=111&fix_release_id=103 Version 3.80 * A new feature exists: order-only prerequisites. These prerequisites affect the order in which targets are built, but they do not impact the rebuild/no-rebuild decision of their dependents. 
That is to say, they allow you to require that target B be built before target A, without requiring that target A always be rebuilt when target B is updated.  Patch for this feature provided by Greg McGary.

* For compatibility with SysV make, GNU make now supports the peculiar syntax $$@, $$(@D), and $$(@F) in the prerequisite list of a rule.  This syntax is only valid within explicit and static pattern rules: it cannot be used in implicit (suffix or pattern) rules.  Edouard G. Parmelan provided a patch implementing this feature; however, I decided to implement it in a different way.

* The argument to the "ifdef" conditional is now expanded before it's tested, so it can be a constructed variable name.  Similarly, the arguments to "export" (when not used in a variable definition context) and "unexport" are also now expanded.

* A new function is defined: $(value ...).  The argument to this function is the _name_ of a variable.  The result of the function is the value of the variable, without having been expanded.

* A new function is defined: $(eval ...).  The arguments to this function should expand to makefile commands, which will then be evaluated as if they had appeared in the makefile.  In combination with define/endef multi-line variable definitions this is an extremely powerful capability.  The $(value ...) function is also sometimes useful here.  See the sketch at the end of this list for an example.

* A new built-in variable is defined: $(MAKEFILE_LIST).  It contains a list of each makefile GNU make has read, or started to read, in the order in which they were encountered.  So, the last filename in the list when a makefile is just being read (before any includes) is the name of the current makefile.

* A new built-in variable is defined: $(.VARIABLES).  When it is expanded it returns a complete list of variable names defined by all makefiles at that moment.

* A new command-line option is defined: -B or --always-make.  If specified, GNU make will consider all targets out of date even if they would otherwise not be.

* The arguments to $(call ...) functions were being stored in $1, $2, etc. as recursive variables, even though they are fully expanded before assignment.  This means that escaped dollar signs ($$ etc.) were not behaving properly.  Now the arguments are stored as simple variables.  This may mean that if you added extra escaping to your $(call ...) function arguments you will need to undo it now.

* The variable invoked by $(call ...) can now be recursive: unlike other variables it can reference itself and this will not produce an error when it is used as the first argument to $(call ...) (but only then).

* New pseudo-target .LOW_RESOLUTION_TIME, superseding the configure option --disable-nsec-timestamps.  You might need this if your build process depends on tools like "cp -p" preserving time stamps, since "cp -p" (right now) doesn't preserve the subsecond portion of a time stamp.

* Updated translations for French, Galician, German, Japanese, Korean, and Russian.  New translations for Croatian, Danish, Hebrew, and Turkish.

* Updated internationalization support to Gettext 0.11.5.  GNU make now uses Gettext's "external" feature, and does not include any internationalization code itself.  Configure will search your system for an existing implementation of GNU Gettext (only GNU Gettext is acceptable) and use it if it exists.  If not, NLS will be disabled.  See ABOUT-NLS for more information.

* Updated to autoconf 2.54 and automake 1.7.  Users should not be impacted.
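
As a sketch of the $(eval ...) plus define/endef combination described above: the macro name PROGRAM_template, the program names, and common.o are invented for this example, and the link step relies on make's built-in rules.

    PROGRAMS := client server

    all: $(PROGRAMS)

    # PROGRAM_template expands to the text of one complete rule.
    define PROGRAM_template
    $(1): $(1).o common.o
    endef

    # $(call ...) builds the rule text for each program; $(eval ...) then
    # parses that text exactly as if it had been written in the makefile.
    $(foreach prog,$(PROGRAMS),$(eval $(call PROGRAM_template,$(prog))))

Because $(call ...) fully expands $(1) before $(eval ...) runs, no extra escaping of the parameter reference is needed in this particular template.
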
A complete list of bugs fixed in this version is available here:

  http://savannah.gnu.org/bugs/index.php?group=make&report_id=111&fix_release_id=102

Version 3.79.1

* .SECONDARY with no prerequisites now prevents any target from being removed because make thinks it's an intermediate file, not just those listed in the makefile.

* New configure option --disable-nsec-timestamps; this was superseded in later versions by the .LOW_RESOLUTION_TIME pseudo-target.

Version 3.79

* GNU make optionally supports internationalization and locales via the GNU gettext (or local gettext if suitable) package.  See the ABOUT-NLS file for more information on configuring GNU make for NLS.

* Previously, GNU make quoted variables such as MAKEFLAGS and MAKEOVERRIDES for proper parsing by the shell.  This allowed them to be used within make build scripts.  However, using them there is not proper behavior: they are meant to be passed to subshells via the environment.  Unfortunately the values were not quoted properly to be passed through the environment.  This meant that make didn't properly pass some types of command-line values to submakes.  With this version we change that behavior: now these variables are quoted properly for passing through the environment, which is the correct way to do it.  If you previously used these variables explicitly within a make rule you may need to re-examine your use for correctness given this change.

* A new pseudo-target .NOTPARALLEL is available.  If defined, the current makefile is run serially regardless of the value of -j.  However, submakes are still eligible for parallel execution.

* The --debug option has changed: it now allows optional flags controlling the amount and type of debugging output.  By default only a minimal amount of information is generated, displaying the names of "normal" targets (not makefiles) that were deemed out of date and in need of being rebuilt.  Note that the -d option behaves as before: it takes no arguments and all debugging information is generated.

* The `-p' (print database) output now includes filename and line number information for variable definitions, to aid debugging.

* The wordlist function no longer reverses its arguments if the "start" value is greater than the "end" value.  If that's true, nothing is returned.

* Hartmut Becker provided many updates for the VMS port of GNU make.  See the readme.vms file for more details.

Version 3.78

* Two new functions, $(error ...) and $(warning ...), are available.  The former will cause make to fail and exit immediately upon expansion of the function, with the text provided as the error message.  The latter causes the text provided to be printed as a warning message, but make proceeds normally.

* A new function $(call ...) is available.  This allows users to create their own parameterized macros and invoke them later.  Original implementation of this function was provided by Han-Wen Nienhuys.

* A new function $(if ...) is available.  It provides if-then-else capabilities in a builtin function.  Original implementation of this function was provided by Han-Wen Nienhuys.

* Make defines a new variable, .LIBPATTERNS.  This variable controls how library dependency expansion (dependencies like ``-lfoo'') is performed.

* Make accepts CRLF sequences as well as traditional LF, for compatibility with makefiles created on other operating systems.

* Make accepts a new option: -R, or --no-builtin-variables.  This option disables the definition of the rule-specific builtin variables (CC, LD, AR, etc.).
Specifying this option forces -r (--no-builtin-rules) as well.

* A "job server" feature, suggested by Howard Chu.  On systems that support POSIX pipe(2) semantics, GNU make can now pass -jN options to submakes rather than forcing them all to use -j1.  The top make and all its sub-make processes use a pipe to communicate with each other to ensure that no more than N jobs are started across all makes.  To get the old behavior of -j back, you can configure make with the --disable-job-server option.

* The confusing term "dependency" has been replaced by the more accurate and standard term "prerequisite", both in the manual and in all GNU make output.

* GNU make supports the "big archive" library format introduced in AIX 4.3.

* GNU make supports large files on AIX, HP-UX, and IRIX.  These changes were provided by Paul Eggert.  (Large file support for Solaris and Linux was introduced in 3.77, but the configuration had issues: these have also been resolved).

* The Windows 95/98/NT (W32) version of GNU make now has native support for the Cygnus Cygwin release B20.1 shell (bash).

* The GNU make regression test suite, long available separately "under the table", has been integrated into the release.  You can invoke it by running "make check" in the distribution.  Note that it requires Perl (either Perl 4 or Perl 5) to run.

Version 3.77

* Implement BSD make's "?=" variable assignment operator.  The variable is assigned the specified value only if that variable is not already defined.

* Make defines a new variable, "CURDIR", to contain the current working directory (after the -C option, if any, has been processed).  Modifying this variable has no effect on the operation of make.

* Make defines a new default RCS rule, for new-style master file storage: ``% :: RCS/%'' (note no ``,v'' suffix).

* Make defines new default rules for DOS-style C++ file naming conventions, with ``.cpp'' suffixes.  All the same rules as for ``.cc'' and ``.C'' suffixes are provided, along with LINK.cpp and COMPILE.cpp macros (which default to the same value as LINK.cc and COMPILE.cc).  Note that CPPFLAGS is still C preprocessor flags!  You should use CXXFLAGS to change C++ compiler flags.

* A new feature, "target-specific variable values", has been added.  This is a large change so please see the appropriate sections of the manual for full details.  Briefly, syntax like this:

    TARGET: VARIABLE = VALUE

  defines VARIABLE as VALUE within the context of TARGET.  This is similar to SunOS make's "TARGET := VARIABLE = VALUE" feature.  Note that the assignment may be of any type, not just recursive, and that the override keyword is available.

  COMPATIBILITY: This new syntax means that if you have any rules where the first or second dependency has an equal sign (=) in its name, you'll have to escape it with a backslash: "foo : bar\=baz".  Further, if you have any dependencies which already contain "\=", you'll have to escape both characters: "foo : bar\\\=baz".

* A new appendix listing the most common error and warning messages generated by GNU make, with some explanation, has been added to the GNU make User's Manual.

* Updates to the GNU make Customs library support (see README.customs).

* Updates to the Windows 95/NT port from Rob Tulloh (see README.W32), and to the DOS port from Eli Zaretskii (see README.DOS).

Version 3.76.1

* Small (but serious) bug fix.  Quick rollout to get into the GNU source CD.

Version 3.76

* GNU make now uses automake to control Makefile.in generation.  This should make it more consistent with the GNU standards.
* VPATH functionality has been changed to incorporate the VPATH+ patch, previously maintained by Paul Smith.  See the manual.

* Make defines a new variable, `MAKECMDGOALS', to contain the goals that were specified on the command line, if any.  Modifying this variable has no effect on the operation of make.

* A new function, `$(wordlist S,E,TEXT)', is available: it returns a list of words from number S to number E (inclusive) of TEXT.

* Instead of an error, detection of future modification times gives a warning and continues.  The warning is repeated just before GNU make exits, so it is less likely to be lost.

* Fix the $(basename) and $(suffix) functions so they only operate on the last filename, not the entire string:

      Command              Old Result    New Result
      -------              ----------    ----------
      $(basename a.b)      a             a
      $(basename a.b/c)    a             a.b/c
      $(suffix a.b)        b             b
      $(suffix a.b/c)      b/c           (empty)

* The $(strip) function now removes newlines as well as TABs and spaces.

* The $(shell) function now changes CRLF (\r\n) pairs to a space as well as newlines (\n).

* Updates to the Windows 95/NT port from Rob Tulloh (see README.W32).

* Eli Zaretskii has updated the port to 32-bit protected mode on MSDOS and MS-Windows, building with the DJGPP v2 port of the GNU C/C++ compiler and utilities.  See README.DOS for details, and direct all questions concerning this port to Eli Zaretskii or DJ Delorie.

* John W. Eaton has updated the VMS port to support libraries and VPATH.

Version 3.75

* The directory messages printed by `-w' and implicitly in sub-makes are now omitted if Make runs no commands and has no other messages to print.

* Make now detects files that for whatever reason have modification times in the future and gives an error.  Files with such impossible timestamps can result from unsynchronized clocks, or archived distributions containing bogus timestamps; they confuse Make's dependency engine thoroughly.

* The new directive `sinclude' is now recognized as another name for `-include', for compatibility with some other Makes.

* Aaron Digulla has contributed a port to AmigaDOS.  See README.Amiga for details, and direct all Amiga-related questions to .

* Rob Tulloh of Tivoli Systems has contributed a port to Windows NT or 95.  See README.W32 for details, and direct all Windows-related questions to .

Version 3.73

* Converted to use Autoconf version 2, so `configure' has some new options.  See INSTALL for details.

* You can now send a SIGUSR1 signal to Make to toggle printing of debugging output enabled by -d, at any time during the run.

Version 3.72

* DJ Delorie has ported Make to MS-DOS using the GO32 extender.  He is maintaining the DOS port, not the GNU Make maintainer; please direct bugs and questions for DOS to .  MS-DOS binaries are available for FTP from ftp.simtel.net in /pub/simtelnet/gnu/djgpp/.

* The `MAKEFLAGS' variable (in the environment or in a makefile) can now contain variable definitions itself; these are treated just like command-line variable definitions.  Make will automatically insert any variable definitions from the environment value of `MAKEFLAGS' or from the command line into the `MAKEFLAGS' value exported to children.  The `MAKEOVERRIDES' variable previously included in the value of `$(MAKE)' for sub-makes is now included in `MAKEFLAGS' instead.  As before, you can reset `MAKEOVERRIDES' in your makefile to avoid putting all the variables in the environment when its size is limited.
* If `.DELETE_ON_ERROR' appears as a target, Make will delete the target of a rule if it has changed when its commands exit with a nonzero status, just as when the commands get a signal.

* The automatic variable `$+' is new.  It lists all the dependencies like `$^', but preserves duplicates listed in the makefile.  This is useful for linking rules, where library files sometimes need to be listed twice in the link order.

* You can now specify the `.IGNORE' and `.SILENT' special targets with dependencies to limit their effects to those files.  If a file appears as a dependency of `.IGNORE', then errors will be ignored while running the commands to update that file.  Likewise, if a file appears as a dependency of `.SILENT', then the commands to update that file will not be printed before they are run.  (This change was made to conform to POSIX.2.)

Version 3.71

* The automatic variables `$(@D)', `$(%D)', `$(*D)', `$(.  Please see the section of the GNU make manual entitled `Problems and Bugs' for information on submitting useful and complete bug reports.

You can also use the online bug tracking system in the Savannah GNU Make project to submit new problem reports or search for existing ones:

  http://savannah.gnu.org/bugs/?group=make

If you need help using GNU make, try these forums:

  help-make@gnu.org
  help-utils@gnu.org
  news:gnu.utils.help
  news:gnu.utils.bug
  http://savannah.gnu.org/support/?group=make

You may also find interesting patches to GNU Make available here:

  http://savannah.gnu.org/patch/?group=make

Note that these patches are provided by our users as a service and we make no statements regarding their correctness.  Please contact the authors directly if you have a problem or suggestion for a patch available on this page.

CVS Access
----------

The GNU make source repository is available via anonymous CVS from the GNU Subversions CVS server; look here for details:

  http://savannah.gnu.org/cvs/?group=make

Please note: you won't be able to build GNU make from CVS without installing appropriate maintainer's tools, such as GNU m4, automake, autoconf, Perl, GNU make, and GCC.  See the README.cvs file for hints on how to build GNU make once these tools are available.  We make no guarantees about the contents or quality of the latest code in the CVS repository: it is not unheard of for code that is known to be broken to be checked in.  Use at your own risk.

System-specific Notes
---------------------

It has been reported that the XLC 1.2 compiler on AIX 3.2 is buggy such that if you compile make with `cc -O', it will not work correctly.  It is said that using `cc' without `-O' does work.

The standard /bin/sh on SunOS 4.1.3_U1 and 4.1.4 is broken and cannot be used to configure GNU make.  Please install a different shell such as bash or pdksh in order to run "configure".  See this message for more information:

  http://mail.gnu.org/archive/html/bug-autoconf/2003-10/msg00190.html

One area that is often a problem in configuration and porting is the code to check the system's current load average.  To make it easier to test and debug this code, you can do `make check-loadavg' to see if it works properly on your system.  (You must run `configure' beforehand, but you need not build Make itself to run this test.)

Another potential source of porting problems is the support for large files (LFS) in configure for those operating systems that provide it.  Please report any bugs that you find in this area.
If you run into difficulties, then as a workaround you should be able to disable LFS by adding the `--disable-largefile' option to the `configure' script.

On systems that support micro- and nano-second timestamp values and where stat(2) provides this information, GNU make will use it when comparing timestamps to get the most accurate possible result.  However, note that many current implementations of tools that *set* timestamps do not preserve micro- or nano-second granularity.  This means that "cp -p" and other similar tools (tar, etc.) may not exactly duplicate timestamps with micro- and nano-second granularity on some systems.  If your build system contains rules that depend on proper behavior of tools like "cp -p", you should consider using the .LOW_RESOLUTION_TIME pseudo-target to force make to treat them properly.  See the manual for details.

Ports
-----

- See README.customs for details on integrating GNU make with the Customs distributed build environment from the Pmake distribution.

- See readme.vms for details about GNU Make on OpenVMS.

- See README.Amiga for details about GNU Make on AmigaDOS.

- See README.W32 for details about GNU Make on Windows NT, 95, or 98.

- See README.DOS for compilation instructions on MS-DOS and MS-Windows using DJGPP tools.  A precompiled binary of the MSDOS port of GNU Make is available as part of DJGPP; see the WWW page http://www.delorie.com/djgpp/ for more information.

Please note there are two _separate_ ports of GNU make for Microsoft systems: a native Windows tool built with (for example) MSVC or Cygwin, and a DOS-based tool built with DJGPP.  Please be sure you are looking at the right README!

-------------------------------------------------------------------------------

Copyright (C) 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc.  This file is part of GNU Make.

GNU Make is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version.

GNU Make is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with GNU Make; see the file COPYING.  If not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.

make-doc-non-dfsg-3.81.orig/aclocal.m40000644000175000017500000010212010416557457017721 0ustar srivastasrivasta# generated automatically by aclocal 1.9.6 -*- Autoconf -*- # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, # 2005 Free Software Foundation, Inc. # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. # Copyright (C) 2002, 2003, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved.
# AM_AUTOMAKE_VERSION(VERSION) # ---------------------------- # Automake X.Y traces this macro to ensure aclocal.m4 has been # generated from the m4 files accompanying Automake X.Y. AC_DEFUN([AM_AUTOMAKE_VERSION], [am__api_version="1.9"]) # AM_SET_CURRENT_AUTOMAKE_VERSION # ------------------------------- # Call AM_AUTOMAKE_VERSION so it can be traced. # This function is AC_REQUIREd by AC_INIT_AUTOMAKE. AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION], [AM_AUTOMAKE_VERSION([1.9.6])]) # AM_AUX_DIR_EXPAND -*- Autoconf -*- # Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets # $ac_aux_dir to `$srcdir/foo'. In other projects, it is set to # `$srcdir', `$srcdir/..', or `$srcdir/../..'. # # Of course, Automake must honor this variable whenever it calls a # tool from the auxiliary directory. The problem is that $srcdir (and # therefore $ac_aux_dir as well) can be either absolute or relative, # depending on how configure is run. This is pretty annoying, since # it makes $ac_aux_dir quite unusable in subdirectories: in the top # source directory, any form will work fine, but in subdirectories a # relative path needs to be adjusted first. # # $ac_aux_dir/missing # fails when called from a subdirectory if $ac_aux_dir is relative # $top_srcdir/$ac_aux_dir/missing # fails if $ac_aux_dir is absolute, # fails when called from a subdirectory in a VPATH build with # a relative $ac_aux_dir # # The reason of the latter failure is that $top_srcdir and $ac_aux_dir # are both prefixed by $srcdir. In an in-source build this is usually # harmless because $srcdir is `.', but things will broke when you # start a VPATH build or use an absolute $srcdir. # # So we could use something similar to $top_srcdir/$ac_aux_dir/missing, # iff we strip the leading $srcdir from $ac_aux_dir. That would be: # am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"` # and then we would define $MISSING as # MISSING="\${SHELL} $am_aux_dir/missing" # This will work as long as MISSING is not called from configure, because # unfortunately $(top_srcdir) has no meaning in configure. # However there are other variables, like CC, which are often used in # configure, and could therefore not use this "fixed" $ac_aux_dir. # # Another solution, used here, is to always expand $ac_aux_dir to an # absolute PATH. The drawback is that using absolute paths prevent a # configured tree to be moved without reconfiguration. AC_DEFUN([AM_AUX_DIR_EXPAND], [dnl Rely on autoconf to set up CDPATH properly. AC_PREREQ([2.50])dnl # expand $ac_aux_dir to an absolute path am_aux_dir=`cd $ac_aux_dir && pwd` ]) # AM_CONDITIONAL -*- Autoconf -*- # Copyright (C) 1997, 2000, 2001, 2003, 2004, 2005 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 7 # AM_CONDITIONAL(NAME, SHELL-CONDITION) # ------------------------------------- # Define a conditional. 
AC_DEFUN([AM_CONDITIONAL], [AC_PREREQ(2.52)dnl ifelse([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])], [$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl AC_SUBST([$1_TRUE]) AC_SUBST([$1_FALSE]) if $2; then $1_TRUE= $1_FALSE='#' else $1_TRUE='#' $1_FALSE= fi AC_CONFIG_COMMANDS_PRE( [if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then AC_MSG_ERROR([[conditional "$1" was never defined. Usually this means the macro was only invoked conditionally.]]) fi])]) # Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 8 # There are a few dirty hacks below to avoid letting `AC_PROG_CC' be # written in clear, in which case automake, when reading aclocal.m4, # will think it sees a *use*, and therefore will trigger all it's # C support machinery. Also note that it means that autoscan, seeing # CC etc. in the Makefile, will ask for an AC_PROG_CC use... # _AM_DEPENDENCIES(NAME) # ---------------------- # See how the compiler implements dependency checking. # NAME is "CC", "CXX", "GCJ", or "OBJC". # We try a few techniques and use that to set a single cache variable. # # We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was # modified to invoke _AM_DEPENDENCIES(CC); we would have a circular # dependency, and given that the user is not expected to run this macro, # just rely on AC_PROG_CC. AC_DEFUN([_AM_DEPENDENCIES], [AC_REQUIRE([AM_SET_DEPDIR])dnl AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl AC_REQUIRE([AM_MAKE_INCLUDE])dnl AC_REQUIRE([AM_DEP_TRACK])dnl ifelse([$1], CC, [depcc="$CC" am_compiler_list=], [$1], CXX, [depcc="$CXX" am_compiler_list=], [$1], OBJC, [depcc="$OBJC" am_compiler_list='gcc3 gcc'], [$1], GCJ, [depcc="$GCJ" am_compiler_list='gcc3 gcc'], [depcc="$$1" am_compiler_list=]) AC_CACHE_CHECK([dependency style of $depcc], [am_cv_$1_dependencies_compiler_type], [if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named `D' -- because `-MD' means `put the output # in D'. mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_$1_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp` fi for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. 
: > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with # Solaris 8's {/usr,}/bin/sh. touch sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf case $depmode in nosideeffect) # after this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; none) break ;; esac # We check with `-c' and `-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle `-M -o', and we need to detect this. if depmode=$depmode \ source=sub/conftest.c object=sub/conftest.${OBJEXT-o} \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c -o sub/conftest.${OBJEXT-o} sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftest.${OBJEXT-o} sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_$1_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_$1_dependencies_compiler_type=none fi ]) AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type]) AM_CONDITIONAL([am__fastdep$1], [ test "x$enable_dependency_tracking" != xno \ && test "$am_cv_$1_dependencies_compiler_type" = gcc3]) ]) # AM_SET_DEPDIR # ------------- # Choose a directory name for dependency files. # This macro is AC_REQUIREd in _AM_DEPENDENCIES AC_DEFUN([AM_SET_DEPDIR], [AC_REQUIRE([AM_SET_LEADING_DOT])dnl AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl ]) # AM_DEP_TRACK # ------------ AC_DEFUN([AM_DEP_TRACK], [AC_ARG_ENABLE(dependency-tracking, [ --disable-dependency-tracking speeds up one-time build --enable-dependency-tracking do not reject slow dependency extractors]) if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' fi AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno]) AC_SUBST([AMDEPBACKSLASH]) ]) # Generate code to set up dependency tracking. -*- Autoconf -*- # Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. #serial 3 # _AM_OUTPUT_DEPENDENCY_COMMANDS # ------------------------------ AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS], [for mf in $CONFIG_FILES; do # Strip MF so we end up with the name of the file. mf=`echo "$mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile or not. # We used to match only the files named `Makefile.in', but # some people rename them; so instead we look at the file content. # Grep'ing the first line is not enough: some people post-process # each Makefile.in and add a new line on top of each file to say so. 
# So let's grep whole file. if grep '^#.*generated by automake' $mf > /dev/null 2>&1; then dirpart=`AS_DIRNAME("$mf")` else continue fi # Extract the definition of DEPDIR, am__include, and am__quote # from the Makefile without running `make'. DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` test -z "$DEPDIR" && continue am__include=`sed -n 's/^am__include = //p' < "$mf"` test -z "am__include" && continue am__quote=`sed -n 's/^am__quote = //p' < "$mf"` # When using ansi2knr, U may be empty or an underscore; expand it U=`sed -n 's/^U = //p' < "$mf"` # Find all dependency output files, they are included files with # $(DEPDIR) in their names. We invoke sed twice because it is the # simplest approach to changing $(DEPDIR) to its actual value in the # expansion. for file in `sed -n " s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`AS_DIRNAME(["$file"])` AS_MKDIR_P([$dirpart/$fdir]) # echo "creating $dirpart/$file" echo '# dummy' > "$dirpart/$file" done done ])# _AM_OUTPUT_DEPENDENCY_COMMANDS # AM_OUTPUT_DEPENDENCY_COMMANDS # ----------------------------- # This macro should only be invoked once -- use via AC_REQUIRE. # # This code is only required when automatic dependency tracking # is enabled. FIXME. This creates each `.P' file that we will # need in order to bootstrap the dependency handling code. AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS], [AC_CONFIG_COMMANDS([depfiles], [test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS], [AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"]) ]) # Copyright (C) 1996, 1998, 1999, 2000, 2001, 2002, 2003, 2005 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 3 AC_DEFUN([AM_WITH_DMALLOC], [AC_MSG_CHECKING([if malloc debugging is wanted]) AC_ARG_WITH(dmalloc, [ --with-dmalloc use dmalloc, as in http://www.dmalloc.com/dmalloc.tar.gz], [if test "$withval" = yes; then AC_MSG_RESULT(yes) AC_DEFINE(WITH_DMALLOC,1, [Define if using the dmalloc debugging malloc package]) LIBS="$LIBS -ldmalloc" LDFLAGS="$LDFLAGS -g" else AC_MSG_RESULT(no) fi], [AC_MSG_RESULT(no)]) ]) AU_DEFUN([fp_WITH_DMALLOC], [AM_WITH_DMALLOC]) # Do all the work for Automake. -*- Autoconf -*- # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 12 # This macro actually does too much. Some checks are only needed if # your package does certain things. But this isn't really a big deal. # AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE]) # AM_INIT_AUTOMAKE([OPTIONS]) # ----------------------------------------------- # The call with PACKAGE and VERSION arguments is the old style # call (pre autoconf-2.50), which is being phased out. PACKAGE # and VERSION should now be passed to AC_INIT and removed from # the call to AM_INIT_AUTOMAKE. # We support both call styles for the transition. After # the next Automake release, Autoconf can make the AC_INIT # arguments mandatory, and then we can depend on a new Autoconf # release and drop the old call support. 
AC_DEFUN([AM_INIT_AUTOMAKE], [AC_PREREQ([2.58])dnl dnl Autoconf wants to disallow AM_ names. We explicitly allow dnl the ones we care about. m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl AC_REQUIRE([AC_PROG_INSTALL])dnl # test to see if srcdir already configured if test "`cd $srcdir && pwd`" != "`pwd`" && test -f $srcdir/config.status; then AC_MSG_ERROR([source directory already configured; run "make distclean" there first]) fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi AC_SUBST([CYGPATH_W]) # Define the identity of the package. dnl Distinguish between old-style and new-style calls. m4_ifval([$2], [m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl AC_SUBST([PACKAGE], [$1])dnl AC_SUBST([VERSION], [$2])], [_AM_SET_OPTIONS([$1])dnl AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl _AM_IF_OPTION([no-define],, [AC_DEFINE_UNQUOTED(PACKAGE, "$PACKAGE", [Name of package]) AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Version number of package])])dnl # Some tools Automake needs. AC_REQUIRE([AM_SANITY_CHECK])dnl AC_REQUIRE([AC_ARG_PROGRAM])dnl AM_MISSING_PROG(ACLOCAL, aclocal-${am__api_version}) AM_MISSING_PROG(AUTOCONF, autoconf) AM_MISSING_PROG(AUTOMAKE, automake-${am__api_version}) AM_MISSING_PROG(AUTOHEADER, autoheader) AM_MISSING_PROG(MAKEINFO, makeinfo) AM_PROG_INSTALL_SH AM_PROG_INSTALL_STRIP AC_REQUIRE([AM_PROG_MKDIR_P])dnl # We need awk for the "check" target. The system "awk" is bad on # some platforms. AC_REQUIRE([AC_PROG_AWK])dnl AC_REQUIRE([AC_PROG_MAKE_SET])dnl AC_REQUIRE([AM_SET_LEADING_DOT])dnl _AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])], [_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])], [_AM_PROG_TAR([v7])])]) _AM_IF_OPTION([no-dependencies],, [AC_PROVIDE_IFELSE([AC_PROG_CC], [_AM_DEPENDENCIES(CC)], [define([AC_PROG_CC], defn([AC_PROG_CC])[_AM_DEPENDENCIES(CC)])])dnl AC_PROVIDE_IFELSE([AC_PROG_CXX], [_AM_DEPENDENCIES(CXX)], [define([AC_PROG_CXX], defn([AC_PROG_CXX])[_AM_DEPENDENCIES(CXX)])])dnl ]) ]) # When config.status generates a header, we must update the stamp-h file. # This file resides in the same directory as the config header # that is generated. The stamp files are numbered to have different names. # Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the # loop where config.status creates the headers, so we can generate # our stamp files there. AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK], [# Compute $1's index in $config_headers. _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $1 | $1:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $1" >`AS_DIRNAME([$1])`/stamp-h[]$_am_stamp_count]) # Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_PROG_INSTALL_SH # ------------------ # Define $install_sh. AC_DEFUN([AM_PROG_INSTALL_SH], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl install_sh=${install_sh-"$am_aux_dir/install-sh"} AC_SUBST(install_sh)]) # Copyright (C) 2003, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. 
# serial 2 # Check whether the underlying file-system supports filenames # with a leading dot. For instance MS-DOS doesn't. AC_DEFUN([AM_SET_LEADING_DOT], [rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null AC_SUBST([am__leading_dot])]) # Check to see how 'make' treats includes. -*- Autoconf -*- # Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 3 # AM_MAKE_INCLUDE() # ----------------- # Check to see how make treats includes. AC_DEFUN([AM_MAKE_INCLUDE], [am_make=${MAKE-make} cat > confinc << 'END' am__doit: @echo done .PHONY: am__doit END # If we don't find an include directive, just comment out the code. AC_MSG_CHECKING([for style of include used by $am_make]) am__include="#" am__quote= _am_result=none # First try GNU make style include. echo "include confinc" > confmf # We grep out `Entering directory' and `Leaving directory' # messages which can occur if `w' ends up in MAKEFLAGS. # In particular we don't look at `^make:' because GNU make might # be invoked under some other name (usually "gmake"), in which # case it prints its new name instead of `make'. if test "`$am_make -s -f confmf 2> /dev/null | grep -v 'ing directory'`" = "done"; then am__include=include am__quote= _am_result=GNU fi # Now try BSD make style include. if test "$am__include" = "#"; then echo '.include "confinc"' > confmf if test "`$am_make -s -f confmf 2> /dev/null`" = "done"; then am__include=.include am__quote="\"" _am_result=BSD fi fi AC_SUBST([am__include]) AC_SUBST([am__quote]) AC_MSG_RESULT([$_am_result]) rm -f confinc confmf ]) # Copyright (C) 1999, 2000, 2001, 2003, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 3 # AM_PROG_CC_C_O # -------------- # Like AC_PROG_CC_C_O, but changed for automake. AC_DEFUN([AM_PROG_CC_C_O], [AC_REQUIRE([AC_PROG_CC_C_O])dnl AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl # FIXME: we rely on the cache variable name because # there is no other way. set dummy $CC ac_cc=`echo $[2] | sed ['s/[^a-zA-Z0-9_]/_/g;s/^[0-9]/_/']` if eval "test \"`echo '$ac_cv_prog_cc_'${ac_cc}_c_o`\" != yes"; then # Losing compiler, so override with the script. # FIXME: It is wrong to rewrite CC. # But if we don't then we get into trouble of one sort or another. # A longer-term fix would be to have automake use am__CC in this case, # and then we could set am__CC="\$(top_srcdir)/compile \$(CC)" CC="$am_aux_dir/compile $CC" fi ]) # Fake the existence of programs that GNU maintainers use. -*- Autoconf -*- # Copyright (C) 1997, 1999, 2000, 2001, 2003, 2005 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 4 # AM_MISSING_PROG(NAME, PROGRAM) # ------------------------------ AC_DEFUN([AM_MISSING_PROG], [AC_REQUIRE([AM_MISSING_HAS_RUN]) $1=${$1-"${am_missing_run}$2"} AC_SUBST($1)]) # AM_MISSING_HAS_RUN # ------------------ # Define MISSING if not defined so far and test if it supports --run. # If it does, set am_missing_run to use it, otherwise, to nothing. 
AC_DEFUN([AM_MISSING_HAS_RUN], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl test x"${MISSING+set}" = xset || MISSING="\${SHELL} $am_aux_dir/missing" # Use eval to expand $SHELL if eval "$MISSING --run true"; then am_missing_run="$MISSING --run " else am_missing_run= AC_MSG_WARN([`missing' script is too old or missing]) fi ]) # Copyright (C) 2003, 2004, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_PROG_MKDIR_P # --------------- # Check whether `mkdir -p' is supported, fallback to mkinstalldirs otherwise. # # Automake 1.8 used `mkdir -m 0755 -p --' to ensure that directories # created by `make install' are always world readable, even if the # installer happens to have an overly restrictive umask (e.g. 077). # This was a mistake. There are at least two reasons why we must not # use `-m 0755': # - it causes special bits like SGID to be ignored, # - it may be too restrictive (some setups expect 775 directories). # # Do not use -m 0755 and let people choose whatever they expect by # setting umask. # # We cannot accept any implementation of `mkdir' that recognizes `-p'. # Some implementations (such as Solaris 8's) are not thread-safe: if a # parallel make tries to run `mkdir -p a/b' and `mkdir -p a/c' # concurrently, both version can detect that a/ is missing, but only # one can create it and the other will error out. Consequently we # restrict ourselves to GNU make (using the --version option ensures # this.) AC_DEFUN([AM_PROG_MKDIR_P], [if mkdir -p --version . >/dev/null 2>&1 && test ! -d ./--version; then # We used to keeping the `.' as first argument, in order to # allow $(mkdir_p) to be used without argument. As in # $(mkdir_p) $(somedir) # where $(somedir) is conditionally defined. However this is wrong # for two reasons: # 1. if the package is installed by a user who cannot write `.' # make install will fail, # 2. the above comment should most certainly read # $(mkdir_p) $(DESTDIR)$(somedir) # so it does not work when $(somedir) is undefined and # $(DESTDIR) is not. # To support the latter case, we have to write # test -z "$(somedir)" || $(mkdir_p) $(DESTDIR)$(somedir), # so the `.' trick is pointless. mkdir_p='mkdir -p --' else # On NextStep and OpenStep, the `mkdir' command does not # recognize any option. It will interpret all options as # directories to create, and then abort because `.' already # exists. for d in ./-p ./--version; do test -d $d && rmdir $d done # $(mkinstalldirs) is defined by Automake if mkinstalldirs exists. if test -f "$ac_aux_dir/mkinstalldirs"; then mkdir_p='$(mkinstalldirs)' else mkdir_p='$(install_sh) -d' fi fi AC_SUBST([mkdir_p])]) # Helper functions for option handling. -*- Autoconf -*- # Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 3 # _AM_MANGLE_OPTION(NAME) # ----------------------- AC_DEFUN([_AM_MANGLE_OPTION], [[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])]) # _AM_SET_OPTION(NAME) # ------------------------------ # Set option NAME. Presently that only means defining a flag for this option. 
AC_DEFUN([_AM_SET_OPTION], [m4_define(_AM_MANGLE_OPTION([$1]), 1)]) # _AM_SET_OPTIONS(OPTIONS) # ---------------------------------- # OPTIONS is a space-separated list of Automake options. AC_DEFUN([_AM_SET_OPTIONS], [AC_FOREACH([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])]) # _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET]) # ------------------------------------------- # Execute IF-SET if OPTION is set, IF-NOT-SET otherwise. AC_DEFUN([_AM_IF_OPTION], [m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])]) # Copyright (C) 1996, 1997, 1998, 2000, 2001, 2002, 2003, 2005 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 4 AC_DEFUN([AM_C_PROTOTYPES], [AC_REQUIRE([AC_C_PROTOTYPES]) if test "$ac_cv_prog_cc_stdc" != no; then U= ANSI2KNR= else U=_ ANSI2KNR=./ansi2knr fi # Ensure some checks needed by ansi2knr itself. AC_REQUIRE([AC_HEADER_STDC]) AC_CHECK_HEADERS(string.h) AC_SUBST(U)dnl AC_SUBST(ANSI2KNR)dnl ]) AU_DEFUN([fp_C_PROTOTYPES], [AM_C_PROTOTYPES]) # Check to make sure that the build environment is sane. -*- Autoconf -*- # Copyright (C) 1996, 1997, 2000, 2001, 2003, 2005 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 4 # AM_SANITY_CHECK # --------------- AC_DEFUN([AM_SANITY_CHECK], [AC_MSG_CHECKING([whether build environment is sane]) # Just in case sleep 1 echo timestamp > conftest.file # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` if test "$[*]" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftest.file` fi rm -f conftest.file if test "$[*]" != "X $srcdir/configure conftest.file" \ && test "$[*]" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken alias in your environment]) fi test "$[2]" = conftest.file ) then # Ok. : else AC_MSG_ERROR([newly created file is older than distributed files! Check your system clock]) fi AC_MSG_RESULT(yes)]) # Copyright (C) 2001, 2003, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # AM_PROG_INSTALL_STRIP # --------------------- # One issue with vendor `install' (even GNU) is that you can't # specify the program used to strip binaries. This is especially # annoying in cross-compiling environments, where the build's strip # is unlikely to handle the host's binaries. # Fortunately install-sh will honor a STRIPPROG variable, so we # always use install-sh in `make install-strip', and initialize # STRIPPROG with the value of the STRIP variable (set by the user). 
AC_DEFUN([AM_PROG_INSTALL_STRIP], [AC_REQUIRE([AM_PROG_INSTALL_SH])dnl # Installed binaries are usually stripped using `strip' when the user # run `make install-strip'. However `strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the `STRIP' environment variable to overrule this program. dnl Don't test for $cross_compiling = yes, because it might be `maybe'. if test "$cross_compiling" != no; then AC_CHECK_TOOL([STRIP], [strip], :) fi INSTALL_STRIP_PROGRAM="\${SHELL} \$(install_sh) -c -s" AC_SUBST([INSTALL_STRIP_PROGRAM])]) # Check how to create a tarball. -*- Autoconf -*- # Copyright (C) 2004, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 2 # _AM_PROG_TAR(FORMAT) # -------------------- # Check how to create a tarball in format FORMAT. # FORMAT should be one of `v7', `ustar', or `pax'. # # Substitute a variable $(am__tar) that is a command # writing to stdout a FORMAT-tarball containing the directory # $tardir. # tardir=directory && $(am__tar) > result.tar # # Substitute a variable $(am__untar) that extract such # a tarball read from stdin. # $(am__untar) < result.tar AC_DEFUN([_AM_PROG_TAR], [# Always define AMTAR for backward compatibility. AM_MISSING_PROG([AMTAR], [tar]) m4_if([$1], [v7], [am__tar='${AMTAR} chof - "$$tardir"'; am__untar='${AMTAR} xf -'], [m4_case([$1], [ustar],, [pax],, [m4_fatal([Unknown tar format])]) AC_MSG_CHECKING([how to create a $1 tar archive]) # Loop over all known methods to create a tar archive until one works. _am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none' _am_tools=${am_cv_prog_tar_$1-$_am_tools} # Do not fold the above two line into one, because Tru64 sh and # Solaris sh will not grok spaces in the rhs of `-'. for _am_tool in $_am_tools do case $_am_tool in gnutar) for _am_tar in tar gnutar gtar; do AM_RUN_LOG([$_am_tar --version]) && break done am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"' am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"' am__untar="$_am_tar -xf -" ;; plaintar) # Must skip GNU tar: if it does not support --format= it doesn't create # ustar tarball either. (tar --version) >/dev/null 2>&1 && continue am__tar='tar chf - "$$tardir"' am__tar_='tar chf - "$tardir"' am__untar='tar xf -' ;; pax) am__tar='pax -L -x $1 -w "$$tardir"' am__tar_='pax -L -x $1 -w "$tardir"' am__untar='pax -r' ;; cpio) am__tar='find "$$tardir" -print | cpio -o -H $1 -L' am__tar_='find "$tardir" -print | cpio -o -H $1 -L' am__untar='cpio -i -H $1 -d' ;; none) am__tar=false am__tar_=false am__untar=false ;; esac # If the value was cached, stop now. We just wanted to have am__tar # and am__untar set. 
test -n "${am_cv_prog_tar_$1}" && break # tar/untar a dummy directory, and stop if the command works rm -rf conftest.dir mkdir conftest.dir echo GrepMe > conftest.dir/file AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar]) rm -rf conftest.dir if test -s conftest.tar; then AM_RUN_LOG([$am__untar /dev/null 2>&1 && break fi done rm -rf conftest.dir AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool]) AC_MSG_RESULT([$am_cv_prog_tar_$1])]) AC_SUBST([am__tar]) AC_SUBST([am__untar]) ]) # _AM_PROG_TAR m4_include([config/dospaths.m4]) m4_include([config/gettext.m4]) m4_include([config/iconv.m4]) m4_include([config/lib-ld.m4]) m4_include([config/lib-link.m4]) m4_include([config/lib-prefix.m4]) m4_include([config/nls.m4]) m4_include([config/po.m4]) m4_include([config/progtest.m4]) m4_include([acinclude.m4]) make-doc-non-dfsg-3.81.orig/configure0000755000175000017500000024752410416557457020012 0ustar srivastasrivasta#! /bin/sh # From configure.in Id: configure.in,v 1.142 2006/04/01 06:36:40 psmith Exp . # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.59 for GNU make 3.81. # # Report bugs to . # # Copyright (C) 2003 Free Software Foundation, Inc. # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. ## --------------------- ## ## M4sh Initialization. ## ## --------------------- ## # Be Bourne compatible if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' elif test -n "${BASH_VERSION+set}" && (set -o posix) >/dev/null 2>&1; then set -o posix fi DUALCASE=1; export DUALCASE # for MKS sh # Support unset when possible. if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then as_unset=unset else as_unset=false fi # Work around bugs in pre-3.0 UWIN ksh. $as_unset ENV MAIL MAILPATH PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. for as_var in \ LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \ LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \ LC_TELEPHONE LC_TIME do if (set +x; test -z "`(eval $as_var=C; export $as_var) 2>&1`"); then eval $as_var=C; export $as_var else $as_unset $as_var fi done # Required to use basename. if expr a : '\(a\)' >/dev/null 2>&1; then as_expr=expr else as_expr=false fi if (basename /) >/dev/null 2>&1 && test "X`basename / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi # Name of the executable. as_me=`$as_basename "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)$' \| \ . : '\(.\)' 2>/dev/null || echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/; q; } /^X\/\(\/\/\)$/{ s//\1/; q; } /^X\/\(\/\).*/{ s//\1/; q; } s/.*/./; q'` # PATH needs CR, and LINENO needs CR and PATH. # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then echo "#! 
/bin/sh" >conf$$.sh echo "exit 0" >>conf$$.sh chmod +x conf$$.sh if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then PATH_SEPARATOR=';' else PATH_SEPARATOR=: fi rm -f conf$$.sh fi as_lineno_1=$LINENO as_lineno_2=$LINENO as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" || { # Find who we are. Look in the path if we contain no path at all # relative or not. case $0 in *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then { echo "$as_me: error: cannot find myself; rerun with an absolute path" >&2 { (exit 1); exit 1; }; } fi case $CONFIG_SHELL in '') as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for as_base in sh bash ksh sh5; do case $as_dir in /*) if ("$as_dir/$as_base" -c ' as_lineno_1=$LINENO as_lineno_2=$LINENO as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" ') 2>/dev/null; then $as_unset BASH_ENV || test "${BASH_ENV+set}" != set || { BASH_ENV=; export BASH_ENV; } $as_unset ENV || test "${ENV+set}" != set || { ENV=; export ENV; } CONFIG_SHELL=$as_dir/$as_base export CONFIG_SHELL exec "$CONFIG_SHELL" "$0" ${1+"$@"} fi;; esac done done ;; esac # Create $as_me.lineno as a copy of $as_myself, but with $LINENO # uniformly replaced by the line number. The first 'sed' inserts a # line-number line before each line; the second 'sed' does the real # work. The second script uses 'N' to pair each line-number line # with the numbered line, and appends trailing '-' during # substitution so that $LINENO is not a special case at line end. # (Raja R Harinath suggested sed '=', and Paul Eggert wrote the # second 'sed' script. Blame Lee E. McMahon for sed's syntax. :-) sed '=' <$as_myself | sed ' N s,$,-, : loop s,^\(['$as_cr_digits']*\)\(.*\)[$]LINENO\([^'$as_cr_alnum'_]\),\1\2\1\3, t loop s,-$,, s,^['$as_cr_digits']*\n,, ' >$as_me.lineno && chmod +x $as_me.lineno || { echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2 { (exit 1); exit 1; }; } # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensible to this). . ./$as_me.lineno # Exit status is that of the last command. exit } case `echo "testing\c"; echo 1,2,3`,`echo -n testing; echo 1,2,3` in *c*,-n*) ECHO_N= ECHO_C=' ' ECHO_T=' ' ;; *c*,* ) ECHO_N=-n ECHO_C= ECHO_T= ;; *) ECHO_N= ECHO_C='\c' ECHO_T= ;; esac if expr a : '\(a\)' >/dev/null 2>&1; then as_expr=expr else as_expr=false fi rm -f conf$$ conf$$.exe conf$$.file echo >conf$$.file if ln -s conf$$.file conf$$ 2>/dev/null; then # We could just check for DJGPP; but this test a) works b) is more generic # and c) will remain valid once DJGPP supports symlinks (DJGPP 2.04). if test -f conf$$.exe; then # Don't use ln at all; we don't have any links as_ln_s='cp -p' else as_ln_s='ln -s' fi elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -p' fi rm -f conf$$ conf$$.exe conf$$.file if mkdir -p . 
2>/dev/null; then as_mkdir_p=: else test -d ./-p && rmdir ./-p as_mkdir_p=false fi as_executable_p="test -f" # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" # IFS # We need space, tab and new line, in precisely that order. as_nl=' ' IFS=" $as_nl" # CDPATH. $as_unset CDPATH # Name of the host. # hostname on some systems (SVR3.2, Linux) returns a bogus exit status, # so uname gets run too. ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q` exec 6>&1 # # Initializations. # ac_default_prefix=/usr/local ac_config_libobj_dir=. cross_compiling=no subdirs= MFLAGS= MAKEFLAGS= SHELL=${CONFIG_SHELL-/bin/sh} # Maximum number of lines to put in a shell here document. # This variable seems obsolete. It should probably be removed, and # only ac_max_sed_lines should be used. : ${ac_max_here_lines=38} # Identity of this package. PACKAGE_NAME='GNU make' PACKAGE_TARNAME='make' PACKAGE_VERSION='3.81' PACKAGE_STRING='GNU make 3.81' PACKAGE_BUGREPORT='bug-make@gnu.org' ac_subst_vars='SHELL PATH_SEPARATOR PACKAGE_NAME PACKAGE_TARNAME PACKAGE_VERSION PACKAGE_STRING PACKAGE_BUGREPORT exec_prefix prefix program_transform_name bindir sbindir libexecdir datadir sysconfdir sharedstatedir localstatedir libdir includedir oldincludedir infodir mandir build_alias host_alias target_alias DEFS ECHO_C ECHO_N ECHO_T LIBS MAINTAINER_MODE_TRUE MAINTAINER_MODE_FALSE MAINT INSTALL_PROGRAM INSTALL_SCRIPT INSTALL_DATA CYGPATH_W PACKAGE VERSION ACLOCAL AUTOCONF AUTOMAKE AUTOHEADER MAKEINFO install_sh STRIP ac_ct_STRIP INSTALL_STRIP_PROGRAM mkdir_p AWK SET_MAKE am__leading_dot AMTAR am__tar am__untar PERL LIBOBJS LTLIBOBJS' ac_subst_files='' # Initialize some variables set by options. ac_init_help= ac_init_version=false # The variables have the same names as the options, with # dashes changed to underlines. cache_file=/dev/null exec_prefix=NONE no_create= no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= verbose= x_includes=NONE x_libraries=NONE # Installation directory options. # These are left unexpanded so users can "make install exec_prefix=/foo" # and all the variables that are supposed to be based on exec_prefix # by default will actually change. # Use braces instead of parens because sh, perl, etc. also accept them. bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datadir='${prefix}/share' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' libdir='${exec_prefix}/lib' includedir='${prefix}/include' oldincludedir='/usr/include' infodir='${prefix}/info' mandir='${prefix}/man' ac_prev= for ac_option do # If the previous option needs an argument, assign it. if test -n "$ac_prev"; then eval "$ac_prev=\$ac_option" ac_prev= continue fi ac_optarg=`expr "x$ac_option" : 'x[^=]*=\(.*\)'` # Accept the important Cygnus configure options, so we can diagnose typos. 
case $ac_option in -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir=$ac_optarg ;; -build | --build | --buil | --bui | --bu) ac_prev=build_alias ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build_alias=$ac_optarg ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file=$ac_optarg ;; --config-cache | -C) cache_file=config.cache ;; -datadir | --datadir | --datadi | --datad | --data | --dat | --da) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=* | --data=* | --dat=* \ | --da=*) datadir=$ac_optarg ;; -disable-* | --disable-*) ac_feature=`expr "x$ac_option" : 'x-*disable-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_feature" : ".*[^-_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid feature name: $ac_feature" >&2 { (exit 1); exit 1; }; } ac_feature=`echo $ac_feature | sed 's/-/_/g'` eval "enable_$ac_feature=no" ;; -enable-* | --enable-*) ac_feature=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_feature" : ".*[^-_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid feature name: $ac_feature" >&2 { (exit 1); exit 1; }; } ac_feature=`echo $ac_feature | sed 's/-/_/g'` case $ac_option in *=*) ac_optarg=`echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"`;; *) ac_optarg=yes ;; esac eval "enable_$ac_feature='$ac_optarg'" ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix=$ac_optarg ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. 
with_gas=yes ;; -help | --help | --hel | --he | -h) ac_init_help=long ;; -help=r* | --help=r* | --hel=r* | --he=r* | -hr*) ac_init_help=recursive ;; -help=s* | --help=s* | --hel=s* | --he=s* | -hs*) ac_init_help=short ;; -host | --host | --hos | --ho) ac_prev=host_alias ;; -host=* | --host=* | --hos=* | --ho=*) host_alias=$ac_optarg ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir=$ac_optarg ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir=$ac_optarg ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir=$ac_optarg ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir=$ac_optarg ;; -localstatedir | --localstatedir | --localstatedi | --localstated \ | --localstate | --localstat | --localsta | --localst \ | --locals | --local | --loca | --loc | --lo) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* \ | --locals=* | --local=* | --loca=* | --loc=* | --lo=*) localstatedir=$ac_optarg ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*) mandir=$ac_optarg ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. 
with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c | -n) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir=$ac_optarg ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix=$ac_optarg ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix=$ac_optarg ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix=$ac_optarg ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | --program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name=$ac_optarg ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir=$ac_optarg ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir=$ac_optarg ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site=$ac_optarg ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir=$ac_optarg ;; -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir=$ac_optarg ;; 
-target | --target | --targe | --targ | --tar | --ta | --t) ac_prev=target_alias ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target_alias=$ac_optarg ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers | -V) ac_init_version=: ;; -with-* | --with-*) ac_package=`expr "x$ac_option" : 'x-*with-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_package" : ".*[^-_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid package name: $ac_package" >&2 { (exit 1); exit 1; }; } ac_package=`echo $ac_package| sed 's/-/_/g'` case $ac_option in *=*) ac_optarg=`echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"`;; *) ac_optarg=yes ;; esac eval "with_$ac_package='$ac_optarg'" ;; -without-* | --without-*) ac_package=`expr "x$ac_option" : 'x-*without-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_package" : ".*[^-_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid package name: $ac_package" >&2 { (exit 1); exit 1; }; } ac_package=`echo $ac_package | sed 's/-/_/g'` eval "with_$ac_package=no" ;; --x) # Obsolete; use --with-x. with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes=$ac_optarg ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries=$ac_optarg ;; -*) { echo "$as_me: error: unrecognized option: $ac_option Try \`$0 --help' for more information." >&2 { (exit 1); exit 1; }; } ;; *=*) ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='` # Reject names that are not valid shell variable names. expr "x$ac_envvar" : ".*[^_$as_cr_alnum]" >/dev/null && { echo "$as_me: error: invalid variable name: $ac_envvar" >&2 { (exit 1); exit 1; }; } ac_optarg=`echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` eval "$ac_envvar='$ac_optarg'" export $ac_envvar ;; *) # FIXME: should be removed in autoconf 3.0. echo "$as_me: WARNING: you should use --build, --host, --target" >&2 expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null && echo "$as_me: WARNING: invalid host type: $ac_option" >&2 : ${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option} ;; esac done if test -n "$ac_prev"; then ac_option=--`echo $ac_prev | sed 's/_/-/g'` { echo "$as_me: error: missing argument to $ac_option" >&2 { (exit 1); exit 1; }; } fi # Be sure to have absolute paths. for ac_var in exec_prefix prefix do eval ac_val=$`echo $ac_var` case $ac_val in [\\/$]* | ?:[\\/]* | NONE | '' ) ;; *) { echo "$as_me: error: expected an absolute directory name for --$ac_var: $ac_val" >&2 { (exit 1); exit 1; }; };; esac done # Be sure to have absolute paths. for ac_var in bindir sbindir libexecdir datadir sysconfdir sharedstatedir \ localstatedir libdir includedir oldincludedir infodir mandir do eval ac_val=$`echo $ac_var` case $ac_val in [\\/$]* | ?:[\\/]* ) ;; *) { echo "$as_me: error: expected an absolute directory name for --$ac_var: $ac_val" >&2 { (exit 1); exit 1; }; };; esac done # There might be people who depend on the old broken behavior: `$host' # used to hold the argument of --host etc. 
# FIXME: To remove some day. build=$build_alias host=$host_alias target=$target_alias # FIXME: To remove some day. if test "x$host_alias" != x; then if test "x$build_alias" = x; then cross_compiling=maybe echo "$as_me: WARNING: If you wanted to set the --build type, don't use --host. If a cross compiler is detected then cross compile mode will be used." >&2 elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi fi ac_tool_prefix= test -n "$host_alias" && ac_tool_prefix=$host_alias- test "$silent" = yes && exec 6>/dev/null # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then its parent. ac_confdir=`(dirname "$0") 2>/dev/null || $as_expr X"$0" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$0" : 'X\(//\)[^/]' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| \ . : '\(.\)' 2>/dev/null || echo X"$0" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; } /^X\(\/\/\)[^/].*/{ s//\1/; q; } /^X\(\/\/\)$/{ s//\1/; q; } /^X\(\/\).*/{ s//\1/; q; } s/.*/./; q'` srcdir=$ac_confdir if test ! -r $srcdir/$ac_unique_file; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r $srcdir/$ac_unique_file; then if test "$ac_srcdir_defaulted" = yes; then { echo "$as_me: error: cannot find sources ($ac_unique_file) in $ac_confdir or .." >&2 { (exit 1); exit 1; }; } else { echo "$as_me: error: cannot find sources ($ac_unique_file) in $srcdir" >&2 { (exit 1); exit 1; }; } fi fi (cd $srcdir && test -r ./$ac_unique_file) 2>/dev/null || { echo "$as_me: error: sources are in $srcdir, but \`cd $srcdir' does not work" >&2 { (exit 1); exit 1; }; } srcdir=`echo "$srcdir" | sed 's%\([^\\/]\)[\\/]*$%\1%'` ac_env_build_alias_set=${build_alias+set} ac_env_build_alias_value=$build_alias ac_cv_env_build_alias_set=${build_alias+set} ac_cv_env_build_alias_value=$build_alias ac_env_host_alias_set=${host_alias+set} ac_env_host_alias_value=$host_alias ac_cv_env_host_alias_set=${host_alias+set} ac_cv_env_host_alias_value=$host_alias ac_env_target_alias_set=${target_alias+set} ac_env_target_alias_value=$target_alias ac_cv_env_target_alias_set=${target_alias+set} ac_cv_env_target_alias_value=$target_alias # # Report the --help message. # if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF \`configure' configures GNU make 3.81 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. See below for descriptions of some of the useful variables. Defaults for the options are specified in brackets. Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print \`checking...' 
messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for \`--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or \`..'] _ACEOF cat <<_ACEOF Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, \`make install' will install all the files in \`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify an installation prefix other than \`$ac_default_prefix' using \`--prefix', for instance \`--prefix=\$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] --datadir=DIR read-only architecture-independent data [PREFIX/share] --sysconfdir=DIR read-only single-machine data [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --libdir=DIR object code libraries [EPREFIX/lib] --includedir=DIR C header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --infodir=DIR info documentation [PREFIX/info] --mandir=DIR man documentation [PREFIX/man] _ACEOF cat <<\_ACEOF Program names: --program-prefix=PREFIX prepend PREFIX to installed program names --program-suffix=SUFFIX append SUFFIX to installed program names --program-transform-name=PROGRAM run sed PROGRAM on installed program names _ACEOF fi if test -n "$ac_init_help"; then case $ac_init_help in short | recursive ) echo "Configuration of GNU make 3.81:";; esac cat <<\_ACEOF Optional Features: --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --enable-maintainer-mode enable make rules and dependencies not useful (and sometimes confusing) to the casual installer Report bugs to . _ACEOF fi if test "$ac_init_help" = "recursive"; then # If there are subdirs, report their specific --help. ac_popdir=`pwd` for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue test -d $ac_dir || continue ac_builddir=. if test "$ac_dir" != .; then ac_dir_suffix=/`echo "$ac_dir" | sed 's,^\.[\\/],,'` # A "../" for each directory in $ac_dir_suffix. ac_top_builddir=`echo "$ac_dir_suffix" | sed 's,/[^\\/]*,../,g'` else ac_dir_suffix= ac_top_builddir= fi case $srcdir in .) # No --srcdir option. We are building in place. ac_srcdir=. if test -z "$ac_top_builddir"; then ac_top_srcdir=. else ac_top_srcdir=`echo $ac_top_builddir | sed 's,/$,,'` fi ;; [\\/]* | ?:[\\/]* ) # Absolute path. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ;; *) # Relative path. ac_srcdir=$ac_top_builddir$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_builddir$srcdir ;; esac # Do not use `cd foo && pwd` to compute absolute paths, because # the directories may not exist. case `pwd` in .) ac_abs_builddir="$ac_dir";; *) case "$ac_dir" in .) ac_abs_builddir=`pwd`;; [\\/]* | ?:[\\/]* ) ac_abs_builddir="$ac_dir";; *) ac_abs_builddir=`pwd`/"$ac_dir";; esac;; esac case $ac_abs_builddir in .) ac_abs_top_builddir=${ac_top_builddir}.;; *) case ${ac_top_builddir}. in .) 
ac_abs_top_builddir=$ac_abs_builddir;; [\\/]* | ?:[\\/]* ) ac_abs_top_builddir=${ac_top_builddir}.;; *) ac_abs_top_builddir=$ac_abs_builddir/${ac_top_builddir}.;; esac;; esac case $ac_abs_builddir in .) ac_abs_srcdir=$ac_srcdir;; *) case $ac_srcdir in .) ac_abs_srcdir=$ac_abs_builddir;; [\\/]* | ?:[\\/]* ) ac_abs_srcdir=$ac_srcdir;; *) ac_abs_srcdir=$ac_abs_builddir/$ac_srcdir;; esac;; esac case $ac_abs_builddir in .) ac_abs_top_srcdir=$ac_top_srcdir;; *) case $ac_top_srcdir in .) ac_abs_top_srcdir=$ac_abs_builddir;; [\\/]* | ?:[\\/]* ) ac_abs_top_srcdir=$ac_top_srcdir;; *) ac_abs_top_srcdir=$ac_abs_builddir/$ac_top_srcdir;; esac;; esac cd $ac_dir # Check for guested configure; otherwise get Cygnus style configure. if test -f $ac_srcdir/configure.gnu; then echo $SHELL $ac_srcdir/configure.gnu --help=recursive elif test -f $ac_srcdir/configure; then echo $SHELL $ac_srcdir/configure --help=recursive elif test -f $ac_srcdir/configure.ac || test -f $ac_srcdir/configure.in; then echo $ac_configure --help else echo "$as_me: WARNING: no configuration information is in $ac_dir" >&2 fi cd "$ac_popdir" done fi test -n "$ac_init_help" && exit 0 if $ac_init_version; then cat <<\_ACEOF GNU make configure 3.81 generated by GNU Autoconf 2.59 Copyright (C) 2003 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF exit 0 fi exec 5>config.log cat >&5 <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by GNU make $as_me 3.81, which was generated by GNU Autoconf 2.59. Invocation command line was $ $0 $@ _ACEOF { cat <<_ASUNAME ## --------- ## ## Platform. ## ## --------- ## hostname = `(hostname || uname -n) 2>/dev/null | sed 1q` uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown` /bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown` /bin/arch = `(/bin/arch) 2>/dev/null || echo unknown` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown` hostinfo = `(hostinfo) 2>/dev/null || echo unknown` /bin/machine = `(/bin/machine) 2>/dev/null || echo unknown` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown` /bin/universe = `(/bin/universe) 2>/dev/null || echo unknown` _ASUNAME as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. echo "PATH: $as_dir" done } >&5 cat >&5 <<_ACEOF ## ----------- ## ## Core tests. ## ## ----------- ## _ACEOF # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. # Strip out --silent because we don't want to record it for future runs. # Also quote any args containing shell meta-characters. # Make two passes to allow for proper duplicate-argument suppression. 
ac_configure_args= ac_configure_args0= ac_configure_args1= ac_sep= ac_must_keep_next=false for ac_pass in 1 2 do for ac_arg do case $ac_arg in -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) continue ;; *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?\"\']*) ac_arg=`echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac case $ac_pass in 1) ac_configure_args0="$ac_configure_args0 '$ac_arg'" ;; 2) ac_configure_args1="$ac_configure_args1 '$ac_arg'" if test $ac_must_keep_next = true; then ac_must_keep_next=false # Got value, back to normal. else case $ac_arg in *=* | --config-cache | -C | -disable-* | --disable-* \ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ | -with-* | --with-* | -without-* | --without-* | --x) case "$ac_configure_args0 " in "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; esac ;; -* ) ac_must_keep_next=true ;; esac fi ac_configure_args="$ac_configure_args$ac_sep'$ac_arg'" # Get rid of the leading space. ac_sep=" " ;; esac done done $as_unset ac_configure_args0 || test "${ac_configure_args0+set}" != set || { ac_configure_args0=; export ac_configure_args0; } $as_unset ac_configure_args1 || test "${ac_configure_args1+set}" != set || { ac_configure_args1=; export ac_configure_args1; } # When interrupted or exit'd, cleanup temporary files, and complete # config.log. We remove comments because anyway the quotes in there # would cause problems or look ugly. # WARNING: Be sure not to use single quotes in there, as some shells, # such as our DU 5.0 friend, will then `close' the trap. trap 'exit_status=$? # Save into config.log some information that might help in debugging. { echo cat <<\_ASBOX ## ---------------- ## ## Cache variables. ## ## ---------------- ## _ASBOX echo # The following way of writing the cache mishandles newlines in values, { (set) 2>&1 | case `(ac_space='"'"' '"'"'; set | grep ac_space) 2>&1` in *ac_space=\ *) sed -n \ "s/'"'"'/'"'"'\\\\'"'"''"'"'/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='"'"'\\2'"'"'/p" ;; *) sed -n \ "s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1=\\2/p" ;; esac; } echo cat <<\_ASBOX ## ----------------- ## ## Output variables. ## ## ----------------- ## _ASBOX echo for ac_var in $ac_subst_vars do eval ac_val=$`echo $ac_var` echo "$ac_var='"'"'$ac_val'"'"'" done | sort echo if test -n "$ac_subst_files"; then cat <<\_ASBOX ## ------------- ## ## Output files. ## ## ------------- ## _ASBOX echo for ac_var in $ac_subst_files do eval ac_val=$`echo $ac_var` echo "$ac_var='"'"'$ac_val'"'"'" done | sort echo fi if test -s confdefs.h; then cat <<\_ASBOX ## ----------- ## ## confdefs.h. ## ## ----------- ## _ASBOX echo sed "/^$/d" confdefs.h | sort echo fi test "$ac_signal" != 0 && echo "$as_me: caught signal $ac_signal" echo "$as_me: exit $exit_status" } >&5 rm -f core *.core && rm -rf conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 for ac_signal in 1 2 13 15; do trap 'ac_signal='$ac_signal'; { (exit 1); exit 1; }' $ac_signal done ac_signal=0 # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -rf conftest* confdefs.h # AIX cpp loses on an empty file, so make sure it contains at least a newline. echo >confdefs.h # Predefined preprocessor variables. 
cat >>confdefs.h <<_ACEOF #define PACKAGE_NAME "$PACKAGE_NAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_TARNAME "$PACKAGE_TARNAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_VERSION "$PACKAGE_VERSION" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_STRING "$PACKAGE_STRING" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_BUGREPORT "$PACKAGE_BUGREPORT" _ACEOF # Let the site file select an alternate cache file if it wants to. # Prefer explicitly selected file to automatically selected ones. if test -z "$CONFIG_SITE"; then if test "x$prefix" != xNONE; then CONFIG_SITE="$prefix/share/config.site $prefix/etc/config.site" else CONFIG_SITE="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site" fi fi for ac_site_file in $CONFIG_SITE; do if test -r "$ac_site_file"; then { echo "$as_me:$LINENO: loading site script $ac_site_file" >&5 echo "$as_me: loading site script $ac_site_file" >&6;} sed 's/^/| /' "$ac_site_file" >&5 . "$ac_site_file" fi done if test -r "$cache_file"; then # Some versions of bash will fail to source /dev/null (special # files actually), so we avoid doing that. if test -f "$cache_file"; then { echo "$as_me:$LINENO: loading cache $cache_file" >&5 echo "$as_me: loading cache $cache_file" >&6;} case $cache_file in [\\/]* | ?:[\\/]* ) . $cache_file;; *) . ./$cache_file;; esac fi else { echo "$as_me:$LINENO: creating cache $cache_file" >&5 echo "$as_me: creating cache $cache_file" >&6;} >$cache_file fi # Check that the precious variables saved in the cache have kept the same # value. ac_cache_corrupted=false for ac_var in `(set) 2>&1 | sed -n 's/^ac_env_\([a-zA-Z_0-9]*\)_set=.*/\1/p'`; do eval ac_old_set=\$ac_cv_env_${ac_var}_set eval ac_new_set=\$ac_env_${ac_var}_set eval ac_old_val="\$ac_cv_env_${ac_var}_value" eval ac_new_val="\$ac_env_${ac_var}_value" case $ac_old_set,$ac_new_set in set,) { echo "$as_me:$LINENO: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5 echo "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;} ac_cache_corrupted=: ;; ,set) { echo "$as_me:$LINENO: error: \`$ac_var' was not set in the previous run" >&5 echo "$as_me: error: \`$ac_var' was not set in the previous run" >&2;} ac_cache_corrupted=: ;; ,);; *) if test "x$ac_old_val" != "x$ac_new_val"; then { echo "$as_me:$LINENO: error: \`$ac_var' has changed since the previous run:" >&5 echo "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;} { echo "$as_me:$LINENO: former value: $ac_old_val" >&5 echo "$as_me: former value: $ac_old_val" >&2;} { echo "$as_me:$LINENO: current value: $ac_new_val" >&5 echo "$as_me: current value: $ac_new_val" >&2;} ac_cache_corrupted=: fi;; esac # Pass precious variables to config.status. if test "$ac_new_set" = set; then case $ac_new_val in *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?\"\']*) ac_arg=$ac_var=`echo "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;; *) ac_arg=$ac_var=$ac_new_val ;; esac case " $ac_configure_args " in *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. 
*) ac_configure_args="$ac_configure_args '$ac_arg'" ;; esac fi done if $ac_cache_corrupted; then { echo "$as_me:$LINENO: error: changes in the environment can compromise the build" >&5 echo "$as_me: error: changes in the environment can compromise the build" >&2;} { { echo "$as_me:$LINENO: error: run \`make distclean' and/or \`rm $cache_file' and start over" >&5 echo "$as_me: error: run \`make distclean' and/or \`rm $cache_file' and start over" >&2;} { (exit 1); exit 1; }; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu echo "$as_me:$LINENO: checking whether to enable maintainer-specific portions of Makefiles" >&5 echo $ECHO_N "checking whether to enable maintainer-specific portions of Makefiles... $ECHO_C" >&6 # Check whether --enable-maintainer-mode or --disable-maintainer-mode was given. if test "${enable_maintainer_mode+set}" = set; then enableval="$enable_maintainer_mode" USE_MAINTAINER_MODE=$enableval else USE_MAINTAINER_MODE=no fi; echo "$as_me:$LINENO: result: $USE_MAINTAINER_MODE" >&5 echo "${ECHO_T}$USE_MAINTAINER_MODE" >&6 if test $USE_MAINTAINER_MODE = yes; then MAINTAINER_MODE_TRUE= MAINTAINER_MODE_FALSE='#' else MAINTAINER_MODE_TRUE='#' MAINTAINER_MODE_FALSE= fi MAINT=$MAINTAINER_MODE_TRUE # Automake setup am__api_version="1.9" ac_aux_dir= for ac_dir in $srcdir $srcdir/.. $srcdir/../..; do if test -f $ac_dir/install-sh; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install-sh -c" break elif test -f $ac_dir/install.sh; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install.sh -c" break elif test -f $ac_dir/shtool; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/shtool install -c" break fi done if test -z "$ac_aux_dir"; then { { echo "$as_me:$LINENO: error: cannot find install-sh or install.sh in $srcdir $srcdir/.. $srcdir/../.." >&5 echo "$as_me: error: cannot find install-sh or install.sh in $srcdir $srcdir/.. $srcdir/../.." >&2;} { (exit 1); exit 1; }; } fi ac_config_guess="$SHELL $ac_aux_dir/config.guess" ac_config_sub="$SHELL $ac_aux_dir/config.sub" ac_configure="$SHELL $ac_aux_dir/configure" # This should be Cygnus configure. # Find a good install program. We prefer a C program (faster), # so one script is as good as another. But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AmigaOS /C/install, which installs bootblocks on floppy discs # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # OS/2's system install, which has a completely different semantic # ./install, which can be erroneously created by make from ./install.sh. echo "$as_me:$LINENO: checking for a BSD-compatible install" >&5 echo $ECHO_N "checking for a BSD-compatible install... $ECHO_C" >&6 if test -z "$INSTALL"; then if test "${ac_cv_path_install+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. # Account for people who put trailing slashes in PATH elements. 
case $as_dir/ in ./ | .// | /cC/* | \ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ ?:\\/os2\\/install\\/* | ?:\\/OS2\\/INSTALL\\/* | \ /usr/ucb/* ) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$ac_prog$ac_exec_ext"; then if test $ac_prog = install && grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. : elif test $ac_prog = install && grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # program-specific install script used by HP pwplus--don't use. : else ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c" break 3 fi fi done done ;; esac done fi if test "${ac_cv_path_install+set}" = set; then INSTALL=$ac_cv_path_install else # As a last resort, use the slow shell script. We don't cache a # path for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the path is relative. INSTALL=$ac_install_sh fi fi echo "$as_me:$LINENO: result: $INSTALL" >&5 echo "${ECHO_T}$INSTALL" >&6 # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' echo "$as_me:$LINENO: checking whether build environment is sane" >&5 echo $ECHO_N "checking whether build environment is sane... $ECHO_C" >&6 # Just in case sleep 1 echo timestamp > conftest.file # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5 echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;} { (exit 1); exit 1; }; } fi test "$2" = conftest.file ) then # Ok. : else { { echo "$as_me:$LINENO: error: newly created file is older than distributed files! Check your system clock" >&5 echo "$as_me: error: newly created file is older than distributed files! Check your system clock" >&2;} { (exit 1); exit 1; }; } fi echo "$as_me:$LINENO: result: yes" >&5 echo "${ECHO_T}yes" >&6 test "$program_prefix" != NONE && program_transform_name="s,^,$program_prefix,;$program_transform_name" # Use a double $ so make ignores it. test "$program_suffix" != NONE && program_transform_name="s,\$,$program_suffix,;$program_transform_name" # Double any \ or $. echo might interpret backslashes. 
# By default was `s,x,x', remove it if useless. cat <<\_ACEOF >conftest.sed s/[\\$]/&&/g;s/;s,x,x,$// _ACEOF program_transform_name=`echo $program_transform_name | sed -f conftest.sed` rm conftest.sed # expand $ac_aux_dir to an absolute path am_aux_dir=`cd $ac_aux_dir && pwd` test x"${MISSING+set}" = xset || MISSING="\${SHELL} $am_aux_dir/missing" # Use eval to expand $SHELL if eval "$MISSING --run true"; then am_missing_run="$MISSING --run " else am_missing_run= { echo "$as_me:$LINENO: WARNING: \`missing' script is too old or missing" >&5 echo "$as_me: WARNING: \`missing' script is too old or missing" >&2;} fi if mkdir -p --version . >/dev/null 2>&1 && test ! -d ./--version; then # We used to keeping the `.' as first argument, in order to # allow $(mkdir_p) to be used without argument. As in # $(mkdir_p) $(somedir) # where $(somedir) is conditionally defined. However this is wrong # for two reasons: # 1. if the package is installed by a user who cannot write `.' # make install will fail, # 2. the above comment should most certainly read # $(mkdir_p) $(DESTDIR)$(somedir) # so it does not work when $(somedir) is undefined and # $(DESTDIR) is not. # To support the latter case, we have to write # test -z "$(somedir)" || $(mkdir_p) $(DESTDIR)$(somedir), # so the `.' trick is pointless. mkdir_p='mkdir -p --' else # On NextStep and OpenStep, the `mkdir' command does not # recognize any option. It will interpret all options as # directories to create, and then abort because `.' already # exists. for d in ./-p ./--version; do test -d $d && rmdir $d done # $(mkinstalldirs) is defined by Automake if mkinstalldirs exists. if test -f "$ac_aux_dir/mkinstalldirs"; then mkdir_p='$(mkinstalldirs)' else mkdir_p='$(install_sh) -d' fi fi for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 echo "$as_me:$LINENO: checking for $ac_word" >&5 echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6 if test "${ac_cv_prog_AWK+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_AWK="$ac_prog" echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done fi fi AWK=$ac_cv_prog_AWK if test -n "$AWK"; then echo "$as_me:$LINENO: result: $AWK" >&5 echo "${ECHO_T}$AWK" >&6 else echo "$as_me:$LINENO: result: no" >&5 echo "${ECHO_T}no" >&6 fi test -n "$AWK" && break done echo "$as_me:$LINENO: checking whether ${MAKE-make} sets \$(MAKE)" >&5 echo $ECHO_N "checking whether ${MAKE-make} sets \$(MAKE)... $ECHO_C" >&6 set dummy ${MAKE-make}; ac_make=`echo "$2" | sed 'y,:./+-,___p_,'` if eval "test \"\${ac_cv_prog_make_${ac_make}_set+set}\" = set"; then echo $ECHO_N "(cached) $ECHO_C" >&6 else cat >conftest.make <<\_ACEOF all: @echo 'ac_maketemp="$(MAKE)"' _ACEOF # GNU make sometimes prints "make[1]: Entering...", which would confuse us. 
eval `${MAKE-make} -f conftest.make 2>/dev/null | grep temp=` if test -n "$ac_maketemp"; then eval ac_cv_prog_make_${ac_make}_set=yes else eval ac_cv_prog_make_${ac_make}_set=no fi rm -f conftest.make fi if eval "test \"`echo '$ac_cv_prog_make_'${ac_make}_set`\" = yes"; then echo "$as_me:$LINENO: result: yes" >&5 echo "${ECHO_T}yes" >&6 SET_MAKE= else echo "$as_me:$LINENO: result: no" >&5 echo "${ECHO_T}no" >&6 SET_MAKE="MAKE=${MAKE-make}" fi rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null # test to see if srcdir already configured if test "`cd $srcdir && pwd`" != "`pwd`" && test -f $srcdir/config.status; then { { echo "$as_me:$LINENO: error: source directory already configured; run \"make distclean\" there first" >&5 echo "$as_me: error: source directory already configured; run \"make distclean\" there first" >&2;} { (exit 1); exit 1; }; } fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi # Define the identity of the package. PACKAGE='make' VERSION='3.81' cat >>confdefs.h <<_ACEOF #define PACKAGE "$PACKAGE" _ACEOF cat >>confdefs.h <<_ACEOF #define VERSION "$VERSION" _ACEOF # Some tools Automake needs. ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"} AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"} AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"} AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"} MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} install_sh=${install_sh-"$am_aux_dir/install-sh"} # Installed binaries are usually stripped using `strip' when the user # run `make install-strip'. However `strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the `STRIP' environment variable to overrule this program. if test "$cross_compiling" != no; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 echo "$as_me:$LINENO: checking for $ac_word" >&5 echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6 if test "${ac_cv_prog_STRIP+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then echo "$as_me:$LINENO: result: $STRIP" >&5 echo "${ECHO_T}$STRIP" >&6 else echo "$as_me:$LINENO: result: no" >&5 echo "${ECHO_T}no" >&6 fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 echo "$as_me:$LINENO: checking for $ac_word" >&5 echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6 if test "${ac_cv_prog_ac_ct_STRIP+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_ac_ct_STRIP="strip" echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done test -z "$ac_cv_prog_ac_ct_STRIP" && ac_cv_prog_ac_ct_STRIP=":" fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then echo "$as_me:$LINENO: result: $ac_ct_STRIP" >&5 echo "${ECHO_T}$ac_ct_STRIP" >&6 else echo "$as_me:$LINENO: result: no" >&5 echo "${ECHO_T}no" >&6 fi STRIP=$ac_ct_STRIP else STRIP="$ac_cv_prog_STRIP" fi fi INSTALL_STRIP_PROGRAM="\${SHELL} \$(install_sh) -c -s" # We need awk for the "check" target. The system "awk" is bad on # some platforms. # Always define AMTAR for backward compatibility. AMTAR=${AMTAR-"${am_missing_run}tar"} am__tar='${AMTAR} chof - "$$tardir"'; am__untar='${AMTAR} xf -' # Checks for programs. # Find a good install program. We prefer a C program (faster), # so one script is as good as another. But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AmigaOS /C/install, which installs bootblocks on floppy discs # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # OS/2's system install, which has a completely different semantic # ./install, which can be erroneously created by make from ./install.sh. echo "$as_me:$LINENO: checking for a BSD-compatible install" >&5 echo $ECHO_N "checking for a BSD-compatible install... $ECHO_C" >&6 if test -z "$INSTALL"; then if test "${ac_cv_path_install+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. # Account for people who put trailing slashes in PATH elements. case $as_dir/ in ./ | .// | /cC/* | \ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ ?:\\/os2\\/install\\/* | ?:\\/OS2\\/INSTALL\\/* | \ /usr/ucb/* ) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$ac_prog$ac_exec_ext"; then if test $ac_prog = install && grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. : elif test $ac_prog = install && grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # program-specific install script used by HP pwplus--don't use. : else ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c" break 3 fi fi done done ;; esac done fi if test "${ac_cv_path_install+set}" = set; then INSTALL=$ac_cv_path_install else # As a last resort, use the slow shell script. We don't cache a # path for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the path is relative. INSTALL=$ac_install_sh fi fi echo "$as_me:$LINENO: result: $INSTALL" >&5 echo "${ECHO_T}$INSTALL" >&6 # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. 
test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' # Perl is needed for the test suite (only) # Extract the first word of "perl", so it can be a program name with args. set dummy perl; ac_word=$2 echo "$as_me:$LINENO: checking for $ac_word" >&5 echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6 if test "${ac_cv_prog_PERL+set}" = set; then echo $ECHO_N "(cached) $ECHO_C" >&6 else if test -n "$PERL"; then ac_cv_prog_PERL="$PERL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then ac_cv_prog_PERL="perl" echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done test -z "$ac_cv_prog_PERL" && ac_cv_prog_PERL="perl" fi fi PERL=$ac_cv_prog_PERL if test -n "$PERL"; then echo "$as_me:$LINENO: result: $PERL" >&5 echo "${ECHO_T}$PERL" >&6 else echo "$as_me:$LINENO: result: no" >&5 echo "${ECHO_T}no" >&6 fi # Specify what files are to be created. ac_config_files="$ac_config_files Makefile doc/Makefile" # OK, do it! cat >confcache <<\_ACEOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs, see configure's option --config-cache. # It is not useful on other systems. If it contains results you don't # want to keep, you may remove or edit it. # # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. # # `ac_cv_env_foo' variables (set or unset) will be overridden when # loading this file, other *unset* `ac_cv_foo' will be assigned the # following values. _ACEOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, don't put newlines in cache variables' values. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. { (set) 2>&1 | case `(ac_space=' '; set | grep ac_space) 2>&1` in *ac_space=\ *) # `set' does not quote correctly, so add quotes (double-quote # substitution turns \\\\ into \\, and sed turns \\ into \). sed -n \ "s/'/'\\\\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p" ;; *) # `set' quotes correctly as required by POSIX, so do not add quotes. sed -n \ "s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1=\\2/p" ;; esac; } | sed ' t clear : clear s/^\([^=]*\)=\(.*[{}].*\)$/test "${\1+set}" = set || &/ t end /^ac_cv_env/!s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ : end' >>confcache if diff $cache_file confcache >/dev/null 2>&1; then :; else if test -w $cache_file; then test "x$cache_file" != "x/dev/null" && echo "updating cache $cache_file" cat confcache >$cache_file else echo "not updating unwritable cache $cache_file" fi fi rm -f confcache test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' # VPATH may cause trouble with some makes, so we remove $(srcdir), # ${srcdir} and @srcdir@ from VPATH if srcdir is ".", strip leading and # trailing colons and then remove the whole line if VPATH becomes empty # (actually we leave an empty line to preserve line numbers). 
if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=/{ s/:*\$(srcdir):*/:/; s/:*\${srcdir}:*/:/; s/:*@srcdir@:*/:/; s/^\([^=]*=[ ]*\):*/\1/; s/:*$//; s/^[^=]*=[ ]*$//; }' fi # Transform confdefs.h into DEFS. # Protect against shell expansion while executing Makefile rules. # Protect against Makefile macro expansion. # # If the first sed substitution is executed (which looks for macros that # take arguments), then we branch to the quote section. Otherwise, # look for a macro that doesn't take arguments. cat >confdef2opt.sed <<\_ACEOF t clear : clear s,^[ ]*#[ ]*define[ ][ ]*\([^ (][^ (]*([^)]*)\)[ ]*\(.*\),-D\1=\2,g t quote s,^[ ]*#[ ]*define[ ][ ]*\([^ ][^ ]*\)[ ]*\(.*\),-D\1=\2,g t quote d : quote s,[ `~#$^&*(){}\\|;'"<>?],\\&,g s,\[,\\&,g s,\],\\&,g s,\$,$$,g p _ACEOF # We use echo to avoid assuming a particular line-breaking character. # The extra dot is to prevent the shell from consuming trailing # line-breaks from the sub-command output. A line-break within # single-quotes doesn't work because, if this script is created in a # platform that uses two characters for line-breaks (e.g., DOS), tr # would break. ac_LF_and_DOT=`echo; echo .` DEFS=`sed -n -f confdef2opt.sed confdefs.h | tr "$ac_LF_and_DOT" ' .'` rm -f confdef2opt.sed ac_libobjs= ac_ltlibobjs= for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue # 1. Remove the extension, and $U if already installed. ac_i=`echo "$ac_i" | sed 's/\$U\././;s/\.o$//;s/\.obj$//'` # 2. Add them. ac_libobjs="$ac_libobjs $ac_i\$U.$ac_objext" ac_ltlibobjs="$ac_ltlibobjs $ac_i"'$U.lo' done LIBOBJS=$ac_libobjs LTLIBOBJS=$ac_ltlibobjs if test -z "${MAINTAINER_MODE_TRUE}" && test -z "${MAINTAINER_MODE_FALSE}"; then { { echo "$as_me:$LINENO: error: conditional \"MAINTAINER_MODE\" was never defined. Usually this means the macro was only invoked conditionally." >&5 echo "$as_me: error: conditional \"MAINTAINER_MODE\" was never defined. Usually this means the macro was only invoked conditionally." >&2;} { (exit 1); exit 1; }; } fi : ${CONFIG_STATUS=./config.status} ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files $CONFIG_STATUS" { echo "$as_me:$LINENO: creating $CONFIG_STATUS" >&5 echo "$as_me: creating $CONFIG_STATUS" >&6;} cat >$CONFIG_STATUS <<_ACEOF #! $SHELL # Generated by $as_me. # Run this file to recreate the current configuration. # Compiler output produced by configure, useful for debugging # configure, is in config.log if it exists. debug=false ac_cs_recheck=false ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF ## --------------------- ## ## M4sh Initialization. ## ## --------------------- ## # Be Bourne compatible if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' elif test -n "${BASH_VERSION+set}" && (set -o posix) >/dev/null 2>&1; then set -o posix fi DUALCASE=1; export DUALCASE # for MKS sh # Support unset when possible. if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then as_unset=unset else as_unset=false fi # Work around bugs in pre-3.0 UWIN ksh. $as_unset ENV MAIL MAILPATH PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. 
for as_var in \ LANG LANGUAGE LC_ADDRESS LC_ALL LC_COLLATE LC_CTYPE LC_IDENTIFICATION \ LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER \ LC_TELEPHONE LC_TIME do if (set +x; test -z "`(eval $as_var=C; export $as_var) 2>&1`"); then eval $as_var=C; export $as_var else $as_unset $as_var fi done # Required to use basename. if expr a : '\(a\)' >/dev/null 2>&1; then as_expr=expr else as_expr=false fi if (basename /) >/dev/null 2>&1 && test "X`basename / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi # Name of the executable. as_me=`$as_basename "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)$' \| \ . : '\(.\)' 2>/dev/null || echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/; q; } /^X\/\(\/\/\)$/{ s//\1/; q; } /^X\/\(\/\).*/{ s//\1/; q; } s/.*/./; q'` # PATH needs CR, and LINENO needs CR and PATH. # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then echo "#! /bin/sh" >conf$$.sh echo "exit 0" >>conf$$.sh chmod +x conf$$.sh if (PATH="/nonexistent;."; conf$$.sh) >/dev/null 2>&1; then PATH_SEPARATOR=';' else PATH_SEPARATOR=: fi rm -f conf$$.sh fi as_lineno_1=$LINENO as_lineno_2=$LINENO as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" || { # Find who we are. Look in the path if we contain no path at all # relative or not. case $0 in *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then { { echo "$as_me:$LINENO: error: cannot find myself; rerun with an absolute path" >&5 echo "$as_me: error: cannot find myself; rerun with an absolute path" >&2;} { (exit 1); exit 1; }; } fi case $CONFIG_SHELL in '') as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for as_base in sh bash ksh sh5; do case $as_dir in /*) if ("$as_dir/$as_base" -c ' as_lineno_1=$LINENO as_lineno_2=$LINENO as_lineno_3=`(expr $as_lineno_1 + 1) 2>/dev/null` test "x$as_lineno_1" != "x$as_lineno_2" && test "x$as_lineno_3" = "x$as_lineno_2" ') 2>/dev/null; then $as_unset BASH_ENV || test "${BASH_ENV+set}" != set || { BASH_ENV=; export BASH_ENV; } $as_unset ENV || test "${ENV+set}" != set || { ENV=; export ENV; } CONFIG_SHELL=$as_dir/$as_base export CONFIG_SHELL exec "$CONFIG_SHELL" "$0" ${1+"$@"} fi;; esac done done ;; esac # Create $as_me.lineno as a copy of $as_myself, but with $LINENO # uniformly replaced by the line number. The first 'sed' inserts a # line-number line before each line; the second 'sed' does the real # work. The second script uses 'N' to pair each line-number line # with the numbered line, and appends trailing '-' during # substitution so that $LINENO is not a special case at line end. # (Raja R Harinath suggested sed '=', and Paul Eggert wrote the # second 'sed' script. Blame Lee E. McMahon for sed's syntax. 
:-) sed '=' <$as_myself | sed ' N s,$,-, : loop s,^\(['$as_cr_digits']*\)\(.*\)[$]LINENO\([^'$as_cr_alnum'_]\),\1\2\1\3, t loop s,-$,, s,^['$as_cr_digits']*\n,, ' >$as_me.lineno && chmod +x $as_me.lineno || { { echo "$as_me:$LINENO: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&5 echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2;} { (exit 1); exit 1; }; } # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensible to this). . ./$as_me.lineno # Exit status is that of the last command. exit } case `echo "testing\c"; echo 1,2,3`,`echo -n testing; echo 1,2,3` in *c*,-n*) ECHO_N= ECHO_C=' ' ECHO_T=' ' ;; *c*,* ) ECHO_N=-n ECHO_C= ECHO_T= ;; *) ECHO_N= ECHO_C='\c' ECHO_T= ;; esac if expr a : '\(a\)' >/dev/null 2>&1; then as_expr=expr else as_expr=false fi rm -f conf$$ conf$$.exe conf$$.file echo >conf$$.file if ln -s conf$$.file conf$$ 2>/dev/null; then # We could just check for DJGPP; but this test a) works b) is more generic # and c) will remain valid once DJGPP supports symlinks (DJGPP 2.04). if test -f conf$$.exe; then # Don't use ln at all; we don't have any links as_ln_s='cp -p' else as_ln_s='ln -s' fi elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -p' fi rm -f conf$$ conf$$.exe conf$$.file if mkdir -p . 2>/dev/null; then as_mkdir_p=: else test -d ./-p && rmdir ./-p as_mkdir_p=false fi as_executable_p="test -f" # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" # IFS # We need space, tab and new line, in precisely that order. as_nl=' ' IFS=" $as_nl" # CDPATH. $as_unset CDPATH exec 6>&1 # Open the log real soon, to keep \$[0] and so on meaningful, and to # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. Logging --version etc. is OK. exec 5>>config.log { echo sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX ## Running $as_me. ## _ASBOX } >&5 cat >&5 <<_CSEOF This file was extended by GNU make $as_me 3.81, which was generated by GNU Autoconf 2.59. Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS CONFIG_LINKS = $CONFIG_LINKS CONFIG_COMMANDS = $CONFIG_COMMANDS $ $0 $@ _CSEOF echo "on `(hostname || uname -n) 2>/dev/null | sed 1q`" >&5 echo >&5 _ACEOF # Files that config.status was made for. if test -n "$ac_config_files"; then echo "config_files=\"$ac_config_files\"" >>$CONFIG_STATUS fi if test -n "$ac_config_headers"; then echo "config_headers=\"$ac_config_headers\"" >>$CONFIG_STATUS fi if test -n "$ac_config_links"; then echo "config_links=\"$ac_config_links\"" >>$CONFIG_STATUS fi if test -n "$ac_config_commands"; then echo "config_commands=\"$ac_config_commands\"" >>$CONFIG_STATUS fi cat >>$CONFIG_STATUS <<\_ACEOF ac_cs_usage="\ \`$as_me' instantiates files from templates according to the current configuration. Usage: $0 [OPTIONS] [FILE]... -h, --help print this help, then exit -V, --version print version number, then exit -q, --quiet do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] instantiate the configuration file FILE Configuration files: $config_files Report bugs to ." 
_ACEOF cat >>$CONFIG_STATUS <<_ACEOF ac_cs_version="\\ GNU make config.status 3.81 configured by $0, generated by GNU Autoconf 2.59, with options \\"`echo "$ac_configure_args" | sed 's/[\\""\`\$]/\\\\&/g'`\\" Copyright (C) 2003 Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." srcdir=$srcdir INSTALL="$INSTALL" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF # If no file are specified by the user, then we need to provide default # value. By we need to know if files were specified by the user. ac_need_defaults=: while test $# != 0 do case $1 in --*=*) ac_option=`expr "x$1" : 'x\([^=]*\)='` ac_optarg=`expr "x$1" : 'x[^=]*=\(.*\)'` ac_shift=: ;; -*) ac_option=$1 ac_optarg=$2 ac_shift=shift ;; *) # This is not an option, so the user has probably given explicit # arguments. ac_option=$1 ac_need_defaults=false;; esac case $ac_option in # Handling of the options. _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) ac_cs_recheck=: ;; --version | --vers* | -V ) echo "$ac_cs_version"; exit 0 ;; --he | --h) # Conflict between --help and --header { { echo "$as_me:$LINENO: error: ambiguous option: $1 Try \`$0 --help' for more information." >&5 echo "$as_me: error: ambiguous option: $1 Try \`$0 --help' for more information." >&2;} { (exit 1); exit 1; }; };; --help | --hel | -h ) echo "$ac_cs_usage"; exit 0 ;; --debug | --d* | -d ) debug=: ;; --file | --fil | --fi | --f ) $ac_shift CONFIG_FILES="$CONFIG_FILES $ac_optarg" ac_need_defaults=false;; --header | --heade | --head | --hea ) $ac_shift CONFIG_HEADERS="$CONFIG_HEADERS $ac_optarg" ac_need_defaults=false;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil | --si | --s) ac_cs_silent=: ;; # This is an error. -*) { { echo "$as_me:$LINENO: error: unrecognized option: $1 Try \`$0 --help' for more information." >&5 echo "$as_me: error: unrecognized option: $1 Try \`$0 --help' for more information." >&2;} { (exit 1); exit 1; }; } ;; *) ac_config_targets="$ac_config_targets $1" ;; esac shift done ac_configure_extra_args= if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF if \$ac_cs_recheck; then echo "running $SHELL $0 " $ac_configure_args \$ac_configure_extra_args " --no-create --no-recursion" >&6 exec $SHELL $0 $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion fi _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF for ac_config_target in $ac_config_targets do case "$ac_config_target" in # Handling of arguments. "Makefile" ) CONFIG_FILES="$CONFIG_FILES Makefile" ;; "doc/Makefile" ) CONFIG_FILES="$CONFIG_FILES doc/Makefile" ;; *) { { echo "$as_me:$LINENO: error: invalid argument: $ac_config_target" >&5 echo "$as_me: error: invalid argument: $ac_config_target" >&2;} { (exit 1); exit 1; }; };; esac done # If the user did not use the arguments to specify the items to instantiate, # then the envvar interface is used. Set only those that are not. # We use the long form for the default assignment because of an extremely # bizarre bug on SunOS 4.1.3. if $ac_need_defaults; then test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files fi # Have a temporary directory for convenience. Make it in the build tree # simply because there is no reason to put it here, and in addition, # creating and moving files from /tmp can sometimes cause problems. 
# Create a temporary directory, and hook for its removal unless debugging. $debug || { trap 'exit_status=$?; rm -rf $tmp && exit $exit_status' 0 trap '{ (exit 1); exit 1; }' 1 2 13 15 } # Create a (secure) tmp directory for tmp files. { tmp=`(umask 077 && mktemp -d -q "./confstatXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" } || { tmp=./confstat$$-$RANDOM (umask 077 && mkdir $tmp) } || { echo "$me: cannot create a temporary directory in ." >&2 { (exit 1); exit 1; } } _ACEOF cat >>$CONFIG_STATUS <<_ACEOF # # CONFIG_FILES section. # # No need to generate the scripts if there are no CONFIG_FILES. # This happens for instance when ./config.status config.h if test -n "\$CONFIG_FILES"; then # Protect against being on the right side of a sed subst in config.status. sed 's/,@/@@/; s/@,/@@/; s/,;t t\$/@;t t/; /@;t t\$/s/[\\\\&,]/\\\\&/g; s/@@/,@/; s/@@/@,/; s/@;t t\$/,;t t/' >\$tmp/subs.sed <<\\CEOF s,@SHELL@,$SHELL,;t t s,@PATH_SEPARATOR@,$PATH_SEPARATOR,;t t s,@PACKAGE_NAME@,$PACKAGE_NAME,;t t s,@PACKAGE_TARNAME@,$PACKAGE_TARNAME,;t t s,@PACKAGE_VERSION@,$PACKAGE_VERSION,;t t s,@PACKAGE_STRING@,$PACKAGE_STRING,;t t s,@PACKAGE_BUGREPORT@,$PACKAGE_BUGREPORT,;t t s,@exec_prefix@,$exec_prefix,;t t s,@prefix@,$prefix,;t t s,@program_transform_name@,$program_transform_name,;t t s,@bindir@,$bindir,;t t s,@sbindir@,$sbindir,;t t s,@libexecdir@,$libexecdir,;t t s,@datadir@,$datadir,;t t s,@sysconfdir@,$sysconfdir,;t t s,@sharedstatedir@,$sharedstatedir,;t t s,@localstatedir@,$localstatedir,;t t s,@libdir@,$libdir,;t t s,@includedir@,$includedir,;t t s,@oldincludedir@,$oldincludedir,;t t s,@infodir@,$infodir,;t t s,@mandir@,$mandir,;t t s,@build_alias@,$build_alias,;t t s,@host_alias@,$host_alias,;t t s,@target_alias@,$target_alias,;t t s,@DEFS@,$DEFS,;t t s,@ECHO_C@,$ECHO_C,;t t s,@ECHO_N@,$ECHO_N,;t t s,@ECHO_T@,$ECHO_T,;t t s,@LIBS@,$LIBS,;t t s,@MAINTAINER_MODE_TRUE@,$MAINTAINER_MODE_TRUE,;t t s,@MAINTAINER_MODE_FALSE@,$MAINTAINER_MODE_FALSE,;t t s,@MAINT@,$MAINT,;t t s,@INSTALL_PROGRAM@,$INSTALL_PROGRAM,;t t s,@INSTALL_SCRIPT@,$INSTALL_SCRIPT,;t t s,@INSTALL_DATA@,$INSTALL_DATA,;t t s,@CYGPATH_W@,$CYGPATH_W,;t t s,@PACKAGE@,$PACKAGE,;t t s,@VERSION@,$VERSION,;t t s,@ACLOCAL@,$ACLOCAL,;t t s,@AUTOCONF@,$AUTOCONF,;t t s,@AUTOMAKE@,$AUTOMAKE,;t t s,@AUTOHEADER@,$AUTOHEADER,;t t s,@MAKEINFO@,$MAKEINFO,;t t s,@install_sh@,$install_sh,;t t s,@STRIP@,$STRIP,;t t s,@ac_ct_STRIP@,$ac_ct_STRIP,;t t s,@INSTALL_STRIP_PROGRAM@,$INSTALL_STRIP_PROGRAM,;t t s,@mkdir_p@,$mkdir_p,;t t s,@AWK@,$AWK,;t t s,@SET_MAKE@,$SET_MAKE,;t t s,@am__leading_dot@,$am__leading_dot,;t t s,@AMTAR@,$AMTAR,;t t s,@am__tar@,$am__tar,;t t s,@am__untar@,$am__untar,;t t s,@PERL@,$PERL,;t t s,@LIBOBJS@,$LIBOBJS,;t t s,@LTLIBOBJS@,$LTLIBOBJS,;t t CEOF _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF # Split the substitutions into bite-sized pieces for seds with # small command number limits, like on Digital OSF/1 and HP-UX. ac_max_sed_lines=48 ac_sed_frag=1 # Number of current file. ac_beg=1 # First line for current file. ac_end=$ac_max_sed_lines # Line after last line for current file. ac_more_lines=: ac_sed_cmds= while $ac_more_lines; do if test $ac_beg -gt 1; then sed "1,${ac_beg}d; ${ac_end}q" $tmp/subs.sed >$tmp/subs.frag else sed "${ac_end}q" $tmp/subs.sed >$tmp/subs.frag fi if test ! -s $tmp/subs.frag; then ac_more_lines=false else # The purpose of the label and of the branching condition is to # speed up the sed processing (if there are no `@' at all, there # is no need to browse any of the substitutions). 
# These are the two extra sed commands mentioned above. (echo ':t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b' && cat $tmp/subs.frag) >$tmp/subs-$ac_sed_frag.sed if test -z "$ac_sed_cmds"; then ac_sed_cmds="sed -f $tmp/subs-$ac_sed_frag.sed" else ac_sed_cmds="$ac_sed_cmds | sed -f $tmp/subs-$ac_sed_frag.sed" fi ac_sed_frag=`expr $ac_sed_frag + 1` ac_beg=$ac_end ac_end=`expr $ac_end + $ac_max_sed_lines` fi done if test -z "$ac_sed_cmds"; then ac_sed_cmds=cat fi fi # test -n "$CONFIG_FILES" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF for ac_file in : $CONFIG_FILES; do test "x$ac_file" = x: && continue # Support "outfile[:infile[:infile...]]", defaulting infile="outfile.in". case $ac_file in - | *:- | *:-:* ) # input from stdin cat >$tmp/stdin ac_file_in=`echo "$ac_file" | sed 's,[^:]*:,,'` ac_file=`echo "$ac_file" | sed 's,:.*,,'` ;; *:* ) ac_file_in=`echo "$ac_file" | sed 's,[^:]*:,,'` ac_file=`echo "$ac_file" | sed 's,:.*,,'` ;; * ) ac_file_in=$ac_file.in ;; esac # Compute @srcdir@, @top_srcdir@, and @INSTALL@ for subdirectories. ac_dir=`(dirname "$ac_file") 2>/dev/null || $as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$ac_file" : 'X\(//\)[^/]' \| \ X"$ac_file" : 'X\(//\)$' \| \ X"$ac_file" : 'X\(/\)' \| \ . : '\(.\)' 2>/dev/null || echo X"$ac_file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; } /^X\(\/\/\)[^/].*/{ s//\1/; q; } /^X\(\/\/\)$/{ s//\1/; q; } /^X\(\/\).*/{ s//\1/; q; } s/.*/./; q'` { if $as_mkdir_p; then mkdir -p "$ac_dir" else as_dir="$ac_dir" as_dirs= while test ! -d "$as_dir"; do as_dirs="$as_dir $as_dirs" as_dir=`(dirname "$as_dir") 2>/dev/null || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| \ . : '\(.\)' 2>/dev/null || echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/; q; } /^X\(\/\/\)[^/].*/{ s//\1/; q; } /^X\(\/\/\)$/{ s//\1/; q; } /^X\(\/\).*/{ s//\1/; q; } s/.*/./; q'` done test ! -n "$as_dirs" || mkdir $as_dirs fi || { { echo "$as_me:$LINENO: error: cannot create directory \"$ac_dir\"" >&5 echo "$as_me: error: cannot create directory \"$ac_dir\"" >&2;} { (exit 1); exit 1; }; }; } ac_builddir=. if test "$ac_dir" != .; then ac_dir_suffix=/`echo "$ac_dir" | sed 's,^\.[\\/],,'` # A "../" for each directory in $ac_dir_suffix. ac_top_builddir=`echo "$ac_dir_suffix" | sed 's,/[^\\/]*,../,g'` else ac_dir_suffix= ac_top_builddir= fi case $srcdir in .) # No --srcdir option. We are building in place. ac_srcdir=. if test -z "$ac_top_builddir"; then ac_top_srcdir=. else ac_top_srcdir=`echo $ac_top_builddir | sed 's,/$,,'` fi ;; [\\/]* | ?:[\\/]* ) # Absolute path. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ;; *) # Relative path. ac_srcdir=$ac_top_builddir$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_builddir$srcdir ;; esac # Do not use `cd foo && pwd` to compute absolute paths, because # the directories may not exist. case `pwd` in .) ac_abs_builddir="$ac_dir";; *) case "$ac_dir" in .) ac_abs_builddir=`pwd`;; [\\/]* | ?:[\\/]* ) ac_abs_builddir="$ac_dir";; *) ac_abs_builddir=`pwd`/"$ac_dir";; esac;; esac case $ac_abs_builddir in .) ac_abs_top_builddir=${ac_top_builddir}.;; *) case ${ac_top_builddir}. in .) ac_abs_top_builddir=$ac_abs_builddir;; [\\/]* | ?:[\\/]* ) ac_abs_top_builddir=${ac_top_builddir}.;; *) ac_abs_top_builddir=$ac_abs_builddir/${ac_top_builddir}.;; esac;; esac case $ac_abs_builddir in .) ac_abs_srcdir=$ac_srcdir;; *) case $ac_srcdir in .) 
ac_abs_srcdir=$ac_abs_builddir;; [\\/]* | ?:[\\/]* ) ac_abs_srcdir=$ac_srcdir;; *) ac_abs_srcdir=$ac_abs_builddir/$ac_srcdir;; esac;; esac case $ac_abs_builddir in .) ac_abs_top_srcdir=$ac_top_srcdir;; *) case $ac_top_srcdir in .) ac_abs_top_srcdir=$ac_abs_builddir;; [\\/]* | ?:[\\/]* ) ac_abs_top_srcdir=$ac_top_srcdir;; *) ac_abs_top_srcdir=$ac_abs_builddir/$ac_top_srcdir;; esac;; esac case $INSTALL in [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;; *) ac_INSTALL=$ac_top_builddir$INSTALL ;; esac # Let's still pretend it is `configure' which instantiates (i.e., don't # use $as_me), people would be surprised to read: # /* config.h. Generated by config.status. */ if test x"$ac_file" = x-; then configure_input= else configure_input="$ac_file. " fi configure_input=$configure_input"Generated from `echo $ac_file_in | sed 's,.*/,,'` by configure." # First look for the input files in the build tree, otherwise in the # src tree. ac_file_inputs=`IFS=: for f in $ac_file_in; do case $f in -) echo $tmp/stdin ;; [\\/$]*) # Absolute (can't be DOS-style, as IFS=:) test -f "$f" || { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5 echo "$as_me: error: cannot find input file: $f" >&2;} { (exit 1); exit 1; }; } echo "$f";; *) # Relative if test -f "$f"; then # Build tree echo "$f" elif test -f "$srcdir/$f"; then # Source tree echo "$srcdir/$f" else # /dev/null tree { { echo "$as_me:$LINENO: error: cannot find input file: $f" >&5 echo "$as_me: error: cannot find input file: $f" >&2;} { (exit 1); exit 1; }; } fi;; esac done` || { (exit 1); exit 1; } if test x"$ac_file" != x-; then { echo "$as_me:$LINENO: creating $ac_file" >&5 echo "$as_me: creating $ac_file" >&6;} rm -f "$ac_file" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF sed "$ac_vpsub $extrasub _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF :t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b s,@configure_input@,$configure_input,;t t s,@srcdir@,$ac_srcdir,;t t s,@abs_srcdir@,$ac_abs_srcdir,;t t s,@top_srcdir@,$ac_top_srcdir,;t t s,@abs_top_srcdir@,$ac_abs_top_srcdir,;t t s,@builddir@,$ac_builddir,;t t s,@abs_builddir@,$ac_abs_builddir,;t t s,@top_builddir@,$ac_top_builddir,;t t s,@abs_top_builddir@,$ac_abs_top_builddir,;t t s,@INSTALL@,$ac_INSTALL,;t t " $ac_file_inputs | (eval "$ac_sed_cmds") >$tmp/out rm -f $tmp/stdin if test x"$ac_file" != x-; then mv $tmp/out $ac_file else cat $tmp/out rm -f $tmp/out fi done _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF { (exit 0); exit 0; } _ACEOF chmod +x $CONFIG_STATUS ac_clean_files=$ac_clean_files_save # configure is writing to config.log, and then calls config.status. # config.status does its own redirection, appending to config.log. # Unfortunately, on DOS this fails, as config.log is still kept open # by configure, so config.status won't be able to write to it; its # output is simply discarded. So we exec the FD to /dev/null, # effectively closing config.log, so it can be properly (re)opened and # appended to by config.status. When coming back to configure, we # need to make the FD available again. if test "$no_create" != yes; then ac_cs_success=: ac_config_status_args= test "$silent" = yes && ac_config_status_args="$ac_config_status_args --quiet" exec 5>/dev/null $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false exec 5>>config.log # Use ||, not &&, to avoid exiting from the if with $? = 1, which # would make configure fail if this is the last instruction. 
$ac_cs_success || { (exit 1); exit 1; } fi make-doc-non-dfsg-3.81.orig/configure.in0000644000175000017500000000276210416557457020405 0ustar srivastasrivasta# Process this file with autoconf to produce a configure script. # # Copyright (C) 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, # 2003, 2004, 2005, 2006 Free Software Foundation, Inc. # This file is part of GNU Make. # # GNU Make is free software; you can redistribute it and/or modify it under the # terms of the GNU General Public License as published by the Free Software # Foundation; either version 2, or (at your option) any later version. # # GNU Make is distributed in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR # A PARTICULAR PURPOSE. See the GNU General Public License for more details. # # You should have received a copy of the GNU General Public License along with # GNU Make; see the file COPYING. If not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. AC_INIT([GNU make],[3.81],[bug-make@gnu.org]) AC_PREREQ(2.59) AC_REVISION([[$Id: configure.in,v 1.142 2006/04/01 06:36:40 psmith Exp $]]) AM_MAINTAINER_MODE # Automake setup AM_INIT_AUTOMAKE([1.9]) # Checks for programs. AC_PROG_INSTALL # Perl is needed for the test suite (only) AC_CHECK_PROG(PERL, perl, perl, perl) # Specify what files are to be created. AC_CONFIG_FILES(Makefile doc/Makefile) # OK, do it! AC_OUTPUT dnl Local Variables: dnl comment-start: "dnl " dnl comment-end: "" dnl comment-start-skip: "\\bdnl\\b\\s *" dnl compile-command: "make configure config.h.in" dnl End: make-doc-non-dfsg-3.81.orig/doc/0000755000175000017500000000000010416557457016632 5ustar srivastasrivastamake-doc-non-dfsg-3.81.orig/doc/Makefile.am0000644000175000017500000000233210416557457020666 0ustar srivastasrivasta# -*-Makefile-*-, or close enough # Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software # Foundation, Inc. # This file is part of GNU Make. # # GNU Make is free software; you can redistribute it and/or modify it under the # terms of the GNU General Public License as published by the Free Software # Foundation; either version 2, or (at your option) any later version. # # GNU Make is distributed in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR # A PARTICULAR PURPOSE. See the GNU General Public License for more details. # # You should have received a copy of the GNU General Public License along with # GNU Make; see the file COPYING. If not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. TEXI2HTML = texi2html TEXI2HTML_FLAGS = -split_chapter info_TEXINFOS = make.texi make_TEXINFOS = fdl.texi make-stds.texi CLEANFILES = make*.html ## ----------------------------- ## ## Other documentation formats. ## ## ----------------------------- ## html: make_1.html make_1.html: $(info_TEXINFOS) $(make_TEXINFOS) $(TEXI2HTML) $(TEXI2HTML_FLAGS) $(srcdir)/make.texi .PHONY: html make-doc-non-dfsg-3.81.orig/doc/Makefile.in0000644000175000017500000004041010416557457020676 0ustar srivastasrivasta# Makefile.in generated by automake 1.9.6 from Makefile.am. # @configure_input@ # Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, # 2003, 2004, 2005 Free Software Foundation, Inc. 
# This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ # -*-Makefile-*-, or close enough # Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005, 2006 Free Software # Foundation, Inc. # This file is part of GNU Make. # # GNU Make is free software; you can redistribute it and/or modify it under the # terms of the GNU General Public License as published by the Free Software # Foundation; either version 2, or (at your option) any later version. # # GNU Make is distributed in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR # A PARTICULAR PURPOSE. See the GNU General Public License for more details. # # You should have received a copy of the GNU General Public License along with # GNU Make; see the file COPYING. If not, write to the Free Software # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. srcdir = @srcdir@ top_srcdir = @top_srcdir@ VPATH = @srcdir@ pkgdatadir = $(datadir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ top_builddir = .. am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd INSTALL = @INSTALL@ install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ subdir = doc DIST_COMMON = $(make_TEXINFOS) $(srcdir)/Makefile.am \ $(srcdir)/Makefile.in $(srcdir)/stamp-vti \ $(srcdir)/version.texi ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/config/dospaths.m4 \ $(top_srcdir)/config/gettext.m4 $(top_srcdir)/config/iconv.m4 \ $(top_srcdir)/config/lib-ld.m4 \ $(top_srcdir)/config/lib-link.m4 \ $(top_srcdir)/config/lib-prefix.m4 $(top_srcdir)/config/nls.m4 \ $(top_srcdir)/config/po.m4 $(top_srcdir)/config/progtest.m4 \ $(top_srcdir)/acinclude.m4 $(top_srcdir)/configure.in am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) mkinstalldirs = $(SHELL) $(top_srcdir)/config/mkinstalldirs CONFIG_HEADER = $(top_builddir)/config.h CONFIG_CLEAN_FILES = SOURCES = DIST_SOURCES = INFO_DEPS = $(srcdir)/make.info TEXINFO_TEX = $(top_srcdir)/config/texinfo.tex am__TEXINFO_TEX_DIR = $(top_srcdir)/config DVIS = make.dvi PDFS = make.pdf PSS = make.ps HTMLS = make.html TEXINFOS = make.texi TEXI2DVI = texi2dvi TEXI2PDF = $(TEXI2DVI) --pdf --batch MAKEINFOHTML = $(MAKEINFO) --html AM_MAKEINFOHTMLFLAGS = $(AM_MAKEINFOFLAGS) DVIPS = dvips am__installdirs = "$(DESTDIR)$(infodir)" DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) ACLOCAL = @ACLOCAL@ ALLOCA = @ALLOCA@ AMDEP_FALSE = @AMDEP_FALSE@ AMDEP_TRUE = @AMDEP_TRUE@ AMTAR = @AMTAR@ AR = @AR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ 
GETLOADAVG_LIBS = @GETLOADAVG_LIBS@ GLOBINC = @GLOBINC@ GLOBLIB = @GLOBLIB@ GMSGFMT = @GMSGFMT@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ INTLLIBS = @INTLLIBS@ KMEM_GROUP = @KMEM_GROUP@ LDFLAGS = @LDFLAGS@ LIBICONV = @LIBICONV@ LIBINTL = @LIBINTL@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LTLIBICONV = @LTLIBICONV@ LTLIBINTL = @LTLIBINTL@ LTLIBOBJS = @LTLIBOBJS@ MAKEINFO = @MAKEINFO@ MAKE_HOST = @MAKE_HOST@ MKINSTALLDIRS = @MKINSTALLDIRS@ MSGFMT = @MSGFMT@ MSGMERGE = @MSGMERGE@ NEED_SETGID = @NEED_SETGID@ OBJEXT = @OBJEXT@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PERL = @PERL@ POSUB = @POSUB@ RANLIB = @RANLIB@ REMOTE = @REMOTE@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ U = @U@ USE_CUSTOMS_FALSE = @USE_CUSTOMS_FALSE@ USE_CUSTOMS_TRUE = @USE_CUSTOMS_TRUE@ USE_LOCAL_GLOB_FALSE = @USE_LOCAL_GLOB_FALSE@ USE_LOCAL_GLOB_TRUE = @USE_LOCAL_GLOB_TRUE@ USE_NLS = @USE_NLS@ VERSION = @VERSION@ WINDOWSENV_FALSE = @WINDOWSENV_FALSE@ WINDOWSENV_TRUE = @WINDOWSENV_TRUE@ XGETTEXT = @XGETTEXT@ ac_ct_CC = @ac_ct_CC@ ac_ct_RANLIB = @ac_ct_RANLIB@ ac_ct_STRIP = @ac_ct_STRIP@ am__fastdepCC_FALSE = @am__fastdepCC_FALSE@ am__fastdepCC_TRUE = @am__fastdepCC_TRUE@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ datadir = @datadir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = @oldincludedir@ prefix = @prefix@ program_transform_name = @program_transform_name@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ TEXI2HTML = texi2html TEXI2HTML_FLAGS = -split_chapter info_TEXINFOS = make.texi make_TEXINFOS = fdl.texi make-stds.texi CLEANFILES = make*.html all: all-am .SUFFIXES: .SUFFIXES: .dvi .html .info .pdf .ps .texi $(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \ && exit 0; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/Makefile'; \ cd $(top_srcdir) && \ $(AUTOMAKE) --gnu doc/Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' 
in \ *config.status*) \ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(top_srcdir)/configure: $(am__configure_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh $(ACLOCAL_M4): $(am__aclocal_m4_deps) cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh .texi.info: restore=: && backupdir="$(am__leading_dot)am$$$$" && \ am__cwd=`pwd` && cd $(srcdir) && \ rm -rf $$backupdir && mkdir $$backupdir && \ if ($(MAKEINFO) --version) >/dev/null 2>&1; then \ for f in $@ $@-[0-9] $@-[0-9][0-9] $(@:.info=).i[0-9] $(@:.info=).i[0-9][0-9]; do \ if test -f $$f; then mv $$f $$backupdir; restore=mv; else :; fi; \ done; \ else :; fi && \ cd "$$am__cwd"; \ if $(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \ -o $@ $<; \ then \ rc=0; \ cd $(srcdir); \ else \ rc=$$?; \ cd $(srcdir) && \ $$restore $$backupdir/* `echo "./$@" | sed 's|[^/]*$$||'`; \ fi; \ rm -rf $$backupdir; exit $$rc .texi.dvi: TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \ $(TEXI2DVI) $< .texi.pdf: TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \ $(TEXI2PDF) $< .texi.html: rm -rf $(@:.html=.htp) if $(MAKEINFOHTML) $(AM_MAKEINFOHTMLFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \ -o $(@:.html=.htp) $<; \ then \ rm -rf $@; \ if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \ mv $(@:.html=) $@; else mv $(@:.html=.htp) $@; fi; \ else \ if test ! 
-d $(@:.html=.htp) && test -d $(@:.html=); then \ rm -rf $(@:.html=); else rm -Rf $(@:.html=.htp) $@; fi; \ exit 1; \ fi $(srcdir)/make.info: make.texi $(srcdir)/version.texi $(make_TEXINFOS) make.dvi: make.texi $(srcdir)/version.texi $(make_TEXINFOS) make.pdf: make.texi $(srcdir)/version.texi $(make_TEXINFOS) make.html: make.texi $(srcdir)/version.texi $(make_TEXINFOS) $(srcdir)/version.texi: $(srcdir)/stamp-vti $(srcdir)/stamp-vti: make.texi $(top_srcdir)/configure @(dir=.; test -f ./make.texi || dir=$(srcdir); \ set `$(SHELL) $(top_srcdir)/config/mdate-sh $$dir/make.texi`; \ echo "@set UPDATED $$1 $$2 $$3"; \ echo "@set UPDATED-MONTH $$2 $$3"; \ echo "@set EDITION $(VERSION)"; \ echo "@set VERSION $(VERSION)") > vti.tmp @cmp -s vti.tmp $(srcdir)/version.texi \ || (echo "Updating $(srcdir)/version.texi"; \ cp vti.tmp $(srcdir)/version.texi) -@rm -f vti.tmp @cp $(srcdir)/version.texi $@ mostlyclean-vti: -rm -f vti.tmp maintainer-clean-vti: -rm -f $(srcdir)/stamp-vti $(srcdir)/version.texi .dvi.ps: TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \ $(DVIPS) -o $@ $< uninstall-info-am: @$(PRE_UNINSTALL) @if (install-info --version && \ install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \ list='$(INFO_DEPS)'; \ for file in $$list; do \ relfile=`echo "$$file" | sed 's|^.*/||'`; \ echo " install-info --info-dir='$(DESTDIR)$(infodir)' --remove '$(DESTDIR)$(infodir)/$$relfile'"; \ install-info --info-dir="$(DESTDIR)$(infodir)" --remove "$(DESTDIR)$(infodir)/$$relfile"; \ done; \ else :; fi @$(NORMAL_UNINSTALL) @list='$(INFO_DEPS)'; \ for file in $$list; do \ relfile=`echo "$$file" | sed 's|^.*/||'`; \ relfile_i=`echo "$$relfile" | sed 's|\.info$$||;s|$$|.i|'`; \ (if cd "$(DESTDIR)$(infodir)"; then \ echo " cd '$(DESTDIR)$(infodir)' && rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]"; \ rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]; \ else :; fi); \ done dist-info: $(INFO_DEPS) @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \ list='$(INFO_DEPS)'; \ for base in $$list; do \ case $$base in \ $(srcdir)/*) base=`echo "$$base" | sed "s|^$$srcdirstrip/||"`;; \ esac; \ if test -f $$base; then d=.; else d=$(srcdir); fi; \ for file in $$d/$$base*; do \ relfile=`expr "$$file" : "$$d/\(.*\)"`; \ test -f $(distdir)/$$relfile || \ cp -p $$file $(distdir)/$$relfile; \ done; \ done mostlyclean-aminfo: -rm -rf make.aux make.cp make.cps make.fn make.fns make.ky make.kys make.log \ make.pg make.tmp make.toc make.tp make.tps make.vr make.dvi \ make.pdf make.ps make.html maintainer-clean-aminfo: @list='$(INFO_DEPS)'; for i in $$list; do \ i_i=`echo "$$i" | sed 's|\.info$$||;s|$$|.i|'`; \ echo " rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]"; \ rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]; \ done tags: TAGS TAGS: ctags: CTAGS CTAGS: distdir: $(DISTFILES) @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \ list='$(DISTFILES)'; for file in $$list; do \ case $$file in \ $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \ $(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \ esac; \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \ if test "$$dir" != "$$file" && test "$$dir" != "."; then \ dir="/$$dir"; \ $(mkdir_p) "$(distdir)$$dir"; \ else \ dir=''; \ fi; \ if test -d $$d/$$file; then 
\ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \ fi; \ cp -pR $$d/$$file $(distdir)$$dir || exit 1; \ else \ test -f $(distdir)/$$file \ || cp -p $$d/$$file $(distdir)/$$file \ || exit 1; \ fi; \ done $(MAKE) $(AM_MAKEFLAGS) \ top_distdir="$(top_distdir)" distdir="$(distdir)" \ dist-info check-am: all-am check: check-am all-am: Makefile $(INFO_DEPS) installdirs: for dir in "$(DESTDIR)$(infodir)"; do \ test -z "$$dir" || $(mkdir_p) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ `test -z '$(STRIP)' || \ echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install mostlyclean-generic: clean-generic: -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES) distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." clean: clean-am clean-am: clean-generic mostlyclean-am distclean: distclean-am -rm -f Makefile distclean-am: clean-am distclean-generic dvi: dvi-am dvi-am: $(DVIS) html-am: $(HTMLS) info: info-am info-am: $(INFO_DEPS) install-data-am: install-info-am install-exec-am: install-info: install-info-am install-info-am: $(INFO_DEPS) @$(NORMAL_INSTALL) test -z "$(infodir)" || $(mkdir_p) "$(DESTDIR)$(infodir)" @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \ list='$(INFO_DEPS)'; \ for file in $$list; do \ case $$file in \ $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \ esac; \ if test -f $$file; then d=.; else d=$(srcdir); fi; \ file_i=`echo "$$file" | sed 's|\.info$$||;s|$$|.i|'`; \ for ifile in $$d/$$file $$d/$$file-[0-9] $$d/$$file-[0-9][0-9] \ $$d/$$file_i[0-9] $$d/$$file_i[0-9][0-9] ; do \ if test -f $$ifile; then \ relfile=`echo "$$ifile" | sed 's|^.*/||'`; \ echo " $(INSTALL_DATA) '$$ifile' '$(DESTDIR)$(infodir)/$$relfile'"; \ $(INSTALL_DATA) "$$ifile" "$(DESTDIR)$(infodir)/$$relfile"; \ else : ; fi; \ done; \ done @$(POST_INSTALL) @if (install-info --version && \ install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \ list='$(INFO_DEPS)'; \ for file in $$list; do \ relfile=`echo "$$file" | sed 's|^.*/||'`; \ echo " install-info --info-dir='$(DESTDIR)$(infodir)' '$(DESTDIR)$(infodir)/$$relfile'";\ install-info --info-dir="$(DESTDIR)$(infodir)" "$(DESTDIR)$(infodir)/$$relfile" || :;\ done; \ else : ; fi install-man: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-aminfo \ maintainer-clean-generic maintainer-clean-vti mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-aminfo mostlyclean-generic mostlyclean-vti pdf: pdf-am pdf-am: $(PDFS) ps: ps-am ps-am: $(PSS) uninstall-am: uninstall-info-am .PHONY: all all-am check check-am clean clean-generic dist-info \ distclean distclean-generic distdir dvi dvi-am html html-am \ info info-am install install-am install-data install-data-am \ install-exec install-exec-am install-info install-info-am \ install-man install-strip installcheck installcheck-am \ installdirs maintainer-clean maintainer-clean-aminfo \ maintainer-clean-generic maintainer-clean-vti mostlyclean \ 
mostlyclean-aminfo mostlyclean-generic mostlyclean-vti pdf \ pdf-am ps ps-am uninstall uninstall-am uninstall-info-am html: make_1.html make_1.html: $(info_TEXINFOS) $(make_TEXINFOS) $(TEXI2HTML) $(TEXI2HTML_FLAGS) $(srcdir)/make.texi .PHONY: html # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: make-doc-non-dfsg-3.81.orig/doc/fdl.texi0000644000175000017500000005101010416557457020267 0ustar srivastasrivasta @node GNU Free Documentation License @appendixsec GNU Free Documentation License @cindex FDL, GNU Free Documentation License @center Version 1.2, November 2002 @display Copyright @copyright{} 2000,2001,2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. @end display @enumerate 0 @item PREAMBLE The purpose of this License is to make a manual, textbook, or other functional and useful document @dfn{free} in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others. This License is a kind of ``copyleft'', which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software. We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference. @item APPLICABILITY AND DEFINITIONS This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The ``Document'', below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as ``you''. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law. A ``Modified Version'' of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language. A ``Secondary Section'' is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them. 
The ``Invariant Sections'' are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none. The ``Cover Texts'' are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words. A ``Transparent'' copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not ``Transparent'' is called ``Opaque''. Examples of suitable formats for Transparent copies include plain @sc{ascii} without markup, Texinfo input format, La@TeX{} input format, @acronym{SGML} or @acronym{XML} using a publicly available @acronym{DTD}, and standard-conforming simple @acronym{HTML}, PostScript or @acronym{PDF} designed for human modification. Examples of transparent image formats include @acronym{PNG}, @acronym{XCF} and @acronym{JPG}. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, @acronym{SGML} or @acronym{XML} for which the @acronym{DTD} and/or processing tools are not generally available, and the machine-generated @acronym{HTML}, PostScript or @acronym{PDF} produced by some word processors for output purposes only. The ``Title Page'' means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, ``Title Page'' means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text. A section ``Entitled XYZ'' means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as ``Acknowledgements'', ``Dedications'', ``Endorsements'', or ``History''.) To ``Preserve the Title'' of such a section when you modify the Document means that it remains a section ``Entitled XYZ'' according to this definition. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. 
@item VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. You may also lend copies, under the same conditions stated above, and you may publicly display copies. @item COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects. If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages. If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document. @item MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version: @enumerate A @item Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission. 
@item List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement. @item State on the Title page the name of the publisher of the Modified Version, as the publisher. @item Preserve all the copyright notices of the Document. @item Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. @item Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below. @item Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. @item Include an unaltered copy of this License. @item Preserve the section Entitled ``History'', Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled ``History'' in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence. @item Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the ``History'' section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission. @item For any section Entitled ``Acknowledgements'' or ``Dedications'', Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. @item Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles. @item Delete any section Entitled ``Endorsements''. Such a section may not be included in the Modified Version. @item Do not retitle any existing section to be Entitled ``Endorsements'' or to conflict in title with any Invariant Section. @item Preserve any Warranty Disclaimers. @end enumerate If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles. You may add a section Entitled ``Endorsements'', provided it contains nothing but endorsements of your Modified Version by various parties---for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. 
Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. @item COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers. The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. In the combination, you must combine any sections Entitled ``History'' in the various original documents, forming one section Entitled ``History''; likewise combine any sections Entitled ``Acknowledgements'', and any sections Entitled ``Dedications''. You must delete all sections Entitled ``Endorsements.'' @item COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. @item AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an ``aggregate'' if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate. 
@item TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled ``Acknowledgements'', ``Dedications'', or ``History'', the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. @item TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. @item FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See @uref{http://www.gnu.org/copyleft/}. Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License ``or any later version'' applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. @end enumerate @page @appendixsubsec ADDENDUM: How to use this License for your documents To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page: @smallexample @group Copyright (C) @var{year} @var{your name}. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled ``GNU Free Documentation License''. @end group @end smallexample If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the ``with...Texts.'' line with this: @smallexample @group with the Invariant Sections being @var{list their titles}, with the Front-Cover Texts being @var{list}, and with the Back-Cover Texts being @var{list}. @end group @end smallexample If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation. 
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software. @c Local Variables: @c ispell-local-pdict: "ispell-dict" @c End: make-doc-non-dfsg-3.81.orig/doc/make-stds.texi0000644000175000017500000012404210416557457021420 0ustar srivastasrivasta@comment This file is included by both standards.texi and make.texinfo. @comment It was broken out of standards.texi on 1/6/93 by roland. @node Makefile Conventions @chapter Makefile Conventions @comment standards.texi does not print an index, but make.texinfo does. @cindex makefile, conventions for @cindex conventions for makefiles @cindex standards for makefiles @c Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 2000, 2001, @c 2004, 2005 Free Software Foundation, Inc. @c Permission is granted to copy, distribute and/or modify this document @c under the terms of the GNU Free Documentation License, Version 1.1 @c or any later version published by the Free Software Foundation; @c with no Invariant Sections, with no @c Front-Cover Texts, and with no Back-Cover Texts. @c A copy of the license is included in the section entitled ``GNU @c Free Documentation License''. This @ifinfo node @end ifinfo @iftex @ifset CODESTD section @end ifset @ifclear CODESTD chapter @end ifclear @end iftex describes conventions for writing the Makefiles for GNU programs. Using Automake will help you write a Makefile that follows these conventions. @menu * Makefile Basics:: General Conventions for Makefiles * Utilities in Makefiles:: Utilities in Makefiles * Command Variables:: Variables for Specifying Commands * Directory Variables:: Variables for Installation Directories * Standard Targets:: Standard Targets for Users * Install Command Categories:: Three categories of commands in the `install' rule: normal, pre-install and post-install. @end menu @node Makefile Basics @section General Conventions for Makefiles Every Makefile should contain this line: @example SHELL = /bin/sh @end example @noindent to avoid trouble on systems where the @code{SHELL} variable might be inherited from the environment. (This is never a problem with GNU @code{make}.) Different @code{make} programs have incompatible suffix lists and implicit rules, and this sometimes creates confusion or misbehavior. So it is a good idea to set the suffix list explicitly using only the suffixes you need in the particular Makefile, like this: @example .SUFFIXES: .SUFFIXES: .c .o @end example @noindent The first line clears out the suffix list, the second introduces all suffixes which may be subject to implicit rules in this Makefile. Don't assume that @file{.} is in the path for command execution. When you need to run programs that are a part of your package during the make, please make sure that it uses @file{./} if the program is built as part of the make or @file{$(srcdir)/} if the file is an unchanging part of the source code. Without one of these prefixes, the current search path is used. The distinction between @file{./} (the @dfn{build directory}) and @file{$(srcdir)/} (the @dfn{source directory}) is important because users can build in a separate directory using the @samp{--srcdir} option to @file{configure}. 
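For example (a sketch with invented file names: @file{gen-tables} is assumed to be built by this makefile, while @file{massage.sed} is assumed to ship with the sources), a rule might use both prefixes:

@smallexample
tables.c : gen-tables massage.sed
        ./gen-tables | sed -f $(srcdir)/massage.sed > tables.c
@end smallexample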
A rule of the form: @smallexample foo.1 : foo.man sedscript sed -e sedscript foo.man > foo.1 @end smallexample @noindent will fail when the build directory is not the source directory, because @file{foo.man} and @file{sedscript} are in the source directory. When using GNU @code{make}, relying on @samp{VPATH} to find the source file will work in the case where there is a single dependency file, since the @code{make} automatic variable @samp{$<} will represent the source file wherever it is. (Many versions of @code{make} set @samp{$<} only in implicit rules.) A Makefile target like @smallexample foo.o : bar.c $(CC) -I. -I$(srcdir) $(CFLAGS) -c bar.c -o foo.o @end smallexample @noindent should instead be written as @smallexample foo.o : bar.c $(CC) -I. -I$(srcdir) $(CFLAGS) -c $< -o $@@ @end smallexample @noindent in order to allow @samp{VPATH} to work correctly. When the target has multiple dependencies, using an explicit @samp{$(srcdir)} is the easiest way to make the rule work well. For example, the target above for @file{foo.1} is best written as: @smallexample foo.1 : foo.man sedscript sed -e $(srcdir)/sedscript $(srcdir)/foo.man > $@@ @end smallexample GNU distributions usually contain some files which are not source files---for example, Info files, and the output from Autoconf, Automake, Bison or Flex. Since these files normally appear in the source directory, they should always appear in the source directory, not in the build directory. So Makefile rules to update them should put the updated files in the source directory. However, if a file does not appear in the distribution, then the Makefile should not put it in the source directory, because building a program in ordinary circumstances should not modify the source directory in any way. Try to make the build and installation targets, at least (and all their subtargets) work correctly with a parallel @code{make}. @node Utilities in Makefiles @section Utilities in Makefiles Write the Makefile commands (and any shell scripts, such as @code{configure}) to run in @code{sh}, not in @code{csh}. Don't use any special features of @code{ksh} or @code{bash}. The @code{configure} script and the Makefile rules for building and installation should not use any utilities directly except these: @c dd find @c gunzip gzip md5sum @c mkfifo mknod tee uname @example cat cmp cp diff echo egrep expr false grep install-info ln ls mkdir mv pwd rm rmdir sed sleep sort tar test touch true @end example The compression program @code{gzip} can be used in the @code{dist} rule. Stick to the generally supported options for these programs. For example, don't use @samp{mkdir -p}, convenient as it may be, because most systems don't support it. It is a good idea to avoid creating symbolic links in makefiles, since a few systems don't support them. The Makefile rules for building and installation can also use compilers and related programs, but should do so via @code{make} variables so that the user can substitute alternatives. Here are some of the programs we mean: @example ar bison cc flex install ld ldconfig lex make makeinfo ranlib texi2dvi yacc @end example Use the following @code{make} variables to run those programs: @example $(AR) $(BISON) $(CC) $(FLEX) $(INSTALL) $(LD) $(LDCONFIG) $(LEX) $(MAKE) $(MAKEINFO) $(RANLIB) $(TEXI2DVI) $(YACC) @end example When you use @code{ranlib} or @code{ldconfig}, you should make sure nothing bad happens if the system does not have the program in question. 
Arrange to ignore an error from that command, and print a message before the command to tell the user that failure of this command does not mean a problem. (The Autoconf @samp{AC_PROG_RANLIB} macro can help with this.) If you use symbolic links, you should implement a fallback for systems that don't have symbolic links. Additional utilities that can be used via Make variables are: @example chgrp chmod chown mknod @end example It is ok to use other utilities in Makefile portions (or scripts) intended only for particular systems where you know those utilities exist. @node Command Variables @section Variables for Specifying Commands Makefiles should provide variables for overriding certain commands, options, and so on. In particular, you should run most utility programs via variables. Thus, if you use Bison, have a variable named @code{BISON} whose default value is set with @samp{BISON = bison}, and refer to it with @code{$(BISON)} whenever you need to use Bison. File management utilities such as @code{ln}, @code{rm}, @code{mv}, and so on, need not be referred to through variables in this way, since users don't need to replace them with other programs. Each program-name variable should come with an options variable that is used to supply options to the program. Append @samp{FLAGS} to the program-name variable name to get the options variable name---for example, @code{BISONFLAGS}. (The names @code{CFLAGS} for the C compiler, @code{YFLAGS} for yacc, and @code{LFLAGS} for lex, are exceptions to this rule, but we keep them because they are standard.) Use @code{CPPFLAGS} in any compilation command that runs the preprocessor, and use @code{LDFLAGS} in any compilation command that does linking as well as in any direct use of @code{ld}. If there are C compiler options that @emph{must} be used for proper compilation of certain files, do not include them in @code{CFLAGS}. Users expect to be able to specify @code{CFLAGS} freely themselves. Instead, arrange to pass the necessary options to the C compiler independently of @code{CFLAGS}, by writing them explicitly in the compilation commands or by defining an implicit rule, like this: @smallexample CFLAGS = -g ALL_CFLAGS = -I. $(CFLAGS) .c.o: $(CC) -c $(CPPFLAGS) $(ALL_CFLAGS) $< @end smallexample Do include the @samp{-g} option in @code{CFLAGS}, because that is not @emph{required} for proper compilation. You can consider it a default that is only recommended. If the package is set up so that it is compiled with GCC by default, then you might as well include @samp{-O} in the default value of @code{CFLAGS} as well. Put @code{CFLAGS} last in the compilation command, after other variables containing compiler options, so the user can use @code{CFLAGS} to override the others. @code{CFLAGS} should be used in every invocation of the C compiler, both those which do compilation and those which do linking. Every Makefile should define the variable @code{INSTALL}, which is the basic command for installing a file into the system. Every Makefile should also define the variables @code{INSTALL_PROGRAM} and @code{INSTALL_DATA}. (The default for @code{INSTALL_PROGRAM} should be @code{$(INSTALL)}; the default for @code{INSTALL_DATA} should be @code{$@{INSTALL@} -m 644}.) Then it should use those variables as the commands for actual installation, for executables and nonexecutables respectively. 
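As a concrete sketch (the exact @code{install} path shown is an assumption; @code{configure} normally substitutes the right one), the definitions might read:

@smallexample
INSTALL = /usr/bin/install -c
INSTALL_PROGRAM = $(INSTALL)
INSTALL_DATA = $(INSTALL) -m 644
@end smallexample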
Use these variables as follows: @example $(INSTALL_PROGRAM) foo $(bindir)/foo $(INSTALL_DATA) libfoo.a $(libdir)/libfoo.a @end example Optionally, you may prepend the value of @code{DESTDIR} to the target filename. Doing this allows the installer to create a snapshot of the installation to be copied onto the real target filesystem later. Do not set the value of @code{DESTDIR} in your Makefile, and do not include it in any installed files. With support for @code{DESTDIR}, the above examples become: @example $(INSTALL_PROGRAM) foo $(DESTDIR)$(bindir)/foo $(INSTALL_DATA) libfoo.a $(DESTDIR)$(libdir)/libfoo.a @end example @noindent Always use a file name, not a directory name, as the second argument of the installation commands. Use a separate command for each file to be installed. @node Directory Variables @section Variables for Installation Directories Installation directories should always be named by variables, so it is easy to install in a nonstandard place. The standard names for these variables and the values they should have in GNU packages are described below. They are based on a standard filesystem layout; variants of it are used in GNU/Linux and other modern operating systems. Installers are expected to override these values when calling @command{make} (e.g., @kbd{make prefix=/usr install} or @command{configure} (e.g., @kbd{configure --prefix=/usr}). GNU packages should not try to guess which value should be appropriate for these variables on the system they are being installed onto: use the default settings specified here so that all GNU packages behave identically, allowing the installer to achieve any desired layout. These two variables set the root for the installation. All the other installation directories should be subdirectories of one of these two, and nothing should be directly installed into these two directories. @table @code @item prefix @vindex prefix A prefix used in constructing the default values of the variables listed below. The default value of @code{prefix} should be @file{/usr/local}. When building the complete GNU system, the prefix will be empty and @file{/usr} will be a symbolic link to @file{/}. (If you are using Autoconf, write it as @samp{@@prefix@@}.) Running @samp{make install} with a different value of @code{prefix} from the one used to build the program should @emph{not} recompile the program. @item exec_prefix @vindex exec_prefix A prefix used in constructing the default values of some of the variables listed below. The default value of @code{exec_prefix} should be @code{$(prefix)}. (If you are using Autoconf, write it as @samp{@@exec_prefix@@}.) Generally, @code{$(exec_prefix)} is used for directories that contain machine-specific files (such as executables and subroutine libraries), while @code{$(prefix)} is used directly for other directories. Running @samp{make install} with a different value of @code{exec_prefix} from the one used to build the program should @emph{not} recompile the program. @end table Executable programs are installed in one of the following directories. @table @code @item bindir @vindex bindir The directory for installing executable programs that users can run. This should normally be @file{/usr/local/bin}, but write it as @file{$(exec_prefix)/bin}. (If you are using Autoconf, write it as @samp{@@bindir@@}.) @item sbindir @vindex sbindir The directory for installing executable programs that can be run from the shell, but are only generally useful to system administrators. 
This should normally be @file{/usr/local/sbin}, but write it as @file{$(exec_prefix)/sbin}. (If you are using Autoconf, write it as @samp{@@sbindir@@}.) @item libexecdir @vindex libexecdir @comment This paragraph adjusted to avoid overfull hbox --roland 5jul94 The directory for installing executable programs to be run by other programs rather than by users. This directory should normally be @file{/usr/local/libexec}, but write it as @file{$(exec_prefix)/libexec}. (If you are using Autoconf, write it as @samp{@@libexecdir@@}.) The definition of @samp{libexecdir} is the same for all packages, so you should install your data in a subdirectory thereof. Most packages install their data under @file{$(libexecdir)/@var{package-name}/}, possibly within additional subdirectories thereof, such as @file{$(libexecdir)/@var{package-name}/@var{machine}/@var{version}}. @end table Data files used by the program during its execution are divided into categories in two ways. @itemize @bullet @item Some files are normally modified by programs; others are never normally modified (though users may edit some of these). @item Some files are architecture-independent and can be shared by all machines at a site; some are architecture-dependent and can be shared only by machines of the same kind and operating system; others may never be shared between two machines. @end itemize This makes for six different possibilities. However, we want to discourage the use of architecture-dependent files, aside from object files and libraries. It is much cleaner to make other data files architecture-independent, and it is generally not hard. Here are the variables Makefiles should use to specify directories to put these various kinds of files in: @table @samp @item datarootdir The root of the directory tree for read-only architecture-independent data files. This should normally be @file{/usr/local/share}, but write it as @file{$(prefix)/share}. (If you are using Autoconf, write it as @samp{@@datarootdir@@}.) @samp{datadir}'s default value is based on this variable; so are @samp{infodir}, @samp{mandir}, and others. @item datadir The directory for installing idiosyncratic read-only architecture-independent data files for this program. This is usually the same place as @samp{datarootdir}, but we use the two separate variables so that you can move these program-specific files without altering the location for Info files, man pages, etc. This should normally be @file{/usr/local/share}, but write it as @file{$(datarootdir)}. (If you are using Autoconf, write it as @samp{@@datadir@@}.) The definition of @samp{datadir} is the same for all packages, so you should install your data in a subdirectory thereof. Most packages install their data under @file{$(datadir)/@var{package-name}/}. @item sysconfdir The directory for installing read-only data files that pertain to a single machine--that is to say, files for configuring a host. Mailer and network configuration files, @file{/etc/passwd}, and so forth belong here. All the files in this directory should be ordinary ASCII text files. This directory should normally be @file{/usr/local/etc}, but write it as @file{$(prefix)/etc}. (If you are using Autoconf, write it as @samp{@@sysconfdir@@}.) Do not install executables here in this directory (they probably belong in @file{$(libexecdir)} or @file{$(sbindir)}). Also do not install files that are modified in the normal course of their use (programs whose purpose is to change the configuration of the system excluded). 
Those probably belong in @file{$(localstatedir)}. @item sharedstatedir The directory for installing architecture-independent data files which the programs modify while they run. This should normally be @file{/usr/local/com}, but write it as @file{$(prefix)/com}. (If you are using Autoconf, write it as @samp{@@sharedstatedir@@}.) @item localstatedir The directory for installing data files which the programs modify while they run, and that pertain to one specific machine. Users should never need to modify files in this directory to configure the package's operation; put such configuration information in separate files that go in @file{$(datadir)} or @file{$(sysconfdir)}. @file{$(localstatedir)} should normally be @file{/usr/local/var}, but write it as @file{$(prefix)/var}. (If you are using Autoconf, write it as @samp{@@localstatedir@@}.) @end table These variables specify the directory for installing certain specific types of files, if your program has them. Every GNU package should have Info files, so every program needs @samp{infodir}, but not all need @samp{libdir} or @samp{lispdir}. @table @samp @item includedir @c rewritten to avoid overfull hbox --roland The directory for installing header files to be included by user programs with the C @samp{#include} preprocessor directive. This should normally be @file{/usr/local/include}, but write it as @file{$(prefix)/include}. (If you are using Autoconf, write it as @samp{@@includedir@@}.) Most compilers other than GCC do not look for header files in directory @file{/usr/local/include}. So installing the header files this way is only useful with GCC. Sometimes this is not a problem because some libraries are only really intended to work with GCC. But some libraries are intended to work with other compilers. They should install their header files in two places, one specified by @code{includedir} and one specified by @code{oldincludedir}. @item oldincludedir The directory for installing @samp{#include} header files for use with compilers other than GCC. This should normally be @file{/usr/include}. (If you are using Autoconf, you can write it as @samp{@@oldincludedir@@}.) The Makefile commands should check whether the value of @code{oldincludedir} is empty. If it is, they should not try to use it; they should cancel the second installation of the header files. A package should not replace an existing header in this directory unless the header came from the same package. Thus, if your Foo package provides a header file @file{foo.h}, then it should install the header file in the @code{oldincludedir} directory if either (1) there is no @file{foo.h} there or (2) the @file{foo.h} that exists came from the Foo package. To tell whether @file{foo.h} came from the Foo package, put a magic string in the file---part of a comment---and @code{grep} for that string. @item docdir The directory for installing documentation files (other than Info) for this package. By default, it should be @file{/usr/local/share/doc/@var{yourpkg}}, but it should be written as @file{$(datarootdir)/doc/@var{yourpkg}}. (If you are using Autoconf, write it as @samp{@@docdir@@}.) The @var{yourpkg} subdirectory, which may include a version number, prevents collisions among files with common names, such as @file{README}. @item infodir The directory for installing the Info files for this package. By default, it should be @file{/usr/local/share/info}, but it should be written as @file{$(datarootdir)/info}. (If you are using Autoconf, write it as @samp{@@infodir@@}.) 
@code{infodir} is separate from @code{docdir} for compatibility with existing practice. @item htmldir @itemx dvidir @itemx pdfdir @itemx psdir Directories for installing documentation files in the particular format. (It is not required to support documentation in all these formats.) They should all be set to @code{$(docdir)} by default. (If you are using Autoconf, write them as @samp{@@htmldir@@}, @samp{@@dvidir@@}, etc.) Packages which supply several translations of their documentation should install them in @samp{$(htmldir)/}@var{ll}, @samp{$(pdfdir)/}@var{ll}, etc. where @var{ll} is a locale abbreviation such as @samp{en} or @samp{pt_BR}. @item libdir The directory for object files and libraries of object code. Do not install executables here, they probably ought to go in @file{$(libexecdir)} instead. The value of @code{libdir} should normally be @file{/usr/local/lib}, but write it as @file{$(exec_prefix)/lib}. (If you are using Autoconf, write it as @samp{@@libdir@@}.) @item lispdir The directory for installing any Emacs Lisp files in this package. By default, it should be @file{/usr/local/share/emacs/site-lisp}, but it should be written as @file{$(datarootdir)/emacs/site-lisp}. If you are using Autoconf, write the default as @samp{@@lispdir@@}. In order to make @samp{@@lispdir@@} work, you need the following lines in your @file{configure.in} file: @example lispdir='$@{datarootdir@}/emacs/site-lisp' AC_SUBST(lispdir) @end example @item localedir The directory for installing locale-specific message catalogs for this package. By default, it should be @file{/usr/local/share/locale}, but it should be written as @file{$(datarootdir)/locale}. (If you are using Autoconf, write it as @samp{@@localedir@@}.) This directory usually has a subdirectory per locale. @end table Unix-style man pages are installed in one of the following: @table @samp @item mandir The top-level directory for installing the man pages (if any) for this package. It will normally be @file{/usr/local/share/man}, but you should write it as @file{$(datarootdir)/man}. (If you are using Autoconf, write it as @samp{@@mandir@@}.) @item man1dir The directory for installing section 1 man pages. Write it as @file{$(mandir)/man1}. @item man2dir The directory for installing section 2 man pages. Write it as @file{$(mandir)/man2}. @item @dots{} @strong{Don't make the primary documentation for any GNU software be a man page. Write a manual in Texinfo instead. Man pages are just for the sake of people running GNU software on Unix, which is a secondary application only.} @item manext The file name extension for the installed man page. This should contain a period followed by the appropriate digit; it should normally be @samp{.1}. @item man1ext The file name extension for installed section 1 man pages. @item man2ext The file name extension for installed section 2 man pages. @item @dots{} Use these names instead of @samp{manext} if the package needs to install man pages in more than one section of the manual. @end table And finally, you should set the following variable: @table @samp @item srcdir The directory for the sources being compiled. The value of this variable is normally inserted by the @code{configure} shell script. (If you are using Autoconf, use @samp{srcdir = @@srcdir@@}.) @end table For example: @smallexample @c I have changed some of the comments here slightly to fix an overfull @c hbox, so the make manual can format correctly. --roland # Common prefix for installation directories.
# NOTE: This directory must exist when you start the install. prefix = /usr/local datarootdir = $(prefix)/share datadir = $(datarootdir) exec_prefix = $(prefix) # Where to put the executable for the command `gcc'. bindir = $(exec_prefix)/bin # Where to put the directories used by the compiler. libexecdir = $(exec_prefix)/libexec # Where to put the Info files. infodir = $(datarootdir)/info @end smallexample If your program installs a large number of files into one of the standard user-specified directories, it might be useful to group them into a subdirectory particular to that program. If you do this, you should write the @code{install} rule to create these subdirectories. Do not expect the user to include the subdirectory name in the value of any of the variables listed above. The idea of having a uniform set of variable names for installation directories is to enable the user to specify the exact same values for several different GNU packages. In order for this to be useful, all the packages must be designed so that they will work sensibly when the user does so. @node Standard Targets @section Standard Targets for Users All GNU programs should have the following targets in their Makefiles: @table @samp @item all Compile the entire program. This should be the default target. This target need not rebuild any documentation files; Info files should normally be included in the distribution, and DVI files should be made only when explicitly asked for. By default, the Make rules should compile and link with @samp{-g}, so that executable programs have debugging symbols. Users who don't mind being helpless can strip the executables later if they wish. @item install Compile the program and copy the executables, libraries, and so on to the file names where they should reside for actual use. If there is a simple test to verify that a program is properly installed, this target should run that test. Do not strip executables when installing them. Devil-may-care users can use the @code{install-strip} target to do that. If possible, write the @code{install} target rule so that it does not modify anything in the directory where the program was built, provided @samp{make all} has just been done. This is convenient for building the program under one user name and installing it under another. The commands should create all the directories in which files are to be installed, if they don't already exist. This includes the directories specified as the values of the variables @code{prefix} and @code{exec_prefix}, as well as all subdirectories that are needed. One way to do this is by means of an @code{installdirs} target as described below. Use @samp{-} before any command for installing a man page, so that @code{make} will ignore any errors. This is in case there are systems that don't have the Unix man page documentation system installed. The way to install Info files is to copy them into @file{$(infodir)} with @code{$(INSTALL_DATA)} (@pxref{Command Variables}), and then run the @code{install-info} program if it is present. @code{install-info} is a program that edits the Info @file{dir} file to add or update the menu entry for the given Info file; it is part of the Texinfo package. Here is a sample rule to install an Info file: @comment This example has been carefully formatted for the Make manual. @comment Please do not reformat it without talking to roland@gnu.ai.mit.edu. @smallexample $(DESTDIR)$(infodir)/foo.info: foo.info $(POST_INSTALL) # There may be a newer info file in . than in srcdir. 
-if test -f foo.info; then d=.; \ else d=$(srcdir); fi; \ $(INSTALL_DATA) $$d/foo.info $(DESTDIR)$@@; \ # Run install-info only if it exists. # Use `if' instead of just prepending `-' to the # line so we notice real errors from install-info. # We use `$(SHELL) -c' because some shells do not # fail gracefully when there is an unknown command. if $(SHELL) -c 'install-info --version' \ >/dev/null 2>&1; then \ install-info --dir-file=$(DESTDIR)$(infodir)/dir \ $(DESTDIR)$(infodir)/foo.info; \ else true; fi @end smallexample When writing the @code{install} target, you must classify all the commands into three categories: normal ones, @dfn{pre-installation} commands and @dfn{post-installation} commands. @xref{Install Command Categories}. @item install-html @itemx install-dvi @itemx install-pdf @itemx install-ps These targets install documentation in formats other than Info; they're intended to be called explicitly by the person installing the package, if that format is desired. GNU prefers Info files, so these must be installed by the @code{install} target. When you have many documentation files to install, we recommend that you avoid collisions and clutter by arranging for these targets to install in subdirectories of the appropriate installation directory, such as @code{htmldir}. As one example, if your package has multiple manuals, and you wish to install HTML documentation with many files (such as the ``split'' mode output by @code{makeinfo --html}), you'll certainly want to use subdirectories, or two nodes with the same name in different manuals will overwrite each other. @item uninstall Delete all the installed files---the copies that the @samp{install} and @samp{install-*} targets create. This rule should not modify the directories where compilation is done, only the directories where files are installed. The uninstallation commands are divided into three categories, just like the installation commands. @xref{Install Command Categories}. @item install-strip Like @code{install}, but strip the executable files while installing them. In simple cases, this target can use the @code{install} target in a simple way: @smallexample install-strip: $(MAKE) INSTALL_PROGRAM='$(INSTALL_PROGRAM) -s' \ install @end smallexample But if the package installs scripts as well as real executables, the @code{install-strip} target can't just refer to the @code{install} target; it has to strip the executables but not the scripts. @code{install-strip} should not strip the executables in the build directory which are being copied for installation. It should only strip the copies that are installed. Normally we do not recommend stripping an executable unless you are sure the program has no bugs. However, it can be reasonable to install a stripped executable for actual execution while saving the unstripped executable elsewhere in case there is a bug. @comment The gratuitous blank line here is to make the table look better @comment in the printed Make manual. Please leave it in. @item clean Delete all files in the current directory that are normally created by building the program. Also delete files in other directories if they are created by this makefile. However, don't delete the files that record the configuration. Also preserve files that could be made by building, but normally aren't because the distribution comes with them. There is no need to delete parent directories that were created with @samp{mkdir -p}, since they could have existed anyway. 
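As a rough sketch (the file names here are purely illustrative), a @code{clean} rule obeying these rules might be:

@smallexample
clean:
        -rm -f foo *.o core
@end smallexample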
Delete @file{.dvi} files here if they are not part of the distribution. @item distclean Delete all files in the current directory (or created by this makefile) that are created by configuring or building the program. If you have unpacked the source and built the program without creating any other files, @samp{make distclean} should leave only the files that were in the distribution. However, there is no need to delete parent directories that were created with @samp{mkdir -p}, since they could have existed anyway. @item mostlyclean Like @samp{clean}, but may refrain from deleting a few files that people normally don't want to recompile. For example, the @samp{mostlyclean} target for GCC does not delete @file{libgcc.a}, because recompiling it is rarely necessary and takes a lot of time. @item maintainer-clean Delete almost everything that can be reconstructed with this Makefile. This typically includes everything deleted by @code{distclean}, plus more: C source files produced by Bison, tags tables, Info files, and so on. The reason we say ``almost everything'' is that running the command @samp{make maintainer-clean} should not delete @file{configure} even if @file{configure} can be remade using a rule in the Makefile. More generally, @samp{make maintainer-clean} should not delete anything that needs to exist in order to run @file{configure} and then begin to build the program. Also, there is no need to delete parent directories that were created with @samp{mkdir -p}, since they could have existed anyway. These are the only exceptions; @code{maintainer-clean} should delete everything else that can be rebuilt. The @samp{maintainer-clean} target is intended to be used by a maintainer of the package, not by ordinary users. You may need special tools to reconstruct some of the files that @samp{make maintainer-clean} deletes. Since these files are normally included in the distribution, we don't take care to make them easy to reconstruct. If you find you need to unpack the full distribution again, don't blame us. To help make users aware of this, the commands for the special @code{maintainer-clean} target should start with these two: @smallexample @@echo 'This command is intended for maintainers to use; it' @@echo 'deletes files that may need special tools to rebuild.' @end smallexample @item TAGS Update a tags table for this program. @c ADR: how? @item info Generate any Info files needed. The best way to write the rules is as follows: @smallexample info: foo.info foo.info: foo.texi chap1.texi chap2.texi $(MAKEINFO) $(srcdir)/foo.texi @end smallexample @noindent You must define the variable @code{MAKEINFO} in the Makefile. It should run the @code{makeinfo} program, which is part of the Texinfo distribution. Normally a GNU distribution comes with Info files, and that means the Info files are present in the source directory. Therefore, the Make rule for an info file should update it in the source directory. When users build the package, ordinarily Make will not update the Info files because they will already be up to date. @item dvi @itemx html @itemx pdf @itemx ps Generate documentation files in the given format, if possible. Here's an example rule for generating DVI files from Texinfo: @smallexample dvi: foo.dvi foo.dvi: foo.texi chap1.texi chap2.texi $(TEXI2DVI) $(srcdir)/foo.texi @end smallexample @noindent You must define the variable @code{TEXI2DVI} in the Makefile. 
It should run the program @code{texi2dvi}, which is part of the Texinfo distribution.@footnote{@code{texi2dvi} uses @TeX{} to do the real work of formatting. @TeX{} is not distributed with Texinfo.} Alternatively, write just the dependencies, and allow GNU @code{make} to provide the command. Here's another example, this one for generating HTML from Texinfo: @smallexample html: foo.html foo.html: foo.texi chap1.texi chap2.texi $(TEXI2HTML) $(srcdir)/foo.texi @end smallexample @noindent Again, you would define the variable @code{TEXI2HTML} in the Makefile; for example, it might run @code{makeinfo --no-split --html} (@command{makeinfo} is part of the Texinfo distribution). @item dist Create a distribution tar file for this program. The tar file should be set up so that the file names in the tar file start with a subdirectory name which is the name of the package it is a distribution for. This name can include the version number. For example, the distribution tar file of GCC version 1.40 unpacks into a subdirectory named @file{gcc-1.40}. The easiest way to do this is to create a subdirectory appropriately named, use @code{ln} or @code{cp} to install the proper files in it, and then @code{tar} that subdirectory. Compress the tar file with @code{gzip}. For example, the actual distribution file for GCC version 1.40 is called @file{gcc-1.40.tar.gz}. The @code{dist} target should explicitly depend on all non-source files that are in the distribution, to make sure they are up to date in the distribution. @ifset CODESTD @xref{Releases, , Making Releases}. @end ifset @ifclear CODESTD @xref{Releases, , Making Releases, standards, GNU Coding Standards}. @end ifclear @item check Perform self-tests (if any). The user must build the program before running the tests, but need not install the program; you should write the self-tests so that they work when the program is built but not installed. @end table The following targets are suggested as conventional names, for programs in which they are useful. @table @code @item installcheck Perform installation tests (if any). The user must build and install the program before running the tests. You should not assume that @file{$(bindir)} is in the search path. @item installdirs It's useful to add a target named @samp{installdirs} to create the directories where files are installed, and their parent directories. There is a script called @file{mkinstalldirs} which is convenient for this; you can find it in the Texinfo package. @c It's in /gd/gnu/lib/mkinstalldirs. You can use a rule like this: @comment This has been carefully formatted to look decent in the Make manual. @comment Please be sure not to make it extend any further to the right.--roland @smallexample # Make sure all installation directories (e.g. $(bindir)) # actually exist by making them if necessary. installdirs: mkinstalldirs $(srcdir)/mkinstalldirs $(bindir) $(datadir) \ $(libdir) $(infodir) \ $(mandir) @end smallexample @noindent or, if you wish to support @env{DESTDIR}, @smallexample # Make sure all installation directories (e.g. $(bindir)) # actually exist by making them if necessary. installdirs: mkinstalldirs $(srcdir)/mkinstalldirs \ $(DESTDIR)$(bindir) $(DESTDIR)$(datadir) \ $(DESTDIR)$(libdir) $(DESTDIR)$(infodir) \ $(DESTDIR)$(mandir) @end smallexample This rule should not modify the directories where compilation is done. It should do nothing but create installation directories. 
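One common arrangement, shown here only as a sketch with a made-up program name, is to have the @code{install} rule depend on @code{installdirs} so that the directories exist before any files are copied:

@smallexample
install: all installdirs
        $(INSTALL_PROGRAM) foo $(DESTDIR)$(bindir)/foo
@end smallexample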
@end table @node Install Command Categories @section Install Command Categories @cindex pre-installation commands @cindex post-installation commands When writing the @code{install} target, you must classify all the commands into three categories: normal ones, @dfn{pre-installation} commands and @dfn{post-installation} commands. Normal commands move files into their proper places, and set their modes. They may not alter any files except the ones that come entirely from the package they belong to. Pre-installation and post-installation commands may alter other files; in particular, they can edit global configuration files or data bases. Pre-installation commands are typically executed before the normal commands, and post-installation commands are typically run after the normal commands. The most common use for a post-installation command is to run @code{install-info}. This cannot be done with a normal command, since it alters a file (the Info directory) which does not come entirely and solely from the package being installed. It is a post-installation command because it needs to be done after the normal command which installs the package's Info files. Most programs don't need any pre-installation commands, but we have the feature just in case it is needed. To classify the commands in the @code{install} rule into these three categories, insert @dfn{category lines} among them. A category line specifies the category for the commands that follow. A category line consists of a tab and a reference to a special Make variable, plus an optional comment at the end. There are three variables you can use, one for each category; the variable name specifies the category. Category lines are no-ops in ordinary execution because these three Make variables are normally undefined (and you @emph{should not} define them in the makefile). Here are the three possible category lines, each with a comment that explains what it means: @smallexample $(PRE_INSTALL) # @r{Pre-install commands follow.} $(POST_INSTALL) # @r{Post-install commands follow.} $(NORMAL_INSTALL) # @r{Normal commands follow.} @end smallexample If you don't use a category line at the beginning of the @code{install} rule, all the commands are classified as normal until the first category line. If you don't use any category lines, all the commands are classified as normal. These are the category lines for @code{uninstall}: @smallexample $(PRE_UNINSTALL) # @r{Pre-uninstall commands follow.} $(POST_UNINSTALL) # @r{Post-uninstall commands follow.} $(NORMAL_UNINSTALL) # @r{Normal commands follow.} @end smallexample Typically, a pre-uninstall command would be used for deleting entries from the Info directory. If the @code{install} or @code{uninstall} target has any dependencies which act as subroutines of installation, then you should start @emph{each} dependency's commands with a category line, and start the main target's commands with a category line also. This way, you can ensure that each command is placed in the right category regardless of which of the dependencies actually run. 
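To illustrate that last point (a sketch only; the target and file names are invented), a main @code{install} rule with one subroutine dependency might begin each sequence of commands with a category line:

@smallexample
install: install-data
        $(NORMAL_INSTALL)
        $(INSTALL_PROGRAM) foo $(DESTDIR)$(bindir)/foo

install-data:
        $(NORMAL_INSTALL)
        $(INSTALL_DATA) foo.info $(DESTDIR)$(infodir)/foo.info
        $(POST_INSTALL)
        install-info --dir-file=$(DESTDIR)$(infodir)/dir \
                     $(DESTDIR)$(infodir)/foo.info
@end smallexample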
Pre-installation and post-installation commands should not run any programs except for these: @example [ basename bash cat chgrp chmod chown cmp cp dd diff echo egrep expand expr false fgrep find getopt grep gunzip gzip hostname install install-info kill ldconfig ln ls md5sum mkdir mkfifo mknod mv printenv pwd rm rmdir sed sort tee test touch true uname xargs yes @end example @cindex binary packages The reason for distinguishing the commands in this way is for the sake of making binary packages. Typically a binary package contains all the executables and other files that need to be installed, and has its own method of installing them---so it does not need to run the normal installation commands. But installing the binary package does need to execute the pre-installation and post-installation commands. Programs to build binary packages work by extracting the pre-installation and post-installation commands. Here is one way of extracting the pre-installation commands (the @option{-s} option to @command{make} is needed to silence messages about entering subdirectories): @smallexample make -s -n install -o all \ PRE_INSTALL=pre-install \ POST_INSTALL=post-install \ NORMAL_INSTALL=normal-install \ | gawk -f pre-install.awk @end smallexample @noindent where the file @file{pre-install.awk} could contain this: @smallexample $0 ~ /^(normal-install|post-install)[ \t]*$/ @{on = 0@} on @{print $0@} $0 ~ /^pre-install[ \t]*$/ @{on = 1@} @end smallexample make-doc-non-dfsg-3.81.orig/doc/make.info0000644000175000017500000001250210416557457020424 0ustar srivastasrivastaThis is make.info, produced by makeinfo version 4.8 from make.texi. This file documents the GNU `make' utility, which determines automatically which pieces of a large program need to be recompiled, and issues the commands to recompile them. This is Edition 0.70, last updated 1 April 2006, of `The GNU Make Manual', for GNU `make' version 3.81. Copyright (C) 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, with the Front-Cover Texts being "A GNU Manual," and with the Back-Cover Texts as in (a) below. A copy of the license is included in the section entitled "GNU Free Documentation License." (a) The FSF's Back-Cover Text is: "You have freedom to copy and modify this GNU Manual, like GNU software. Copies published by the Free Software Foundation raise funds for GNU development." INFO-DIR-SECTION GNU Packages START-INFO-DIR-ENTRY * Make: (make). Remake files automatically. 
END-INFO-DIR-ENTRY  Indirect: make.info-1: 1297 make.info-2: 301265  Tag Table: (Indirect) Node: Top1297 Node: Overview14702 Node: Preparing15712 Node: Reading16684 Node: Bugs17611 Node: Introduction19441 Node: Rule Introduction21033 Node: Simple Makefile22777 Node: How Make Works26406 Node: Variables Simplify29061 Node: make Deduces31267 Node: Combine By Prerequisite33007 Node: Cleanup34036 Node: Makefiles35455 Node: Makefile Contents36421 Node: Makefile Names39376 Node: Include40987 Ref: Include-Footnote-144619 Node: MAKEFILES Variable44753 Node: MAKEFILE_LIST Variable46263 Node: Special Variables47531 Node: Remaking Makefiles51038 Node: Overriding Makefiles55287 Node: Reading Makefiles57340 Node: Secondary Expansion60244 Node: Rules67678 Node: Rule Example70350 Node: Rule Syntax71207 Node: Prerequisite Types73710 Node: Wildcards75486 Node: Wildcard Examples77204 Node: Wildcard Pitfall78460 Node: Wildcard Function80249 Node: Directory Search82033 Node: General Search83175 Node: Selective Search84890 Node: Search Algorithm87878 Node: Commands/Search90397 Node: Implicit/Search91743 Node: Libraries/Search92687 Node: Phony Targets94779 Node: Force Targets99865 Node: Empty Targets100910 Node: Special Targets102208 Node: Multiple Targets109382 Node: Multiple Rules111257 Node: Static Pattern113493 Node: Static Usage114145 Node: Static versus Implicit117866 Node: Double-Colon119610 Node: Automatic Prerequisites121267 Node: Commands125545 Node: Command Syntax126753 Node: Splitting Lines128778 Node: Variables in Commands131759 Node: Echoing133086 Node: Execution134378 Ref: Execution-Footnote-1135629 Node: Choosing the Shell135775 Node: Parallel139744 Node: Errors143337 Node: Interrupts146983 Node: Recursion148570 Node: MAKE Variable150664 Node: Variables/Recursion152931 Node: Options/Recursion158372 Node: -w Option163537 Node: Sequences164532 Node: Empty Commands167544 Node: Using Variables168718 Node: Reference171831 Node: Flavors173390 Node: Advanced179128 Node: Substitution Refs179633 Node: Computed Names181186 Node: Values185730 Node: Setting186643 Node: Appending188679 Node: Override Directive192605 Node: Defining193989 Node: Environment196453 Node: Target-specific198702 Node: Pattern-specific201669 Node: Conditionals203071 Node: Conditional Example203781 Node: Conditional Syntax206358 Node: Testing Flags212083 Node: Functions213185 Node: Syntax of Functions214605 Node: Text Functions216804 Node: File Name Functions225375 Node: Conditional Functions230597 Node: Foreach Function232971 Node: Call Function236183 Node: Value Function239068 Node: Eval Function240505 Node: Origin Function242779 Node: Flavor Function245997 Node: Shell Function247063 Node: Make Control Functions248697 Node: Running250366 Node: Makefile Arguments252355 Node: Goals253071 Node: Instead of Execution257812 Node: Avoiding Compilation261098 Node: Overriding263073 Node: Testing265371 Node: Options Summary267256 Node: Implicit Rules277382 Node: Using Implicit279530 Node: Catalogue of Rules283069 Node: Implicit Variables292419 Node: Chained Rules297254 Node: Pattern Rules301265 Node: Pattern Intro302801 Node: Pattern Examples305698 Node: Automatic Variables307507 Node: Pattern Match314878 Node: Match-Anything Rules316514 Node: Canceling Rules320389 Node: Last Resort321105 Node: Suffix Rules322952 Node: Implicit Rule Search326681 Node: Archives330200 Node: Archive Members330898 Node: Archive Update332511 Node: Archive Symbols334425 Node: Archive Pitfalls335659 Node: Archive Suffix Rules336382 Node: Features337929 Node: 
Missing346484 Node: Makefile Conventions350222 Node: Makefile Basics351008 Node: Utilities in Makefiles354175 Node: Command Variables356313 Node: Directory Variables359883 Node: Standard Targets374023 Ref: Standard Targets-Footnote-1387142 Node: Install Command Categories387242 Node: Quick Reference391768 Node: Error Messages402464 Node: Complex Makefile410154 Node: GNU Free Documentation License418872 Node: Concept Index441321 Node: Name Index506510  End Tag Table make-doc-non-dfsg-3.81.orig/doc/make.info-10000644000175000017500000111432110416557457020565 0ustar srivastasrivastaThis is make.info, produced by makeinfo version 4.8 from make.texi. This file documents the GNU `make' utility, which determines automatically which pieces of a large program need to be recompiled, and issues the commands to recompile them. This is Edition 0.70, last updated 1 April 2006, of `The GNU Make Manual', for GNU `make' version 3.81. Copyright (C) 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, with the Front-Cover Texts being "A GNU Manual," and with the Back-Cover Texts as in (a) below. A copy of the license is included in the section entitled "GNU Free Documentation License." (a) The FSF's Back-Cover Text is: "You have freedom to copy and modify this GNU Manual, like GNU software. Copies published by the Free Software Foundation raise funds for GNU development." INFO-DIR-SECTION GNU Packages START-INFO-DIR-ENTRY * Make: (make). Remake files automatically. END-INFO-DIR-ENTRY  File: make.info, Node: Top, Next: Overview, Prev: (dir), Up: (dir) GNU `make' ********** This file documents the GNU `make' utility, which determines automatically which pieces of a large program need to be recompiled, and issues the commands to recompile them. This is Edition 0.70, last updated 1 April 2006, of `The GNU Make Manual', for GNU `make' version 3.81. Copyright (C) 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, with the Front-Cover Texts being "A GNU Manual," and with the Back-Cover Texts as in (a) below. A copy of the license is included in the section entitled "GNU Free Documentation License." (a) The FSF's Back-Cover Text is: "You have freedom to copy and modify this GNU Manual, like GNU software. Copies published by the Free Software Foundation raise funds for GNU development." * Menu: * Overview:: Overview of `make'. * Introduction:: An introduction to `make'. * Makefiles:: Makefiles tell `make' what to do. * Rules:: Rules describe when a file must be remade. * Commands:: Commands say how to remake a file. * Using Variables:: You can use variables to avoid repetition. * Conditionals:: Use or ignore parts of the makefile based on the values of variables. * Functions:: Many powerful ways to manipulate text. * Invoking make: Running. How to invoke `make' on the command line. * Implicit Rules:: Use implicit rules to treat many files alike, based on their file names. 
* Archives:: How `make' can update library archives. * Features:: Features GNU `make' has over other `make's. * Missing:: What GNU `make' lacks from other `make's. * Makefile Conventions:: Conventions for writing makefiles for GNU programs. * Quick Reference:: A quick reference for experienced users. * Error Messages:: A list of common errors generated by `make'. * Complex Makefile:: A real example of a straightforward, but nontrivial, makefile. * GNU Free Documentation License:: License for copying this manual * Concept Index:: Index of Concepts * Name Index:: Index of Functions, Variables, & Directives --- The Detailed Node Listing --- Overview of `make' * Preparing:: Preparing and Running Make * Reading:: On Reading this Text * Bugs:: Problems and Bugs An Introduction to Makefiles * Rule Introduction:: What a rule looks like. * Simple Makefile:: A Simple Makefile * How Make Works:: How `make' Processes This Makefile * Variables Simplify:: Variables Make Makefiles Simpler * make Deduces:: Letting `make' Deduce the Commands * Combine By Prerequisite:: Another Style of Makefile * Cleanup:: Rules for Cleaning the Directory Writing Makefiles * Makefile Contents:: What makefiles contain. * Makefile Names:: How to name your makefile. * Include:: How one makefile can use another makefile. * MAKEFILES Variable:: The environment can specify extra makefiles. * MAKEFILE_LIST Variable:: Discover which makefiles have been read. * Special Variables:: Other special variables. * Remaking Makefiles:: How makefiles get remade. * Overriding Makefiles:: How to override part of one makefile with another makefile. * Reading Makefiles:: How makefiles are parsed. * Secondary Expansion:: How and when secondary expansion is performed. Writing Rules * Rule Example:: An example explained. * Rule Syntax:: General syntax explained. * Prerequisite Types:: There are two types of prerequisites. * Wildcards:: Using wildcard characters such as `*'. * Directory Search:: Searching other directories for source files. * Phony Targets:: Using a target that is not a real file's name. * Force Targets:: You can use a target without commands or prerequisites to mark other targets as phony. * Empty Targets:: When only the date matters and the files are empty. * Special Targets:: Targets with special built-in meanings. * Multiple Targets:: When to make use of several targets in a rule. * Multiple Rules:: How to use several rules with the same target. * Static Pattern:: Static pattern rules apply to multiple targets and can vary the prerequisites according to the target name. * Double-Colon:: How to use a special kind of rule to allow several independent rules for one target. * Automatic Prerequisites:: How to automatically generate rules giving prerequisites from source files themselves. Using Wildcard Characters in File Names * Wildcard Examples:: Several examples * Wildcard Pitfall:: Problems to avoid. * Wildcard Function:: How to cause wildcard expansion where it does not normally take place. Searching Directories for Prerequisites * General Search:: Specifying a search path that applies to every prerequisite. * Selective Search:: Specifying a search path for a specified class of names. * Search Algorithm:: When and how search paths are applied. * Commands/Search:: How to write shell commands that work together with search paths. * Implicit/Search:: How search paths affect implicit rules. * Libraries/Search:: Directory search for link libraries. Static Pattern Rules * Static Usage:: The syntax of static pattern rules. 
* Static versus Implicit:: When are they better than implicit rules? Writing the Commands in Rules * Command Syntax:: Command syntax features and pitfalls. * Echoing:: How to control when commands are echoed. * Execution:: How commands are executed. * Parallel:: How commands can be executed in parallel. * Errors:: What happens after a command execution error. * Interrupts:: What happens when a command is interrupted. * Recursion:: Invoking `make' from makefiles. * Sequences:: Defining canned sequences of commands. * Empty Commands:: Defining useful, do-nothing commands. Command Syntax * Splitting Lines:: Breaking long command lines for readability. * Variables in Commands:: Using `make' variables in commands. Command Execution * Choosing the Shell:: How `make' chooses the shell used to run commands. Recursive Use of `make' * MAKE Variable:: The special effects of using `$(MAKE)'. * Variables/Recursion:: How to communicate variables to a sub-`make'. * Options/Recursion:: How to communicate options to a sub-`make'. * -w Option:: How the `-w' or `--print-directory' option helps debug use of recursive `make' commands. How to Use Variables * Reference:: How to use the value of a variable. * Flavors:: Variables come in two flavors. * Advanced:: Advanced features for referencing a variable. * Values:: All the ways variables get their values. * Setting:: How to set a variable in the makefile. * Appending:: How to append more text to the old value of a variable. * Override Directive:: How to set a variable in the makefile even if the user has set it with a command argument. * Defining:: An alternate way to set a variable to a verbatim string. * Environment:: Variable values can come from the environment. * Target-specific:: Variable values can be defined on a per-target basis. * Pattern-specific:: Target-specific variable values can be applied to a group of targets that match a pattern. Advanced Features for Reference to Variables * Substitution Refs:: Referencing a variable with substitutions on the value. * Computed Names:: Computing the name of the variable to refer to. Conditional Parts of Makefiles * Conditional Example:: Example of a conditional * Conditional Syntax:: The syntax of conditionals. * Testing Flags:: Conditionals that test flags. Functions for Transforming Text * Syntax of Functions:: How to write a function call. * Text Functions:: General-purpose text manipulation functions. * File Name Functions:: Functions for manipulating file names. * Conditional Functions:: Functions that implement conditions. * Foreach Function:: Repeat some text with controlled variation. * Call Function:: Expand a user-defined function. * Value Function:: Return the un-expanded value of a variable. * Eval Function:: Evaluate the arguments as makefile syntax. * Origin Function:: Find where a variable got its value. * Flavor Function:: Find out the flavor of a variable. * Shell Function:: Substitute the output of a shell command. * Make Control Functions:: Functions that control how make runs. How to Run `make' * Makefile Arguments:: How to specify which makefile to use. * Goals:: How to use goal arguments to specify which parts of the makefile to use. * Instead of Execution:: How to use mode flags to specify what kind of thing to do with the commands in the makefile other than simply execute them. * Avoiding Compilation:: How to avoid recompiling certain files. * Overriding:: How to override a variable to specify an alternate compiler and other things. 
* Testing:: How to proceed past some errors, to test compilation. * Options Summary:: Summary of Options Using Implicit Rules * Using Implicit:: How to use an existing implicit rule to get the commands for updating a file. * Catalogue of Rules:: A list of built-in implicit rules. * Implicit Variables:: How to change what predefined rules do. * Chained Rules:: How to use a chain of implicit rules. * Pattern Rules:: How to define new implicit rules. * Last Resort:: How to define commands for rules which cannot find any. * Suffix Rules:: The old-fashioned style of implicit rule. * Implicit Rule Search:: The precise algorithm for applying implicit rules. Defining and Redefining Pattern Rules * Pattern Intro:: An introduction to pattern rules. * Pattern Examples:: Examples of pattern rules. * Automatic Variables:: How to use automatic variables in the commands of implicit rules. * Pattern Match:: How patterns match. * Match-Anything Rules:: Precautions you should take prior to defining rules that can match any target file whatever. * Canceling Rules:: How to override or cancel built-in rules. Using `make' to Update Archive Files * Archive Members:: Archive members as targets. * Archive Update:: The implicit rule for archive member targets. * Archive Pitfalls:: Dangers to watch out for when using archives. * Archive Suffix Rules:: You can write a special kind of suffix rule for updating archives. Implicit Rule for Archive Member Targets * Archive Symbols:: How to update archive symbol directories.  File: make.info, Node: Overview, Next: Introduction, Prev: Top, Up: Top 1 Overview of `make' ******************** The `make' utility automatically determines which pieces of a large program need to be recompiled, and issues commands to recompile them. This manual describes GNU `make', which was implemented by Richard Stallman and Roland McGrath. Development since Version 3.76 has been handled by Paul D. Smith. GNU `make' conforms to section 6.2 of `IEEE Standard 1003.2-1992' (POSIX.2). Our examples show C programs, since they are most common, but you can use `make' with any programming language whose compiler can be run with a shell command. Indeed, `make' is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change. * Menu: * Preparing:: Preparing and Running Make * Reading:: On Reading this Text * Bugs:: Problems and Bugs  File: make.info, Node: Preparing, Next: Reading, Prev: Overview, Up: Overview Preparing and Running Make ========================== To prepare to use `make', you must write a file called the "makefile" that describes the relationships among files in your program and provides commands for updating each file. In a program, typically, the executable file is updated from object files, which are in turn made by compiling source files. Once a suitable makefile exists, each time you change some source files, this simple shell command: make suffices to perform all necessary recompilations. The `make' program uses the makefile data base and the last-modification times of the files to decide which of the files need to be updated. For each of those files, it issues the commands recorded in the data base. You can provide command line arguments to `make' to control which files should be recompiled, or how. *Note How to Run `make': Running.  
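For instance, these invocations are a small sketch of that kind of control (the target and variable names here are only illustrative; the options themselves are described in later chapters):

     make main.o            # update only `main.o' and what it depends on
     make CC=gcc            # rebuild as needed, overriding the `CC' variable
     make -n                # show the commands that would run, without running them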
File: make.info, Node: Reading, Next: Bugs, Prev: Preparing, Up: Overview 1.1 How to Read This Manual =========================== If you are new to `make', or are looking for a general introduction, read the first few sections of each chapter, skipping the later sections. In each chapter, the first few sections contain introductory or general information and the later sections contain specialized or technical information. The exception is the second chapter, *Note An Introduction to Makefiles: Introduction, all of which is introductory. If you are familiar with other `make' programs, see *Note Features of GNU `make': Features, which lists the enhancements GNU `make' has, and *Note Incompatibilities and Missing Features: Missing, which explains the few things GNU `make' lacks that others have. For a quick summary, see *Note Options Summary::, *Note Quick Reference::, and *Note Special Targets::.  File: make.info, Node: Bugs, Prev: Reading, Up: Overview 1.2 Problems and Bugs ===================== If you have problems with GNU `make' or think you've found a bug, please report it to the developers; we cannot promise to do anything but we might well want to fix it. Before reporting a bug, make sure you've actually found a real bug. Carefully reread the documentation and see if it really says you can do what you're trying to do. If it's not clear whether you should be able to do something or not, report that too; it's a bug in the documentation! Before reporting a bug or trying to fix it yourself, try to isolate it to the smallest possible makefile that reproduces the problem. Then send us the makefile and the exact results `make' gave you, including any error or warning messages. Please don't paraphrase these messages: it's best to cut and paste them into your report. When generating this small makefile, be sure to not use any non-free or unusual tools in your commands: you can almost always emulate what such a tool would do with simple shell commands. Finally, be sure to explain what you expected to occur; this will help us decide whether the problem was really in the documentation. Once you have a precise problem you can report it in one of two ways. Either send electronic mail to: bug-make@gnu.org or use our Web-based project management tool, at: http://savannah.gnu.org/projects/make/ In addition to the information above, please be careful to include the version number of `make' you are using. You can get this information with the command `make --version'. Be sure also to include the type of machine and operating system you are using. One way to obtain this information is by looking at the final lines of output from the command `make --help'.  File: make.info, Node: Introduction, Next: Makefiles, Prev: Overview, Up: Top 2 An Introduction to Makefiles ****************************** You need a file called a "makefile" to tell `make' what to do. Most often, the makefile tells `make' how to compile and link a program. In this chapter, we will discuss a simple makefile that describes how to compile and link a text editor which consists of eight C source files and three header files. The makefile can also tell `make' how to run miscellaneous commands when explicitly asked (for example, to remove certain files as a clean-up operation). To see a more complex example of a makefile, see *Note Complex Makefile::. When `make' recompiles the editor, each changed C source file must be recompiled. If a header file has changed, each C source file that includes the header file must be recompiled to be safe. 
Each compilation produces an object file corresponding to the source file. Finally, if any source file has been recompiled, all the object files, whether newly made or saved from previous compilations, must be linked together to produce the new executable editor. * Menu: * Rule Introduction:: What a rule looks like. * Simple Makefile:: A Simple Makefile * How Make Works:: How `make' Processes This Makefile * Variables Simplify:: Variables Make Makefiles Simpler * make Deduces:: Letting `make' Deduce the Commands * Combine By Prerequisite:: Another Style of Makefile * Cleanup:: Rules for Cleaning the Directory  File: make.info, Node: Rule Introduction, Next: Simple Makefile, Prev: Introduction, Up: Introduction 2.1 What a Rule Looks Like ========================== A simple makefile consists of "rules" with the following shape: TARGET ... : PREREQUISITES ... COMMAND ... ... A "target" is usually the name of a file that is generated by a program; examples of targets are executable or object files. A target can also be the name of an action to carry out, such as `clean' (*note Phony Targets::). A "prerequisite" is a file that is used as input to create the target. A target often depends on several files. A "command" is an action that `make' carries out. A rule may have more than one command, each on its own line. *Please note:* you need to put a tab character at the beginning of every command line! This is an obscurity that catches the unwary. Usually a command is in a rule with prerequisites and serves to create a target file if any of the prerequisites change. However, the rule that specifies commands for the target need not have prerequisites. For example, the rule containing the delete command associated with the target `clean' does not have prerequisites. A "rule", then, explains how and when to remake certain files which are the targets of the particular rule. `make' carries out the commands on the prerequisites to create or update the target. A rule can also explain how and when to carry out an action. *Note Writing Rules: Rules. A makefile may contain other text besides rules, but a simple makefile need only contain rules. Rules may look somewhat more complicated than shown in this template, but all fit the pattern more or less.  File: make.info, Node: Simple Makefile, Next: How Make Works, Prev: Rule Introduction, Up: Introduction 2.2 A Simple Makefile ===================== Here is a straightforward makefile that describes the way an executable file called `edit' depends on eight object files which, in turn, depend on eight C source and three header files. In this example, all the C files include `defs.h', but only those defining editing commands include `command.h', and only low level files that change the editor buffer include `buffer.h'. edit : main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o cc -o edit main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o main.o : main.c defs.h cc -c main.c kbd.o : kbd.c defs.h command.h cc -c kbd.c command.o : command.c defs.h command.h cc -c command.c display.o : display.c defs.h buffer.h cc -c display.c insert.o : insert.c defs.h buffer.h cc -c insert.c search.o : search.c defs.h buffer.h cc -c search.c files.o : files.c defs.h buffer.h command.h cc -c files.c utils.o : utils.c defs.h cc -c utils.c clean : rm edit main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o We split each long line into two lines using backslash-newline; this is like using one long line, but is easier to read. 
To use this makefile to create the executable file called `edit', type: make To use this makefile to delete the executable file and all the object files from the directory, type: make clean In the example makefile, the targets include the executable file `edit', and the object files `main.o' and `kbd.o'. The prerequisites are files such as `main.c' and `defs.h'. In fact, each `.o' file is both a target and a prerequisite. Commands include `cc -c main.c' and `cc -c kbd.c'. When a target is a file, it needs to be recompiled or relinked if any of its prerequisites change. In addition, any prerequisites that are themselves automatically generated should be updated first. In this example, `edit' depends on each of the eight object files; the object file `main.o' depends on the source file `main.c' and on the header file `defs.h'. A shell command follows each line that contains a target and prerequisites. These shell commands say how to update the target file. A tab character must come at the beginning of every command line to distinguish command lines from other lines in the makefile. (Bear in mind that `make' does not know anything about how the commands work. It is up to you to supply commands that will update the target file properly. All `make' does is execute the commands in the rule you have specified when the target file needs to be updated.) The target `clean' is not a file, but merely the name of an action. Since you normally do not want to carry out the actions in this rule, `clean' is not a prerequisite of any other rule. Consequently, `make' never does anything with it unless you tell it specifically. Note that this rule not only is not a prerequisite, it also does not have any prerequisites, so the only purpose of the rule is to run the specified commands. Targets that do not refer to files but are just actions are called "phony targets". *Note Phony Targets::, for information about this kind of target. *Note Errors in Commands: Errors, to see how to cause `make' to ignore errors from `rm' or any other command.  File: make.info, Node: How Make Works, Next: Variables Simplify, Prev: Simple Makefile, Up: Introduction 2.3 How `make' Processes a Makefile =================================== By default, `make' starts with the first target (not targets whose names start with `.'). This is called the "default goal". ("Goals" are the targets that `make' strives ultimately to update. You can override this behavior using the command line (*note Arguments to Specify the Goals: Goals.) or with the `.DEFAULT_GOAL' special variable (*note Other Special Variables: Special Variables.). In the simple example of the previous section, the default goal is to update the executable program `edit'; therefore, we put that rule first. Thus, when you give the command: make `make' reads the makefile in the current directory and begins by processing the first rule. In the example, this rule is for relinking `edit'; but before `make' can fully process this rule, it must process the rules for the files that `edit' depends on, which in this case are the object files. Each of these files is processed according to its own rule. These rules say to update each `.o' file by compiling its source file. The recompilation must be done if the source file, or any of the header files named as prerequisites, is more recent than the object file, or if the object file does not exist. The other rules are processed because their targets appear as prerequisites of the goal. 
If some other rule is not depended on by the goal (or anything it depends on, etc.), that rule is not processed, unless you tell `make' to do so (with a command such as `make clean'). Before recompiling an object file, `make' considers updating its prerequisites, the source file and header files. This makefile does not specify anything to be done for them--the `.c' and `.h' files are not the targets of any rules--so `make' does nothing for these files. But `make' would update automatically generated C programs, such as those made by Bison or Yacc, by their own rules at this time. After recompiling whichever object files need it, `make' decides whether to relink `edit'. This must be done if the file `edit' does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than `edit', so `edit' is relinked. Thus, if we change the file `insert.c' and run `make', `make' will compile that file to update `insert.o', and then link `edit'. If we change the file `command.h' and run `make', `make' will recompile the object files `kbd.o', `command.o' and `files.o' and then link the file `edit'.  File: make.info, Node: Variables Simplify, Next: make Deduces, Prev: How Make Works, Up: Introduction 2.4 Variables Make Makefiles Simpler ==================================== In our example, we had to list all the object files twice in the rule for `edit' (repeated here): edit : main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o cc -o edit main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o Such duplication is error-prone; if a new object file is added to the system, we might add it to one list and forget the other. We can eliminate the risk and simplify the makefile by using a variable. "Variables" allow a text string to be defined once and substituted in multiple places later (*note How to Use Variables: Using Variables.). It is standard practice for every makefile to have a variable named `objects', `OBJECTS', `objs', `OBJS', `obj', or `OBJ' which is a list of all object file names. We would define such a variable `objects' with a line like this in the makefile: objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o Then, each place we want to put a list of the object file names, we can substitute the variable's value by writing `$(objects)' (*note How to Use Variables: Using Variables.). Here is how the complete simple makefile looks when you use a variable for the object files: objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o edit : $(objects) cc -o edit $(objects) main.o : main.c defs.h cc -c main.c kbd.o : kbd.c defs.h command.h cc -c kbd.c command.o : command.c defs.h command.h cc -c command.c display.o : display.c defs.h buffer.h cc -c display.c insert.o : insert.c defs.h buffer.h cc -c insert.c search.o : search.c defs.h buffer.h cc -c search.c files.o : files.c defs.h buffer.h command.h cc -c files.c utils.o : utils.c defs.h cc -c utils.c clean : rm edit $(objects)  File: make.info, Node: make Deduces, Next: Combine By Prerequisite, Prev: Variables Simplify, Up: Introduction 2.5 Letting `make' Deduce the Commands ====================================== It is not necessary to spell out the commands for compiling the individual C source files, because `make' can figure them out: it has an "implicit rule" for updating a `.o' file from a correspondingly named `.c' file using a `cc -c' command. 
For example, it will use the command `cc -c main.c -o main.o' to compile `main.c' into `main.o'. We can therefore omit the commands from the rules for the object files. *Note Using Implicit Rules: Implicit Rules. When a `.c' file is used automatically in this way, it is also automatically added to the list of prerequisites. We can therefore omit the `.c' files from the prerequisites, provided we omit the commands. Here is the entire example, with both of these changes, and a variable `objects' as suggested above: objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o edit : $(objects) cc -o edit $(objects) main.o : defs.h kbd.o : defs.h command.h command.o : defs.h command.h display.o : defs.h buffer.h insert.o : defs.h buffer.h search.o : defs.h buffer.h files.o : defs.h buffer.h command.h utils.o : defs.h .PHONY : clean clean : rm edit $(objects) This is how we would write the makefile in actual practice. (The complications associated with `clean' are described elsewhere. See *Note Phony Targets::, and *Note Errors in Commands: Errors.) Because implicit rules are so convenient, they are important. You will see them used frequently.  File: make.info, Node: Combine By Prerequisite, Next: Cleanup, Prev: make Deduces, Up: Introduction 2.6 Another Style of Makefile ============================= When the objects of a makefile are created only by implicit rules, an alternative style of makefile is possible. In this style of makefile, you group entries by their prerequisites instead of by their targets. Here is what one looks like: objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o edit : $(objects) cc -o edit $(objects) $(objects) : defs.h kbd.o command.o files.o : command.h display.o insert.o search.o files.o : buffer.h Here `defs.h' is given as a prerequisite of all the object files; `command.h' and `buffer.h' are prerequisites of the specific object files listed for them. Whether this is better is a matter of taste: it is more compact, but some people dislike it because they find it clearer to put all the information about each target in one place.  File: make.info, Node: Cleanup, Prev: Combine By Prerequisite, Up: Introduction 2.7 Rules for Cleaning the Directory ==================================== Compiling a program is not the only thing you might want to write rules for. Makefiles commonly tell how to do a few other things besides compiling a program: for example, how to delete all the object files and executables so that the directory is `clean'. Here is how we could write a `make' rule for cleaning our example editor: clean: rm edit $(objects) In practice, we might want to write the rule in a somewhat more complicated manner to handle unanticipated situations. We would do this: .PHONY : clean clean : -rm edit $(objects) This prevents `make' from getting confused by an actual file called `clean' and causes it to continue in spite of errors from `rm'. (See *Note Phony Targets::, and *Note Errors in Commands: Errors.) A rule such as this should not be placed at the beginning of the makefile, because we do not want it to run by default! Thus, in the example makefile, we want the rule for `edit', which recompiles the editor, to remain the default goal. Since `clean' is not a prerequisite of `edit', this rule will not run at all if we give the command `make' with no arguments. In order to make the rule run, we have to type `make clean'. *Note How to Run `make': Running.  
File: make.info, Node: Makefiles, Next: Rules, Prev: Introduction, Up: Top 3 Writing Makefiles ******************* The information that tells `make' how to recompile a system comes from reading a data base called the "makefile". * Menu: * Makefile Contents:: What makefiles contain. * Makefile Names:: How to name your makefile. * Include:: How one makefile can use another makefile. * MAKEFILES Variable:: The environment can specify extra makefiles. * MAKEFILE_LIST Variable:: Discover which makefiles have been read. * Special Variables:: Other special variables. * Remaking Makefiles:: How makefiles get remade. * Overriding Makefiles:: How to override part of one makefile with another makefile. * Reading Makefiles:: How makefiles are parsed. * Secondary Expansion:: How and when secondary expansion is performed.  File: make.info, Node: Makefile Contents, Next: Makefile Names, Prev: Makefiles, Up: Makefiles 3.1 What Makefiles Contain ========================== Makefiles contain five kinds of things: "explicit rules", "implicit rules", "variable definitions", "directives", and "comments". Rules, variables, and directives are described at length in later chapters. * An "explicit rule" says when and how to remake one or more files, called the rule's "targets". It lists the other files that the targets depend on, called the "prerequisites" of the target, and may also give commands to use to create or update the targets. *Note Writing Rules: Rules. * An "implicit rule" says when and how to remake a class of files based on their names. It describes how a target may depend on a file with a name similar to the target and gives commands to create or update such a target. *Note Using Implicit Rules: Implicit Rules. * A "variable definition" is a line that specifies a text string value for a variable that can be substituted into the text later. The simple makefile example shows a variable definition for `objects' as a list of all object files (*note Variables Make Makefiles Simpler: Variables Simplify.). * A "directive" is a command for `make' to do something special while reading the makefile. These include: * Reading another makefile (*note Including Other Makefiles: Include.). * Deciding (based on the values of variables) whether to use or ignore a part of the makefile (*note Conditional Parts of Makefiles: Conditionals.). * Defining a variable from a verbatim string containing multiple lines (*note Defining Variables Verbatim: Defining.). * `#' in a line of a makefile starts a "comment". It and the rest of the line are ignored, except that a trailing backslash not escaped by another backslash will continue the comment across multiple lines. A line containing just a comment (with perhaps spaces before it) is effectively blank, and is ignored. If you want a literal `#', escape it with a backslash (e.g., `\#'). Comments may appear on any line in the makefile, although they are treated specially in certain situations. Within a command script (if the line begins with a TAB character) the entire line is passed to the shell, just as with any other line that begins with a TAB. The shell decides how to interpret the text: whether or not this is a comment is up to the shell. Within a `define' directive, comments are not ignored during the definition of the variable, but rather kept intact in the value of the variable. When the variable is expanded they will either be treated as `make' comments or as command script text, depending on the context in which the variable is evaluated.  
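As a small sketch of the comment rules just described (the variable names are only illustrative), consider:

     # This entire line is a comment and is ignored by make.
     report := notes\#1.txt        # the value of `report' is `notes#1.txt'

     define show-notes
     # this comment is kept as part of the value of `show-notes'
     echo making notes
     endef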
File: make.info, Node: Makefile Names, Next: Include, Prev: Makefile Contents, Up: Makefiles 3.2 What Name to Give Your Makefile =================================== By default, when `make' looks for the makefile, it tries the following names, in order: `GNUmakefile', `makefile' and `Makefile'. Normally you should call your makefile either `makefile' or `Makefile'. (We recommend `Makefile' because it appears prominently near the beginning of a directory listing, right near other important files such as `README'.) The first name checked, `GNUmakefile', is not recommended for most makefiles. You should use this name if you have a makefile that is specific to GNU `make', and will not be understood by other versions of `make'. Other `make' programs look for `makefile' and `Makefile', but not `GNUmakefile'. If `make' finds none of these names, it does not use any makefile. Then you must specify a goal with a command argument, and `make' will attempt to figure out how to remake it using only its built-in implicit rules. *Note Using Implicit Rules: Implicit Rules. If you want to use a nonstandard name for your makefile, you can specify the makefile name with the `-f' or `--file' option. The arguments `-f NAME' or `--file=NAME' tell `make' to read the file NAME as the makefile. If you use more than one `-f' or `--file' option, you can specify several makefiles. All the makefiles are effectively concatenated in the order specified. The default makefile names `GNUmakefile', `makefile' and `Makefile' are not checked automatically if you specify `-f' or `--file'.  File: make.info, Node: Include, Next: MAKEFILES Variable, Prev: Makefile Names, Up: Makefiles 3.3 Including Other Makefiles ============================= The `include' directive tells `make' to suspend reading the current makefile and read one or more other makefiles before continuing. The directive is a line in the makefile that looks like this: include FILENAMES... FILENAMES can contain shell file name patterns. If FILENAMES is empty, nothing is included and no error is printed. Extra spaces are allowed and ignored at the beginning of the line, but a tab is not allowed. (If the line begins with a tab, it will be considered a command line.) Whitespace is required between `include' and the file names, and between file names; extra whitespace is ignored there and at the end of the directive. A comment starting with `#' is allowed at the end of the line. If the file names contain any variable or function references, they are expanded. *Note How to Use Variables: Using Variables. For example, if you have three `.mk' files, `a.mk', `b.mk', and `c.mk', and `$(bar)' expands to `bish bash', then the following expression include foo *.mk $(bar) is equivalent to include foo a.mk b.mk c.mk bish bash When `make' processes an `include' directive, it suspends reading of the containing makefile and reads from each listed file in turn. When that is finished, `make' resumes reading the makefile in which the directive appears. One occasion for using `include' directives is when several programs, handled by individual makefiles in various directories, need to use a common set of variable definitions (*note Setting Variables: Setting.) or pattern rules (*note Defining and Redefining Pattern Rules: Pattern Rules.). Another such occasion is when you want to generate prerequisites from source files automatically; the prerequisites can be put in a file that is included by the main makefile. 
This practice is generally cleaner than that of somehow appending the prerequisites to the end of the main makefile as has been traditionally done with other versions of `make'. *Note Automatic Prerequisites::. If the specified name does not start with a slash, and the file is not found in the current directory, several other directories are searched. First, any directories you have specified with the `-I' or `--include-dir' option are searched (*note Summary of Options: Options Summary.). Then the following directories (if they exist) are searched, in this order: `PREFIX/include' (normally `/usr/local/include' (1)) `/usr/gnu/include', `/usr/local/include', `/usr/include'. If an included makefile cannot be found in any of these directories, a warning message is generated, but it is not an immediately fatal error; processing of the makefile containing the `include' continues. Once it has finished reading makefiles, `make' will try to remake any that are out of date or don't exist. *Note How Makefiles Are Remade: Remaking Makefiles. Only after it has tried to find a way to remake a makefile and failed, will `make' diagnose the missing makefile as a fatal error. If you want `make' to simply ignore a makefile which does not exist and cannot be remade, with no error message, use the `-include' directive instead of `include', like this: -include FILENAMES... This acts like `include' in every way except that there is no error (not even a warning) if any of the FILENAMES do not exist. For compatibility with some other `make' implementations, `sinclude' is another name for `-include'. ---------- Footnotes ---------- (1) GNU Make compiled for MS-DOS and MS-Windows behaves as if PREFIX has been defined to be the root of the DJGPP tree hierarchy.  File: make.info, Node: MAKEFILES Variable, Next: MAKEFILE_LIST Variable, Prev: Include, Up: Makefiles 3.4 The Variable `MAKEFILES' ============================ If the environment variable `MAKEFILES' is defined, `make' considers its value as a list of names (separated by whitespace) of additional makefiles to be read before the others. This works much like the `include' directive: various directories are searched for those files (*note Including Other Makefiles: Include.). In addition, the default goal is never taken from one of these makefiles and it is not an error if the files listed in `MAKEFILES' are not found. The main use of `MAKEFILES' is in communication between recursive invocations of `make' (*note Recursive Use of `make': Recursion.). It usually is not desirable to set the environment variable before a top-level invocation of `make', because it is usually better not to mess with a makefile from outside. However, if you are running `make' without a specific makefile, a makefile in `MAKEFILES' can do useful things to help the built-in implicit rules work better, such as defining search paths (*note Directory Search::). Some users are tempted to set `MAKEFILES' in the environment automatically on login, and program makefiles to expect this to be done. This is a very bad idea, because such makefiles will fail to work if run by anyone else. It is much better to write explicit `include' directives in the makefiles. *Note Including Other Makefiles: Include.  
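As an illustration of the makefile-less use described above (a sketch in `sh' syntax; the helper file `dir-defaults.mk' and its contents are hypothetical, not part of this manual), a search path can be supplied to the built-in implicit rules like this:

     # dir-defaults.mk contains a single line:  VPATH = src
     MAKEFILES=dir-defaults.mk
     export MAKEFILES
     make prog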
File: make.info, Node: MAKEFILE_LIST Variable, Next: Special Variables, Prev: MAKEFILES Variable, Up: Makefiles 3.5 The Variable `MAKEFILE_LIST' ================================ As `make' reads various makefiles, including any obtained from the `MAKEFILES' variable, the command line, the default files, or from `include' directives, their names will be automatically appended to the `MAKEFILE_LIST' variable. They are added right before `make' begins to parse them. This means that if the first thing a makefile does is examine the last word in this variable, it will be the name of the current makefile. Once the current makefile has used `include', however, the last word will be the just-included makefile. If a makefile named `Makefile' has this content: name1 := $(lastword $(MAKEFILE_LIST)) include inc.mk name2 := $(lastword $(MAKEFILE_LIST)) all: @echo name1 = $(name1) @echo name2 = $(name2) then you would expect to see this output: name1 = Makefile name2 = inc.mk *Note Text Functions::, for more information on the `word' and `words' functions used above. *Note The Two Flavors of Variables: Flavors, for more information on simply-expanded (`:=') variable definitions.  File: make.info, Node: Special Variables, Next: Remaking Makefiles, Prev: MAKEFILE_LIST Variable, Up: Makefiles 3.6 Other Special Variables =========================== GNU `make' also supports other special variables. Unless otherwise documented here, these values lose their special properties if they are set by a makefile or on the command line. `.DEFAULT_GOAL' Sets the default goal to be used if no targets were specified on the command line (*note Arguments to Specify the Goals: Goals.). The `.DEFAULT_GOAL' variable allows you to discover the current default goal, restart the default goal selection algorithm by clearing its value, or to explicitly set the default goal. The following example illustrates these cases: # Query the default goal. ifeq ($(.DEFAULT_GOAL),) $(warning no default goal is set) endif .PHONY: foo foo: ; @echo $@ $(warning default goal is $(.DEFAULT_GOAL)) # Reset the default goal. .DEFAULT_GOAL := .PHONY: bar bar: ; @echo $@ $(warning default goal is $(.DEFAULT_GOAL)) # Set our own. .DEFAULT_GOAL := foo This makefile prints: no default goal is set default goal is foo default goal is bar foo Note that assigning more than one target name to `.DEFAULT_GOAL' is illegal and will result in an error. `MAKE_RESTARTS' This variable is set only if this instance of `make' has restarted (*note How Makefiles Are Remade: Remaking Makefiles.): it will contain the number of times this instance has restarted. Note this is not the same as recursion (counted by the `MAKELEVEL' variable). You should not set, modify, or export this variable. `.VARIABLES' Expands to a list of the _names_ of all global variables defined so far. This includes variables which have empty values, as well as built-in variables (*note Variables Used by Implicit Rules: Implicit Variables.), but does not include any variables which are only defined in a target-specific context. Note that any value you assign to this variable will be ignored; it will always return its special value. `.FEATURES' Expands to a list of special features supported by this version of `make'. Possible values include: `archives' Supports `ar' (archive) files using special filename syntax. *Note Using `make' to Update Archive Files: Archives. `check-symlink' Supports the `-L' (`--check-symlink-times') flag. *Note Summary of Options: Options Summary. 
`else-if' Supports "else if" non-nested conditionals. *Note Syntax of Conditionals: Conditional Syntax. `jobserver' Supports "job server" enhanced parallel builds. *Note Parallel Execution: Parallel. `second-expansion' Supports secondary expansion of prerequisite lists. `order-only' Supports order-only prerequisites. *Note Types of Prerequisites: Prerequisite Types. `target-specific' Supports target-specific and pattern-specific variable assignments. *Note Target-specific Variable Values: Target-specific. `.INCLUDE_DIRS' Expands to a list of directories that `make' searches for included makefiles (*note Including Other Makefiles: Include.).  File: make.info, Node: Remaking Makefiles, Next: Overriding Makefiles, Prev: Special Variables, Up: Makefiles 3.7 How Makefiles Are Remade ============================ Sometimes makefiles can be remade from other files, such as RCS or SCCS files. If a makefile can be remade from other files, you probably want `make' to get an up-to-date version of the makefile to read in. To this end, after reading in all makefiles, `make' will consider each as a goal target and attempt to update it. If a makefile has a rule which says how to update it (found either in that very makefile or in another one) or if an implicit rule applies to it (*note Using Implicit Rules: Implicit Rules.), it will be updated if necessary. After all makefiles have been checked, if any have actually been changed, `make' starts with a clean slate and reads all the makefiles over again. (It will also attempt to update each of them over again, but normally this will not change them again, since they are already up to date.) If you know that one or more of your makefiles cannot be remade and you want to keep `make' from performing an implicit rule search on them, perhaps for efficiency reasons, you can use any normal method of preventing implicit rule lookup to do so. For example, you can write an explicit rule with the makefile as the target, and an empty command string (*note Using Empty Commands: Empty Commands.). If the makefiles specify a double-colon rule to remake a file with commands but no prerequisites, that file will always be remade (*note Double-Colon::). In the case of makefiles, a makefile that has a double-colon rule with commands but no prerequisites will be remade every time `make' is run, and then again after `make' starts over and reads the makefiles in again. This would cause an infinite loop: `make' would constantly remake the makefile, and never do anything else. So, to avoid this, `make' will *not* attempt to remake makefiles which are specified as targets of a double-colon rule with commands but no prerequisites. If you do not specify any makefiles to be read with `-f' or `--file' options, `make' will try the default makefile names; *note What Name to Give Your Makefile: Makefile Names. Unlike makefiles explicitly requested with `-f' or `--file' options, `make' is not certain that these makefiles should exist. However, if a default makefile does not exist but can be created by running `make' rules, you probably want the rules to be run so that the makefile can be used. Therefore, if none of the default makefiles exists, `make' will try to make each of them in the same order in which they are searched for (*note What Name to Give Your Makefile: Makefile Names.) until it succeeds in making one, or it runs out of names to try. Note that it is not an error if `make' cannot find or make any makefile; a makefile is not always necessary. 
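If, for instance, you want to state explicitly that the default makefile is never to be remade, the technique mentioned earlier in this section (an explicit rule with the makefile as the target and an empty command string) is a one-line sketch:

     Makefile: ;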
When you use the `-t' or `--touch' option (*note Instead of Executing the Commands: Instead of Execution.), you would not want to use an out-of-date makefile to decide which targets to touch. So the `-t' option has no effect on updating makefiles; they are really updated even if `-t' is specified. Likewise, `-q' (or `--question') and `-n' (or `--just-print') do not prevent updating of makefiles, because an out-of-date makefile would result in the wrong output for other targets. Thus, `make -f mfile -n foo' will update `mfile', read it in, and then print the commands to update `foo' and its prerequisites without running them. The commands printed for `foo' will be those specified in the updated contents of `mfile'. However, on occasion you might actually wish to prevent updating of even the makefiles. You can do this by specifying the makefiles as goals in the command line as well as specifying them as makefiles. When the makefile name is specified explicitly as a goal, the options `-t' and so on do apply to them. Thus, `make -f mfile -n mfile foo' would read the makefile `mfile', print the commands needed to update it without actually running them, and then print the commands needed to update `foo' without running them. The commands for `foo' will be those specified by the existing contents of `mfile'.  File: make.info, Node: Overriding Makefiles, Next: Reading Makefiles, Prev: Remaking Makefiles, Up: Makefiles 3.8 Overriding Part of Another Makefile ======================================= Sometimes it is useful to have a makefile that is mostly just like another makefile. You can often use the `include' directive to include one in the other, and add more targets or variable definitions. However, if the two makefiles give different commands for the same target, `make' will not let you just do this. But there is another way. In the containing makefile (the one that wants to include the other), you can use a match-anything pattern rule to say that to remake any target that cannot be made from the information in the containing makefile, `make' should look in another makefile. *Note Pattern Rules::, for more information on pattern rules. For example, if you have a makefile called `Makefile' that says how to make the target `foo' (and other targets), you can write a makefile called `GNUmakefile' that contains: foo: frobnicate > foo %: force @$(MAKE) -f Makefile $@ force: ; If you say `make foo', `make' will find `GNUmakefile', read it, and see that to make `foo', it needs to run the command `frobnicate > foo'. If you say `make bar', `make' will find no way to make `bar' in `GNUmakefile', so it will use the commands from the pattern rule: `make -f Makefile bar'. If `Makefile' provides a rule for updating `bar', `make' will apply the rule. And likewise for any other target that `GNUmakefile' does not say how to make. The way this works is that the pattern rule has a pattern of just `%', so it matches any target whatever. The rule specifies a prerequisite `force', to guarantee that the commands will be run even if the target file already exists. We give `force' target empty commands to prevent `make' from searching for an implicit rule to build it--otherwise it would apply the same match-anything rule to `force' itself and create a prerequisite loop!  File: make.info, Node: Reading Makefiles, Next: Secondary Expansion, Prev: Overriding Makefiles, Up: Makefiles 3.9 How `make' Reads a Makefile =============================== GNU `make' does its work in two distinct phases. 
During the first phase it reads all the makefiles, included makefiles, etc. and internalizes all the variables and their values, implicit and explicit rules, and constructs a dependency graph of all the targets and their prerequisites. During the second phase, `make' uses these internal structures to determine what targets will need to be rebuilt and to invoke the rules necessary to do so. It's important to understand this two-phase approach because it has a direct impact on how variable and function expansion happens; this is often a source of some confusion when writing makefiles. Here we will present a summary of the phases in which expansion happens for different constructs within the makefile. We say that expansion is "immediate" if it happens during the first phase: in this case `make' will expand any variables or functions in that section of a construct as the makefile is parsed. We say that expansion is "deferred" if expansion is not performed immediately. Expansion of deferred construct is not performed until either the construct appears later in an immediate context, or until the second phase. You may not be familiar with some of these constructs yet. You can reference this section as you become familiar with them, in later chapters. Variable Assignment ------------------- Variable definitions are parsed as follows: IMMEDIATE = DEFERRED IMMEDIATE ?= DEFERRED IMMEDIATE := IMMEDIATE IMMEDIATE += DEFERRED or IMMEDIATE define IMMEDIATE DEFERRED endef For the append operator, `+=', the right-hand side is considered immediate if the variable was previously set as a simple variable (`:='), and deferred otherwise. Conditional Statements ---------------------- All instances of conditional syntax are parsed immediately, in their entirety; this includes the `ifdef', `ifeq', `ifndef', and `ifneq' forms. Of course this means that automatic variables cannot be used in conditional statements, as automatic variables are not set until the command script for that rule is invoked. If you need to use automatic variables in a conditional you _must_ use shell conditional syntax, in your command script proper, for these tests, not `make' conditionals. Rule Definition --------------- A rule is always expanded the same way, regardless of the form: IMMEDIATE : IMMEDIATE ; DEFERRED DEFERRED That is, the target and prerequisite sections are expanded immediately, and the commands used to construct the target are always deferred. This general rule is true for explicit rules, pattern rules, suffix rules, static pattern rules, and simple prerequisite definitions.  File: make.info, Node: Secondary Expansion, Prev: Reading Makefiles, Up: Makefiles 3.10 Secondary Expansion ======================== In the previous section we learned that GNU `make' works in two distinct phases: a read-in phase and a target-update phase (*note How `make' Reads a Makefile: Reading Makefiles.). GNU make also has the ability to enable a _second expansion_ of the prerequisites (only) for some or all targets defined in the makefile. In order for this second expansion to occur, the special target `.SECONDEXPANSION' must be defined before the first prerequisite list that makes use of this feature. If that special target is defined then in between the two phases mentioned above, right at the end of the read-in phase, all the prerequisites of the targets defined after the special target are expanded a _second time_. 
In most circumstances this secondary expansion will have no effect, since all variable and function references will have been expanded during the initial parsing of the makefiles. In order to take advantage of the secondary expansion phase of the parser, then, it's necessary to _escape_ the variable or function reference in the makefile. In this case the first expansion merely un-escapes the reference but doesn't expand it, and expansion is left to the secondary expansion phase. For example, consider this makefile: .SECONDEXPANSION: ONEVAR = onefile TWOVAR = twofile myfile: $(ONEVAR) $$(TWOVAR) After the first expansion phase the prerequisites list of the `myfile' target will be `onefile' and `$(TWOVAR)'; the first (unescaped) variable reference to ONEVAR is expanded, while the second (escaped) variable reference is simply unescaped, without being recognized as a variable reference. Now during the secondary expansion the first word is expanded again but since it contains no variable or function references it remains the static value `onefile', while the second word is now a normal reference to the variable TWOVAR, which is expanded to the value `twofile'. The final result is that there are two prerequisites, `onefile' and `twofile'. Obviously, this is not a very interesting case since the same result could more easily have been achieved simply by having both variables appear, unescaped, in the prerequisites list. One difference becomes apparent if the variables are reset; consider this example: .SECONDEXPANSION: AVAR = top onefile: $(AVAR) twofile: $$(AVAR) AVAR = bottom Here the prerequisite of `onefile' will be expanded immediately, and resolve to the value `top', while the prerequisite of `twofile' will not be full expanded until the secondary expansion and yield a value of `bottom'. This is marginally more exciting, but the true power of this feature only becomes apparent when you discover that secondary expansions always take place within the scope of the automatic variables for that target. This means that you can use variables such as `$@', `$*', etc. during the second expansion and they will have their expected values, just as in the command script. All you have to do is defer the expansion by escaping the `$'. Also, secondary expansion occurs for both explicit and implicit (pattern) rules. Knowing this, the possible uses for this feature increase dramatically. For example: .SECONDEXPANSION: main_OBJS := main.o try.o test.o lib_OBJS := lib.o api.o main lib: $$($$@_OBJS) Here, after the initial expansion the prerequisites of both the `main' and `lib' targets will be `$($@_OBJS)'. During the secondary expansion, the `$@' variable is set to the name of the target and so the expansion for the `main' target will yield `$(main_OBJS)', or `main.o try.o test.o', while the secondary expansion for the `lib' target will yield `$(lib_OBJS)', or `lib.o api.o'. You can also mix functions here, as long as they are properly escaped: main_SRCS := main.c try.c test.c lib_SRCS := lib.c api.c .SECONDEXPANSION: main lib: $$(patsubst %.c,%.o,$$($$@_SRCS)) This version allows users to specify source files rather than object files, but gives the same resulting prerequisites list as the previous example. Evaluation of automatic variables during the secondary expansion phase, especially of the target name variable `$$@', behaves similarly to evaluation within command scripts. 
However, there are some subtle differences and "corner cases" which come into play for the different types of rule definitions that `make' understands. The subtleties of using the different automatic variables are described below.

Secondary Expansion of Explicit Rules
-------------------------------------

During the secondary expansion of explicit rules, `$$@' and `$$%' evaluate, respectively, to the file name of the target and, when the target is an archive member, the target member name. The `$$<' variable evaluates to the first prerequisite in the first rule for this target. `$$^' and `$$+' evaluate to the list of all prerequisites of rules _that have already appeared_ for the same target (`$$+' with repetitions and `$$^' without). The following example will help illustrate these behaviors:

     .SECONDEXPANSION:
     foo: foo.1 bar.1 $$< $$^ $$+    # line #1
     foo: foo.2 bar.2 $$< $$^ $$+    # line #2
     foo: foo.3 bar.3 $$< $$^ $$+    # line #3

In the first prerequisite list, all three variables (`$$<', `$$^', and `$$+') expand to the empty string. In the second, they will have values `foo.1', `foo.1 bar.1', and `foo.1 bar.1' respectively. In the third they will have values `foo.1', `foo.1 bar.1 foo.2 bar.2', and `foo.1 bar.1 foo.2 bar.2' respectively. Rules undergo secondary expansion in makefile order, except that the rule with the command script is always evaluated last. The variables `$$?' and `$$*' are not available and expand to the empty string.

Secondary Expansion of Static Pattern Rules
-------------------------------------------

Rules for secondary expansion of static pattern rules are identical to those for explicit rules, above, with one exception: for static pattern rules the `$$*' variable is set to the pattern stem. As with explicit rules, `$$?' is not available and expands to the empty string.

Secondary Expansion of Implicit Rules
-------------------------------------

As `make' searches for an implicit rule, it substitutes the stem and then performs secondary expansion for every rule with a matching target pattern. The value of the automatic variables is derived in the same fashion as for static pattern rules. As an example:

     .SECONDEXPANSION:
     foo: bar
     foo foz: fo%: bo%
     %oo: $$< $$^ $$+ $$*

When the implicit rule is tried for target `foo', `$$<' expands to `bar', `$$^' expands to `bar boo', `$$+' also expands to `bar boo', and `$$*' expands to `f'. Note that the directory prefix (D), as described in *Note Implicit Rule Search Algorithm: Implicit Rule Search, is appended (after expansion) to all the patterns in the prerequisites list. As an example:

     .SECONDEXPANSION:
     /tmp/foo.o: %.o: $$(addsuffix /%.c,foo bar) foo.h

The prerequisite list after the secondary expansion and directory prefix reconstruction will be `/tmp/foo/foo.c /tmp/bar/foo.c foo.h'. If you are not interested in this reconstruction, you can use `$$*' instead of `%' in the prerequisites list.

File: make.info, Node: Rules, Next: Commands, Prev: Makefiles, Up: Top

4 Writing Rules
***************

A "rule" appears in the makefile and says when and how to remake certain files, called the rule's "targets" (most often only one per rule). It lists the other files that are the "prerequisites" of the target, and "commands" to use to create or update the target.

The order of rules is not significant, except for determining the "default goal": the target for `make' to consider, if you do not otherwise specify one. The default goal is the target of the first rule in the first makefile.
If the first rule has multiple targets, only the first target is taken as the default. There are two exceptions: a target starting with a period is not a default unless it contains one or more slashes, `/', as well; and, a target that defines a pattern rule has no effect on the default goal. (*Note Defining and Redefining Pattern Rules: Pattern Rules.) Therefore, we usually write the makefile so that the first rule is the one for compiling the entire program or all the programs described by the makefile (often with a target called `all'). *Note Arguments to Specify the Goals: Goals. * Menu: * Rule Example:: An example explained. * Rule Syntax:: General syntax explained. * Prerequisite Types:: There are two types of prerequisites. * Wildcards:: Using wildcard characters such as `*'. * Directory Search:: Searching other directories for source files. * Phony Targets:: Using a target that is not a real file's name. * Force Targets:: You can use a target without commands or prerequisites to mark other targets as phony. * Empty Targets:: When only the date matters and the files are empty. * Special Targets:: Targets with special built-in meanings. * Multiple Targets:: When to make use of several targets in a rule. * Multiple Rules:: How to use several rules with the same target. * Static Pattern:: Static pattern rules apply to multiple targets and can vary the prerequisites according to the target name. * Double-Colon:: How to use a special kind of rule to allow several independent rules for one target. * Automatic Prerequisites:: How to automatically generate rules giving prerequisites from source files themselves.  File: make.info, Node: Rule Example, Next: Rule Syntax, Prev: Rules, Up: Rules 4.1 Rule Example ================ Here is an example of a rule: foo.o : foo.c defs.h # module for twiddling the frobs cc -c -g foo.c Its target is `foo.o' and its prerequisites are `foo.c' and `defs.h'. It has one command, which is `cc -c -g foo.c'. The command line starts with a tab to identify it as a command. This rule says two things: * How to decide whether `foo.o' is out of date: it is out of date if it does not exist, or if either `foo.c' or `defs.h' is more recent than it. * How to update the file `foo.o': by running `cc' as stated. The command does not explicitly mention `defs.h', but we presume that `foo.c' includes it, and that that is why `defs.h' was added to the prerequisites.  File: make.info, Node: Rule Syntax, Next: Prerequisite Types, Prev: Rule Example, Up: Rules 4.2 Rule Syntax =============== In general, a rule looks like this: TARGETS : PREREQUISITES COMMAND ... or like this: TARGETS : PREREQUISITES ; COMMAND COMMAND ... The TARGETS are file names, separated by spaces. Wildcard characters may be used (*note Using Wildcard Characters in File Names: Wildcards.) and a name of the form `A(M)' represents member M in archive file A (*note Archive Members as Targets: Archive Members.). Usually there is only one target per rule, but occasionally there is a reason to have more (*note Multiple Targets in a Rule: Multiple Targets.). The COMMAND lines start with a tab character. The first command may appear on the line after the prerequisites, with a tab character, or may appear on the same line, with a semicolon. Either way, the effect is the same. There are other differences in the syntax of command lines. *Note Writing the Commands in Rules: Commands. 
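For instance (a sketch with illustrative names), the following two rules have the same structure; the first puts its command on a separate line beginning with a tab character, the second puts it on the rule line after a semicolon:

     stamp1 : source.txt
             date > stamp1

     stamp2 : source.txt ; date > stamp2

(As always, the separate command line must actually begin with a tab.)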
Because dollar signs are used to start `make' variable references, if you really want a dollar sign in a target or prerequisite you must write two of them, `$$' (*note How to Use Variables: Using Variables.). If you have enabled secondary expansion (*note Secondary Expansion::) and you want a literal dollar sign in the prerequisites list, you must actually write _four_ dollar signs (`$$$$').

You may split a long line by inserting a backslash followed by a newline, but this is not required, as `make' places no limit on the length of a line in a makefile.

A rule tells `make' two things: when the targets are out of date, and how to update them when necessary.

The criterion for being out of date is specified in terms of the PREREQUISITES, which consist of file names separated by spaces. (Wildcards and archive members (*note Archives::) are allowed here too.) A target is out of date if it does not exist or if it is older than any of the prerequisites (by comparison of last-modification times). The idea is that the contents of the target file are computed based on information in the prerequisites, so if any of the prerequisites changes, the contents of the existing target file are no longer necessarily valid.

How to update is specified by COMMANDS. These are lines to be executed by the shell (normally `sh'), but with some extra features (*note Writing the Commands in Rules: Commands.).

File: make.info, Node: Prerequisite Types, Next: Wildcards, Prev: Rule Syntax, Up: Rules

4.3 Types of Prerequisites
==========================

There are actually two different types of prerequisites understood by GNU `make': normal prerequisites such as described in the previous section, and "order-only" prerequisites.

A normal prerequisite makes two statements: first, it imposes an order of execution of build commands: any commands necessary to build any of a target's prerequisites will be fully executed before any commands necessary to build the target. Second, it imposes a dependency relationship: if any prerequisite is newer than the target, then the target is considered out-of-date and must be rebuilt.

Normally, this is exactly what you want: if a target's prerequisite is updated, then the target should also be updated. Occasionally, however, you have a situation where you want to impose a specific ordering on the rules to be invoked _without_ forcing the target to be updated if one of those rules is executed. In that case, you want to define "order-only" prerequisites.

Order-only prerequisites can be specified by placing a pipe symbol (`|') in the prerequisites list: any prerequisites to the left of the pipe symbol are normal; any prerequisites to the right are order-only:

     TARGETS : NORMAL-PREREQUISITES | ORDER-ONLY-PREREQUISITES

The normal prerequisites section may of course be empty. Also, you may still declare multiple lines of prerequisites for the same target: they are appended appropriately. Note that if you declare the same file to be both a normal and an order-only prerequisite, the normal prerequisite takes precedence (since they are a strict superset of the behavior of an order-only prerequisite).

File: make.info, Node: Wildcards, Next: Directory Search, Prev: Prerequisite Types, Up: Rules

4.4 Using Wildcard Characters in File Names
===========================================

A single file name can specify many files using "wildcard characters". The wildcard characters in `make' are `*', `?' and `[...]', the same as in the Bourne shell.
For example, `*.c' specifies a list of all the files (in the working directory) whose names end in `.c'. The character `~' at the beginning of a file name also has special significance. If alone, or followed by a slash, it represents your home directory. For example `~/bin' expands to `/home/you/bin'. If the `~' is followed by a word, the string represents the home directory of the user named by that word. For example `~john/bin' expands to `/home/john/bin'. On systems which don't have a home directory for each user (such as MS-DOS or MS-Windows), this functionality can be simulated by setting the environment variable HOME. Wildcard expansion is performed by `make' automatically in targets and in prerequisites. In commands the shell is responsible for wildcard expansion. In other contexts, wildcard expansion happens only if you request it explicitly with the `wildcard' function. The special significance of a wildcard character can be turned off by preceding it with a backslash. Thus, `foo\*bar' would refer to a specific file whose name consists of `foo', an asterisk, and `bar'. * Menu: * Wildcard Examples:: Several examples * Wildcard Pitfall:: Problems to avoid. * Wildcard Function:: How to cause wildcard expansion where it does not normally take place.  File: make.info, Node: Wildcard Examples, Next: Wildcard Pitfall, Prev: Wildcards, Up: Wildcards 4.4.1 Wildcard Examples ----------------------- Wildcards can be used in the commands of a rule, where they are expanded by the shell. For example, here is a rule to delete all the object files: clean: rm -f *.o Wildcards are also useful in the prerequisites of a rule. With the following rule in the makefile, `make print' will print all the `.c' files that have changed since the last time you printed them: print: *.c lpr -p $? touch print This rule uses `print' as an empty target file; see *Note Empty Target Files to Record Events: Empty Targets. (The automatic variable `$?' is used to print only those files that have changed; see *Note Automatic Variables::.) Wildcard expansion does not happen when you define a variable. Thus, if you write this: objects = *.o then the value of the variable `objects' is the actual string `*.o'. However, if you use the value of `objects' in a target, prerequisite or command, wildcard expansion will take place at that time. To set `objects' to the expansion, instead use: objects := $(wildcard *.o) *Note Wildcard Function::.  File: make.info, Node: Wildcard Pitfall, Next: Wildcard Function, Prev: Wildcard Examples, Up: Wildcards 4.4.2 Pitfalls of Using Wildcards --------------------------------- Now here is an example of a naive way of using wildcard expansion, that does not do what you would intend. Suppose you would like to say that the executable file `foo' is made from all the object files in the directory, and you write this: objects = *.o foo : $(objects) cc -o foo $(CFLAGS) $(objects) The value of `objects' is the actual string `*.o'. Wildcard expansion happens in the rule for `foo', so that each _existing_ `.o' file becomes a prerequisite of `foo' and will be recompiled if necessary. But what if you delete all the `.o' files? When a wildcard matches no files, it is left as it is, so then `foo' will depend on the oddly-named file `*.o'. Since no such file is likely to exist, `make' will give you an error saying it cannot figure out how to make `*.o'. This is not what you want! 
Actually it is possible to obtain the desired result with wildcard expansion, but you need more sophisticated techniques, including the `wildcard' function and string substitution. *Note The Function `wildcard': Wildcard Function. Microsoft operating systems (MS-DOS and MS-Windows) use backslashes to separate directories in pathnames, like so: c:\foo\bar\baz.c This is equivalent to the Unix-style `c:/foo/bar/baz.c' (the `c:' part is the so-called drive letter). When `make' runs on these systems, it supports backslashes as well as the Unix-style forward slashes in pathnames. However, this support does _not_ include the wildcard expansion, where backslash is a quote character. Therefore, you _must_ use Unix-style slashes in these cases.  File: make.info, Node: Wildcard Function, Prev: Wildcard Pitfall, Up: Wildcards 4.4.3 The Function `wildcard' ----------------------------- Wildcard expansion happens automatically in rules. But wildcard expansion does not normally take place when a variable is set, or inside the arguments of a function. If you want to do wildcard expansion in such places, you need to use the `wildcard' function, like this: $(wildcard PATTERN...) This string, used anywhere in a makefile, is replaced by a space-separated list of names of existing files that match one of the given file name patterns. If no existing file name matches a pattern, then that pattern is omitted from the output of the `wildcard' function. Note that this is different from how unmatched wildcards behave in rules, where they are used verbatim rather than ignored (*note Wildcard Pitfall::). One use of the `wildcard' function is to get a list of all the C source files in a directory, like this: $(wildcard *.c) We can change the list of C source files into a list of object files by replacing the `.c' suffix with `.o' in the result, like this: $(patsubst %.c,%.o,$(wildcard *.c)) (Here we have used another function, `patsubst'. *Note Functions for String Substitution and Analysis: Text Functions.) Thus, a makefile to compile all C source files in the directory and then link them together could be written as follows: objects := $(patsubst %.c,%.o,$(wildcard *.c)) foo : $(objects) cc -o foo $(objects) (This takes advantage of the implicit rule for compiling C programs, so there is no need to write explicit rules for compiling the files. *Note The Two Flavors of Variables: Flavors, for an explanation of `:=', which is a variant of `='.)  File: make.info, Node: Directory Search, Next: Phony Targets, Prev: Wildcards, Up: Rules 4.5 Searching Directories for Prerequisites =========================================== For large systems, it is often desirable to put sources in a separate directory from the binaries. The "directory search" features of `make' facilitate this by searching several directories automatically to find a prerequisite. When you redistribute the files among directories, you do not need to change the individual rules, just the search paths. * Menu: * General Search:: Specifying a search path that applies to every prerequisite. * Selective Search:: Specifying a search path for a specified class of names. * Search Algorithm:: When and how search paths are applied. * Commands/Search:: How to write shell commands that work together with search paths. * Implicit/Search:: How search paths affect implicit rules. * Libraries/Search:: Directory search for link libraries.  
File: make.info, Node: General Search, Next: Selective Search, Prev: Directory Search, Up: Directory Search 4.5.1 `VPATH': Search Path for All Prerequisites ------------------------------------------------ The value of the `make' variable `VPATH' specifies a list of directories that `make' should search. Most often, the directories are expected to contain prerequisite files that are not in the current directory; however, `make' uses `VPATH' as a search list for both prerequisites and targets of rules. Thus, if a file that is listed as a target or prerequisite does not exist in the current directory, `make' searches the directories listed in `VPATH' for a file with that name. If a file is found in one of them, that file may become the prerequisite (see below). Rules may then specify the names of files in the prerequisite list as if they all existed in the current directory. *Note Writing Shell Commands with Directory Search: Commands/Search. In the `VPATH' variable, directory names are separated by colons or blanks. The order in which directories are listed is the order followed by `make' in its search. (On MS-DOS and MS-Windows, semi-colons are used as separators of directory names in `VPATH', since the colon can be used in the pathname itself, after the drive letter.) For example, VPATH = src:../headers specifies a path containing two directories, `src' and `../headers', which `make' searches in that order. With this value of `VPATH', the following rule, foo.o : foo.c is interpreted as if it were written like this: foo.o : src/foo.c assuming the file `foo.c' does not exist in the current directory but is found in the directory `src'.  File: make.info, Node: Selective Search, Next: Search Algorithm, Prev: General Search, Up: Directory Search 4.5.2 The `vpath' Directive --------------------------- Similar to the `VPATH' variable, but more selective, is the `vpath' directive (note lower case), which allows you to specify a search path for a particular class of file names: those that match a particular pattern. Thus you can supply certain search directories for one class of file names and other directories (or none) for other file names. There are three forms of the `vpath' directive: `vpath PATTERN DIRECTORIES' Specify the search path DIRECTORIES for file names that match PATTERN. The search path, DIRECTORIES, is a list of directories to be searched, separated by colons (semi-colons on MS-DOS and MS-Windows) or blanks, just like the search path used in the `VPATH' variable. `vpath PATTERN' Clear out the search path associated with PATTERN. `vpath' Clear all search paths previously specified with `vpath' directives. A `vpath' pattern is a string containing a `%' character. The string must match the file name of a prerequisite that is being searched for, the `%' character matching any sequence of zero or more characters (as in pattern rules; *note Defining and Redefining Pattern Rules: Pattern Rules.). For example, `%.h' matches files that end in `.h'. (If there is no `%', the pattern must match the prerequisite exactly, which is not useful very often.) `%' characters in a `vpath' directive's pattern can be quoted with preceding backslashes (`\'). Backslashes that would otherwise quote `%' characters can be quoted with more backslashes. Backslashes that quote `%' characters or other backslashes are removed from the pattern before it is compared to file names. Backslashes that are not in danger of quoting `%' characters go unmolested. 
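As a contrived illustration of this quoting (the file and directory names here are hypothetical), the directive

     vpath 100\%.dat data

specifies that, after the backslash is removed, the pattern matches only the literal file name `100%.dat' (the `%' is not a wildcard here), which will be searched for in the directory `data'.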
When a prerequisite fails to exist in the current directory, if the PATTERN in a `vpath' directive matches the name of the prerequisite file, then the DIRECTORIES in that directive are searched just like (and before) the directories in the `VPATH' variable. For example, vpath %.h ../headers tells `make' to look for any prerequisite whose name ends in `.h' in the directory `../headers' if the file is not found in the current directory. If several `vpath' patterns match the prerequisite file's name, then `make' processes each matching `vpath' directive one by one, searching all the directories mentioned in each directive. `make' handles multiple `vpath' directives in the order in which they appear in the makefile; multiple directives with the same pattern are independent of each other. Thus, vpath %.c foo vpath % blish vpath %.c bar will look for a file ending in `.c' in `foo', then `blish', then `bar', while vpath %.c foo:bar vpath % blish will look for a file ending in `.c' in `foo', then `bar', then `blish'.  File: make.info, Node: Search Algorithm, Next: Commands/Search, Prev: Selective Search, Up: Directory Search 4.5.3 How Directory Searches are Performed ------------------------------------------ When a prerequisite is found through directory search, regardless of type (general or selective), the pathname located may not be the one that `make' actually provides you in the prerequisite list. Sometimes the path discovered through directory search is thrown away. The algorithm `make' uses to decide whether to keep or abandon a path found via directory search is as follows: 1. If a target file does not exist at the path specified in the makefile, directory search is performed. 2. If the directory search is successful, that path is kept and this file is tentatively stored as the target. 3. All prerequisites of this target are examined using this same method. 4. After processing the prerequisites, the target may or may not need to be rebuilt: a. If the target does _not_ need to be rebuilt, the path to the file found during directory search is used for any prerequisite lists which contain this target. In short, if `make' doesn't need to rebuild the target then you use the path found via directory search. b. If the target _does_ need to be rebuilt (is out-of-date), the pathname found during directory search is _thrown away_, and the target is rebuilt using the file name specified in the makefile. In short, if `make' must rebuild, then the target is rebuilt locally, not in the directory found via directory search. This algorithm may seem complex, but in practice it is quite often exactly what you want. Other versions of `make' use a simpler algorithm: if the file does not exist, and it is found via directory search, then that pathname is always used whether or not the target needs to be built. Thus, if the target is rebuilt it is created at the pathname discovered during directory search. If, in fact, this is the behavior you want for some or all of your directories, you can use the `GPATH' variable to indicate this to `make'. `GPATH' has the same syntax and format as `VPATH' (that is, a space- or colon-delimited list of pathnames). If an out-of-date target is found by directory search in a directory that also appears in `GPATH', then that pathname is not thrown away. The target is rebuilt using the expanded path.  
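For example, in a hypothetical layout where object files are kept next to their sources in a `src' subdirectory, you might list that directory in both variables:

     VPATH = src
     GPATH = src

     %.o : %.c
             cc -c $(CFLAGS) $< -o $@

With this setup, if `src/foo.o' exists but is older than `src/foo.c', it is rebuilt as `src/foo.o'; without the `GPATH' setting, the pathname found by directory search would be thrown away and `foo.o' would be rebuilt in the current directory instead.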
File: make.info, Node: Commands/Search, Next: Implicit/Search, Prev: Search Algorithm, Up: Directory Search 4.5.4 Writing Shell Commands with Directory Search -------------------------------------------------- When a prerequisite is found in another directory through directory search, this cannot change the commands of the rule; they will execute as written. Therefore, you must write the commands with care so that they will look for the prerequisite in the directory where `make' finds it. This is done with the "automatic variables" such as `$^' (*note Automatic Variables::). For instance, the value of `$^' is a list of all the prerequisites of the rule, including the names of the directories in which they were found, and the value of `$@' is the target. Thus: foo.o : foo.c cc -c $(CFLAGS) $^ -o $@ (The variable `CFLAGS' exists so you can specify flags for C compilation by implicit rules; we use it here for consistency so it will affect all C compilations uniformly; *note Variables Used by Implicit Rules: Implicit Variables.) Often the prerequisites include header files as well, which you do not want to mention in the commands. The automatic variable `$<' is just the first prerequisite: VPATH = src:../headers foo.o : foo.c defs.h hack.h cc -c $(CFLAGS) $< -o $@  File: make.info, Node: Implicit/Search, Next: Libraries/Search, Prev: Commands/Search, Up: Directory Search 4.5.5 Directory Search and Implicit Rules ----------------------------------------- The search through the directories specified in `VPATH' or with `vpath' also happens during consideration of implicit rules (*note Using Implicit Rules: Implicit Rules.). For example, when a file `foo.o' has no explicit rule, `make' considers implicit rules, such as the built-in rule to compile `foo.c' if that file exists. If such a file is lacking in the current directory, the appropriate directories are searched for it. If `foo.c' exists (or is mentioned in the makefile) in any of the directories, the implicit rule for C compilation is applied. The commands of implicit rules normally use automatic variables as a matter of necessity; consequently they will use the file names found by directory search with no extra effort.  File: make.info, Node: Libraries/Search, Prev: Implicit/Search, Up: Directory Search 4.5.6 Directory Search for Link Libraries ----------------------------------------- Directory search applies in a special way to libraries used with the linker. This special feature comes into play when you write a prerequisite whose name is of the form `-lNAME'. (You can tell something strange is going on here because the prerequisite is normally the name of a file, and the _file name_ of a library generally looks like `libNAME.a', not like `-lNAME'.) When a prerequisite's name has the form `-lNAME', `make' handles it specially by searching for the file `libNAME.so' in the current directory, in directories specified by matching `vpath' search paths and the `VPATH' search path, and then in the directories `/lib', `/usr/lib', and `PREFIX/lib' (normally `/usr/local/lib', but MS-DOS/MS-Windows versions of `make' behave as if PREFIX is defined to be the root of the DJGPP installation tree). If that file is not found, then the file `libNAME.a' is searched for, in the same directories as above. 
For example, if there is a `/usr/lib/libcurses.a' library on your system (and no `/usr/lib/libcurses.so' file), then foo : foo.c -lcurses cc $^ -o $@ would cause the command `cc foo.c /usr/lib/libcurses.a -o foo' to be executed when `foo' is older than `foo.c' or than `/usr/lib/libcurses.a'. Although the default set of files to be searched for is `libNAME.so' and `libNAME.a', this is customizable via the `.LIBPATTERNS' variable. Each word in the value of this variable is a pattern string. When a prerequisite like `-lNAME' is seen, `make' will replace the percent in each pattern in the list with NAME and perform the above directory searches using that library filename. If no library is found, the next word in the list will be used. The default value for `.LIBPATTERNS' is `lib%.so lib%.a', which provides the default behavior described above. You can turn off link library expansion completely by setting this variable to an empty value.  File: make.info, Node: Phony Targets, Next: Force Targets, Prev: Directory Search, Up: Rules 4.6 Phony Targets ================= A phony target is one that is not really the name of a file. It is just a name for some commands to be executed when you make an explicit request. There are two reasons to use a phony target: to avoid a conflict with a file of the same name, and to improve performance. If you write a rule whose commands will not create the target file, the commands will be executed every time the target comes up for remaking. Here is an example: clean: rm *.o temp Because the `rm' command does not create a file named `clean', probably no such file will ever exist. Therefore, the `rm' command will be executed every time you say `make clean'. The phony target will cease to work if anything ever does create a file named `clean' in this directory. Since it has no prerequisites, the file `clean' would inevitably be considered up to date, and its commands would not be executed. To avoid this problem, you can explicitly declare the target to be phony, using the special target `.PHONY' (*note Special Built-in Target Names: Special Targets.) as follows: .PHONY : clean Once this is done, `make clean' will run the commands regardless of whether there is a file named `clean'. Since it knows that phony targets do not name actual files that could be remade from other files, `make' skips the implicit rule search for phony targets (*note Implicit Rules::). This is why declaring a target phony is good for performance, even if you are not worried about the actual file existing. Thus, you first write the line that states that `clean' is a phony target, then you write the rule, like this: .PHONY: clean clean: rm *.o temp Another example of the usefulness of phony targets is in conjunction with recursive invocations of `make' (for more information, see *Note Recursive Use of `make': Recursion.). In this case the makefile will often contain a variable which lists a number of subdirectories to be built. One way to handle this is with one rule whose command is a shell loop over the subdirectories, like this: SUBDIRS = foo bar baz subdirs: for dir in $(SUBDIRS); do \ $(MAKE) -C $$dir; \ done There are a few problems with this method, however. First, any error detected in a submake is not noted by this rule, so it will continue to build the rest of the directories even when one fails. This can be overcome by adding shell commands to note the error and exit, but then it will do so even if `make' is invoked with the `-k' option, which is unfortunate. 
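A sketch of that error-propagating variant, using the same hypothetical `SUBDIRS' value, might look like this (as just noted, it still stops at the first failing subdirectory even under `-k'):

     SUBDIRS = foo bar baz

     subdirs:
             for dir in $(SUBDIRS); do \
               $(MAKE) -C $$dir || exit 1; \
             done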
Second, and perhaps more importantly, you cannot take advantage of `make''s ability to build targets in parallel (*note Parallel Execution: Parallel.), since there is only one rule. By declaring the subdirectories as phony targets (you must do this as the subdirectory obviously always exists; otherwise it won't be built) you can remove these problems: SUBDIRS = foo bar baz .PHONY: subdirs $(SUBDIRS) subdirs: $(SUBDIRS) $(SUBDIRS): $(MAKE) -C $@ foo: baz Here we've also declared that the `foo' subdirectory cannot be built until after the `baz' subdirectory is complete; this kind of relationship declaration is particularly important when attempting parallel builds. A phony target should not be a prerequisite of a real target file; if it is, its commands are run every time `make' goes to update that file. As long as a phony target is never a prerequisite of a real target, the phony target commands will be executed only when the phony target is a specified goal (*note Arguments to Specify the Goals: Goals.). Phony targets can have prerequisites. When one directory contains multiple programs, it is most convenient to describe all of the programs in one makefile `./Makefile'. Since the target remade by default will be the first one in the makefile, it is common to make this a phony target named `all' and give it, as prerequisites, all the individual programs. For example: all : prog1 prog2 prog3 .PHONY : all prog1 : prog1.o utils.o cc -o prog1 prog1.o utils.o prog2 : prog2.o cc -o prog2 prog2.o prog3 : prog3.o sort.o utils.o cc -o prog3 prog3.o sort.o utils.o Now you can say just `make' to remake all three programs, or specify as arguments the ones to remake (as in `make prog1 prog3'). Phoniness is not inherited: the prerequisites of a phony target are not themselves phony, unless explicitly declared to be so. When one phony target is a prerequisite of another, it serves as a subroutine of the other. For example, here `make cleanall' will delete the object files, the difference files, and the file `program': .PHONY: cleanall cleanobj cleandiff cleanall : cleanobj cleandiff rm program cleanobj : rm *.o cleandiff : rm *.diff  File: make.info, Node: Force Targets, Next: Empty Targets, Prev: Phony Targets, Up: Rules 4.7 Rules without Commands or Prerequisites =========================================== If a rule has no prerequisites or commands, and the target of the rule is a nonexistent file, then `make' imagines this target to have been updated whenever its rule is run. This implies that all targets depending on this one will always have their commands run. An example will illustrate this: clean: FORCE rm $(objects) FORCE: Here the target `FORCE' satisfies the special conditions, so the target `clean' that depends on it is forced to run its commands. There is nothing special about the name `FORCE', but that is one name commonly used this way. As you can see, using `FORCE' this way has the same results as using `.PHONY: clean'. Using `.PHONY' is more explicit and more efficient. However, other versions of `make' do not support `.PHONY'; thus `FORCE' appears in many makefiles. *Note Phony Targets::.  File: make.info, Node: Empty Targets, Next: Special Targets, Prev: Force Targets, Up: Rules 4.8 Empty Target Files to Record Events ======================================= The "empty target" is a variant of the phony target; it is used to hold commands for an action that you request explicitly from time to time. 
Unlike a phony target, this target file can really exist; but the file's contents do not matter, and usually are empty. The purpose of the empty target file is to record, with its last-modification time, when the rule's commands were last executed. It does so because one of the commands is a `touch' command to update the target file. The empty target file should have some prerequisites (otherwise it doesn't make sense). When you ask to remake the empty target, the commands are executed if any prerequisite is more recent than the target; in other words, if a prerequisite has changed since the last time you remade the target. Here is an example: print: foo.c bar.c lpr -p $? touch print With this rule, `make print' will execute the `lpr' command if either source file has changed since the last `make print'. The automatic variable `$?' is used to print only those files that have changed (*note Automatic Variables::).  File: make.info, Node: Special Targets, Next: Multiple Targets, Prev: Empty Targets, Up: Rules 4.9 Special Built-in Target Names ================================= Certain names have special meanings if they appear as targets. `.PHONY' The prerequisites of the special target `.PHONY' are considered to be phony targets. When it is time to consider such a target, `make' will run its commands unconditionally, regardless of whether a file with that name exists or what its last-modification time is. *Note Phony Targets: Phony Targets. `.SUFFIXES' The prerequisites of the special target `.SUFFIXES' are the list of suffixes to be used in checking for suffix rules. *Note Old-Fashioned Suffix Rules: Suffix Rules. `.DEFAULT' The commands specified for `.DEFAULT' are used for any target for which no rules are found (either explicit rules or implicit rules). *Note Last Resort::. If `.DEFAULT' commands are specified, every file mentioned as a prerequisite, but not as a target in a rule, will have these commands executed on its behalf. *Note Implicit Rule Search Algorithm: Implicit Rule Search. `.PRECIOUS' The targets which `.PRECIOUS' depends on are given the following special treatment: if `make' is killed or interrupted during the execution of their commands, the target is not deleted. *Note Interrupting or Killing `make': Interrupts. Also, if the target is an intermediate file, it will not be deleted after it is no longer needed, as is normally done. *Note Chains of Implicit Rules: Chained Rules. In this latter respect it overlaps with the `.SECONDARY' special target. You can also list the target pattern of an implicit rule (such as `%.o') as a prerequisite file of the special target `.PRECIOUS' to preserve intermediate files created by rules whose target patterns match that file's name. `.INTERMEDIATE' The targets which `.INTERMEDIATE' depends on are treated as intermediate files. *Note Chains of Implicit Rules: Chained Rules. `.INTERMEDIATE' with no prerequisites has no effect. `.SECONDARY' The targets which `.SECONDARY' depends on are treated as intermediate files, except that they are never automatically deleted. *Note Chains of Implicit Rules: Chained Rules. `.SECONDARY' with no prerequisites causes all targets to be treated as secondary (i.e., no target is removed because it is considered intermediate). `.SECONDEXPANSION' If `.SECONDEXPANSION' is mentioned as a target anywhere in the makefile, then all prerequisite lists defined _after_ it appears will be expanded a second time after all makefiles have been read in. *Note Secondary Expansion: Secondary Expansion. 
`.DELETE_ON_ERROR' If `.DELETE_ON_ERROR' is mentioned as a target anywhere in the makefile, then `make' will delete the target of a rule if it has changed and its commands exit with a nonzero exit status, just as it does when it receives a signal. *Note Errors in Commands: Errors. `.IGNORE' If you specify prerequisites for `.IGNORE', then `make' will ignore errors in execution of the commands run for those particular files. The commands for `.IGNORE' are not meaningful. If mentioned as a target with no prerequisites, `.IGNORE' says to ignore errors in execution of commands for all files. This usage of `.IGNORE' is supported only for historical compatibility. Since this affects every command in the makefile, it is not very useful; we recommend you use the more selective ways to ignore errors in specific commands. *Note Errors in Commands: Errors. `.LOW_RESOLUTION_TIME' If you specify prerequisites for `.LOW_RESOLUTION_TIME', `make' assumes that these files are created by commands that generate low resolution time stamps. The commands for `.LOW_RESOLUTION_TIME' are not meaningful. The high resolution file time stamps of many modern hosts lessen the chance of `make' incorrectly concluding that a file is up to date. Unfortunately, these hosts provide no way to set a high resolution file time stamp, so commands like `cp -p' that explicitly set a file's time stamp must discard its subsecond part. If a file is created by such a command, you should list it as a prerequisite of `.LOW_RESOLUTION_TIME' so that `make' does not mistakenly conclude that the file is out of date. For example: .LOW_RESOLUTION_TIME: dst dst: src cp -p src dst Since `cp -p' discards the subsecond part of `src''s time stamp, `dst' is typically slightly older than `src' even when it is up to date. The `.LOW_RESOLUTION_TIME' line causes `make' to consider `dst' to be up to date if its time stamp is at the start of the same second that `src''s time stamp is in. Due to a limitation of the archive format, archive member time stamps are always low resolution. You need not list archive members as prerequisites of `.LOW_RESOLUTION_TIME', as `make' does this automatically. `.SILENT' If you specify prerequisites for `.SILENT', then `make' will not print the commands to remake those particular files before executing them. The commands for `.SILENT' are not meaningful. If mentioned as a target with no prerequisites, `.SILENT' says not to print any commands before executing them. This usage of `.SILENT' is supported only for historical compatibility. We recommend you use the more selective ways to silence specific commands. *Note Command Echoing: Echoing. If you want to silence all commands for a particular run of `make', use the `-s' or `--silent' option (*note Options Summary::). `.EXPORT_ALL_VARIABLES' Simply by being mentioned as a target, this tells `make' to export all variables to child processes by default. *Note Communicating Variables to a Sub-`make': Variables/Recursion. `.NOTPARALLEL' If `.NOTPARALLEL' is mentioned as a target, then this invocation of `make' will be run serially, even if the `-j' option is given. Any recursively invoked `make' command will still be run in parallel (unless its makefile contains this target). Any prerequisites on this target are ignored.
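As a brief illustration of `.DELETE_ON_ERROR' described above (the command and file names here are hypothetical), a makefile might contain:

     .DELETE_ON_ERROR:

     report.txt : data.txt
             generate-report data.txt > report.txt

If `generate-report' exits with a nonzero status after the shell has already created (and therefore changed) `report.txt', `make' deletes the partially written file, so a later run will not mistake it for an up-to-date target.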
Any defined implicit rule suffix also counts as a special target if it appears as a target, and so does the concatenation of two suffixes, such as `.c.o'. These targets are suffix rules, an obsolete way of defining implicit rules (but a way still widely used). In principle, any target name could be special in this way if you break it in two and add both pieces to the suffix list. In practice, suffixes normally begin with `.', so these special target names also begin with `.'. *Note Old-Fashioned Suffix Rules: Suffix Rules.  File: make.info, Node: Multiple Targets, Next: Multiple Rules, Prev: Special Targets, Up: Rules 4.10 Multiple Targets in a Rule =============================== A rule with multiple targets is equivalent to writing many rules, each with one target, and all identical aside from that. The same commands apply to all the targets, but their effects may vary because you can substitute the actual target name into the command using `$@'. The rule contributes the same prerequisites to all the targets also. This is useful in two cases. * You want just prerequisites, no commands. For example: kbd.o command.o files.o: command.h gives an additional prerequisite to each of the three object files mentioned. * Similar commands work for all the targets. The commands do not need to be absolutely identical, since the automatic variable `$@' can be used to substitute the particular target to be remade into the commands (*note Automatic Variables::). For example: bigoutput littleoutput : text.g generate text.g -$(subst output,,$@) > $@ is equivalent to bigoutput : text.g generate text.g -big > bigoutput littleoutput : text.g generate text.g -little > littleoutput Here we assume the hypothetical program `generate' makes two types of output, one if given `-big' and one if given `-little'. *Note Functions for String Substitution and Analysis: Text Functions, for an explanation of the `subst' function. Suppose you would like to vary the prerequisites according to the target, much as the variable `$@' allows you to vary the commands. You cannot do this with multiple targets in an ordinary rule, but you can do it with a "static pattern rule". *Note Static Pattern Rules: Static Pattern.  File: make.info, Node: Multiple Rules, Next: Static Pattern, Prev: Multiple Targets, Up: Rules 4.11 Multiple Rules for One Target ================================== One file can be the target of several rules. All the prerequisites mentioned in all the rules are merged into one list of prerequisites for the target. If the target is older than any prerequisite from any rule, the commands are executed. There can only be one set of commands to be executed for a file. If more than one rule gives commands for the same file, `make' uses the last set given and prints an error message. (As a special case, if the file's name begins with a dot, no error message is printed. This odd behavior is only for compatibility with other implementations of `make'... you should avoid using it). Occasionally it is useful to have the same target invoke multiple commands which are defined in different parts of your makefile; you can use "double-colon rules" (*note Double-Colon::) for this. An extra rule with just prerequisites can be used to give a few extra prerequisites to many files at once. For example, makefiles often have a variable, such as `objects', containing a list of all the compiler output files in the system being made. 
An easy way to say that all of them must be recompiled if `config.h' changes is to write the following: objects = foo.o bar.o foo.o : defs.h bar.o : defs.h test.h $(objects) : config.h This could be inserted or taken out without changing the rules that really specify how to make the object files, making it a convenient form to use if you wish to add the additional prerequisite intermittently. Another wrinkle is that the additional prerequisites could be specified with a variable that you set with a command argument to `make' (*note Overriding Variables: Overriding.). For example, extradeps= $(objects) : $(extradeps) means that the command `make extradeps=foo.h' will consider `foo.h' as a prerequisite of each object file, but plain `make' will not. If none of the explicit rules for a target has commands, then `make' searches for an applicable implicit rule to find some commands (*note Using Implicit Rules: Implicit Rules.).  File: make.info, Node: Static Pattern, Next: Double-Colon, Prev: Multiple Rules, Up: Rules 4.12 Static Pattern Rules ========================= "Static pattern rules" are rules which specify multiple targets and construct the prerequisite names for each target based on the target name. They are more general than ordinary rules with multiple targets because the targets do not have to have identical prerequisites. Their prerequisites must be _analogous_, but not necessarily _identical_. * Menu: * Static Usage:: The syntax of static pattern rules. * Static versus Implicit:: When are they better than implicit rules?  File: make.info, Node: Static Usage, Next: Static versus Implicit, Prev: Static Pattern, Up: Static Pattern 4.12.1 Syntax of Static Pattern Rules ------------------------------------- Here is the syntax of a static pattern rule: TARGETS ...: TARGET-PATTERN: PREREQ-PATTERNS ... COMMANDS ... The TARGETS list specifies the targets that the rule applies to. The targets can contain wildcard characters, just like the targets of ordinary rules (*note Using Wildcard Characters in File Names: Wildcards.). The TARGET-PATTERN and PREREQ-PATTERNS say how to compute the prerequisites of each target. Each target is matched against the TARGET-PATTERN to extract a part of the target name, called the "stem". This stem is substituted into each of the PREREQ-PATTERNS to make the prerequisite names (one from each PREREQ-PATTERN). Each pattern normally contains the character `%' just once. When the TARGET-PATTERN matches a target, the `%' can match any part of the target name; this part is called the "stem". The rest of the pattern must match exactly. For example, the target `foo.o' matches the pattern `%.o', with `foo' as the stem. The targets `foo.c' and `foo.out' do not match that pattern. The prerequisite names for each target are made by substituting the stem for the `%' in each prerequisite pattern. For example, if one prerequisite pattern is `%.c', then substitution of the stem `foo' gives the prerequisite name `foo.c'. It is legitimate to write a prerequisite pattern that does not contain `%'; then this prerequisite is the same for all targets. `%' characters in pattern rules can be quoted with preceding backslashes (`\'). Backslashes that would otherwise quote `%' characters can be quoted with more backslashes. Backslashes that quote `%' characters or other backslashes are removed from the pattern before it is compared to file names or has a stem substituted into it. Backslashes that are not in danger of quoting `%' characters go unmolested.
For example, the pattern `the\%weird\\%pattern\\' has `the%weird\' preceding the operative `%' character, and `pattern\\' following it. The final two backslashes are left alone because they cannot affect any `%' character. Here is an example, which compiles each of `foo.o' and `bar.o' from the corresponding `.c' file: objects = foo.o bar.o all: $(objects) $(objects): %.o: %.c $(CC) -c $(CFLAGS) $< -o $@ Here `$<' is the automatic variable that holds the name of the prerequisite and `$@' is the automatic variable that holds the name of the target; see *Note Automatic Variables::. Each target specified must match the target pattern; a warning is issued for each target that does not. If you have a list of files, only some of which will match the pattern, you can use the `filter' function to remove nonmatching file names (*note Functions for String Substitution and Analysis: Text Functions.): files = foo.elc bar.o lose.o $(filter %.o,$(files)): %.o: %.c $(CC) -c $(CFLAGS) $< -o $@ $(filter %.elc,$(files)): %.elc: %.el emacs -f batch-byte-compile $< In this example the result of `$(filter %.o,$(files))' is `bar.o lose.o', and the first static pattern rule causes each of these object files to be updated by compiling the corresponding C source file. The result of `$(filter %.elc,$(files))' is `foo.elc', so that file is made from `foo.el'. Another example shows how to use `$*' in static pattern rules: bigoutput littleoutput : %output : text.g generate text.g -$* > $@ When the `generate' command is run, `$*' will expand to the stem, either `big' or `little'.  File: make.info, Node: Static versus Implicit, Prev: Static Usage, Up: Static Pattern 4.12.2 Static Pattern Rules versus Implicit Rules ------------------------------------------------- A static pattern rule has much in common with an implicit rule defined as a pattern rule (*note Defining and Redefining Pattern Rules: Pattern Rules.). Both have a pattern for the target and patterns for constructing the names of prerequisites. The difference is in how `make' decides _when_ the rule applies. An implicit rule _can_ apply to any target that matches its pattern, but it _does_ apply only when the target has no commands otherwise specified, and only when the prerequisites can be found. If more than one implicit rule appears applicable, only one applies; the choice depends on the order of rules. By contrast, a static pattern rule applies to the precise list of targets that you specify in the rule. It cannot apply to any other target and it invariably does apply to each of the targets specified. If two conflicting rules apply, and both have commands, that's an error. The static pattern rule can be better than an implicit rule for these reasons: * You may wish to override the usual implicit rule for a few files whose names cannot be categorized syntactically but can be given in an explicit list. * If you cannot be sure of the precise contents of the directories you are using, you may not be sure which other irrelevant files might lead `make' to use the wrong implicit rule. The choice might depend on the order in which the implicit rule search is done. With static pattern rules, there is no uncertainty: each rule applies to precisely the targets specified.  File: make.info, Node: Double-Colon, Next: Automatic Prerequisites, Prev: Static Pattern, Up: Rules 4.13 Double-Colon Rules ======================= "Double-colon" rules are rules written with `::' instead of `:' after the target names. 
They are handled differently from ordinary rules when the same target appears in more than one rule. When a target appears in multiple rules, all the rules must be the same type: all ordinary, or all double-colon. If they are double-colon, each of them is independent of the others. Each double-colon rule's commands are executed if the target is older than any prerequisites of that rule. If there are no prerequisites for that rule, its commands are always executed (even if the target already exists). This can result in executing none, any, or all of the double-colon rules. Double-colon rules with the same target are in fact completely separate from one another. Each double-colon rule is processed individually, just as rules with different targets are processed. The double-colon rules for a target are executed in the order they appear in the makefile. However, the cases where double-colon rules really make sense are those where the order of executing the commands would not matter. Double-colon rules are somewhat obscure and not often very useful; they provide a mechanism for cases in which the method used to update a target differs depending on which prerequisite files caused the update, and such cases are rare. Each double-colon rule should specify commands; if it does not, an implicit rule will be used if one applies. *Note Using Implicit Rules: Implicit Rules.  File: make.info, Node: Automatic Prerequisites, Prev: Double-Colon, Up: Rules 4.14 Generating Prerequisites Automatically =========================================== In the makefile for a program, many of the rules you need to write often say only that some object file depends on some header file. For example, if `main.c' uses `defs.h' via an `#include', you would write: main.o: defs.h You need this rule so that `make' knows that it must remake `main.o' whenever `defs.h' changes. You can see that for a large program you would have to write dozens of such rules in your makefile. And, you must always be very careful to update the makefile every time you add or remove an `#include'. To avoid this hassle, most modern C compilers can write these rules for you, by looking at the `#include' lines in the source files. Usually this is done with the `-M' option to the compiler. For example, the command: cc -M main.c generates the output: main.o : main.c defs.h Thus you no longer have to write all those rules yourself. The compiler will do it for you. Note that such a prerequisite constitutes mentioning `main.o' in a makefile, so it can never be considered an intermediate file by implicit rule search. This means that `make' won't ever remove the file after using it; *note Chains of Implicit Rules: Chained Rules. With old `make' programs, it was traditional practice to use this compiler feature to generate prerequisites on demand with a command like `make depend'. That command would create a file `depend' containing all the automatically-generated prerequisites; then the makefile could use `include' to read them in (*note Include::). In GNU `make', the feature of remaking makefiles makes this practice obsolete--you need never tell `make' explicitly to regenerate the prerequisites, because it always regenerates any makefile that is out of date. *Note Remaking Makefiles::. The practice we recommend for automatic prerequisite generation is to have one makefile corresponding to each source file. For each source file `NAME.c' there is a makefile `NAME.d' which lists what files the object file `NAME.o' depends on. 
That way only the source files that have changed need to be rescanned to produce the new prerequisites. Here is the pattern rule to generate a file of prerequisites (i.e., a makefile) called `NAME.d' from a C source file called `NAME.c': %.d: %.c @set -e; rm -f $@; \ $(CC) -M $(CPPFLAGS) $< > $@.$$$$; \ sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' < $@.$$$$ > $@; \ rm -f $@.$$$$ *Note Pattern Rules::, for information on defining pattern rules. The `-e' flag to the shell causes it to exit immediately if the `$(CC)' command (or any other command) fails (exits with a nonzero status). With the GNU C compiler, you may wish to use the `-MM' flag instead of `-M'. This omits prerequisites on system header files. *Note Options Controlling the Preprocessor: (gcc.info)Preprocessor Options, for details. The purpose of the `sed' command is to translate (for example): main.o : main.c defs.h into: main.o main.d : main.c defs.h This makes each `.d' file depend on all the source and header files that the corresponding `.o' file depends on. `make' then knows it must regenerate the prerequisites whenever any of the source or header files changes. Once you've defined the rule to remake the `.d' files, you then use the `include' directive to read them all in. *Note Include::. For example: sources = foo.c bar.c include $(sources:.c=.d) (This example uses a substitution variable reference to translate the list of source files `foo.c bar.c' into a list of prerequisite makefiles, `foo.d bar.d'. *Note Substitution Refs::, for full information on substitution references.) Since the `.d' files are makefiles like any others, `make' will remake them as necessary with no further work from you. *Note Remaking Makefiles::. Note that the `.d' files contain target definitions; you should be sure to place the `include' directive _after_ the first, default goal in your makefiles or run the risk of having a random object file become the default goal. *Note How Make Works::.  File: make.info, Node: Commands, Next: Using Variables, Prev: Rules, Up: Top 5 Writing the Commands in Rules ******************************* The commands of a rule consist of one or more shell command lines to be executed, one at a time, in the order they appear. Typically, the result of executing these commands is that the target of the rule is brought up to date. Users use many different shell programs, but commands in makefiles are always interpreted by `/bin/sh' unless the makefile specifies otherwise. *Note Command Execution: Execution. * Menu: * Command Syntax:: Command syntax features and pitfalls. * Echoing:: How to control when commands are echoed. * Execution:: How commands are executed. * Parallel:: How commands can be executed in parallel. * Errors:: What happens after a command execution error. * Interrupts:: What happens when a command is interrupted. * Recursion:: Invoking `make' from makefiles. * Sequences:: Defining canned sequences of commands. * Empty Commands:: Defining useful, do-nothing commands.  File: make.info, Node: Command Syntax, Next: Echoing, Prev: Commands, Up: Commands 5.1 Command Syntax ================== Makefiles have the unusual property that there are really two distinct syntaxes in one file. Most of the makefile uses `make' syntax (*note Writing Makefiles: Makefiles.). However, commands are meant to be interpreted by the shell and so they are written using shell syntax. 
The `make' program does not try to understand shell syntax: it performs only a very few specific translations on the content of the command before handing it to the shell. Each command line must start with a tab, except that the first command line may be attached to the target-and-prerequisites line with a semicolon in between. _Any_ line in the makefile that begins with a tab and appears in a "rule context" (that is, after a rule has been started until another rule or variable definition) will be considered a command line for that rule. Blank lines and lines of just comments may appear among the command lines; they are ignored. Some consequences of these rules include: * A blank line that begins with a tab is not blank: it's an empty command (*note Empty Commands::). * A comment in a command line is not a `make' comment; it will be passed to the shell as-is. Whether the shell treats it as a comment or not depends on your shell. * A variable definition in a "rule context" which is indented by a tab as the first character on the line, will be considered a command line, not a `make' variable definition, and passed to the shell. * A conditional expression (`ifdef', `ifeq', etc. *note Syntax of Conditionals: Conditional Syntax.) in a "rule context" which is indented by a tab as the first character on the line, will be considered a command line and be passed to the shell. * Menu: * Splitting Lines:: Breaking long command lines for readability. * Variables in Commands:: Using `make' variables in commands.  File: make.info, Node: Splitting Lines, Next: Variables in Commands, Prev: Command Syntax, Up: Command Syntax 5.1.1 Splitting Command Lines ----------------------------- One of the few ways in which `make' does interpret command lines is checking for a backslash just before the newline. As in normal makefile syntax, a single command can be split into multiple lines in the makefile by placing a backslash before each newline. A sequence of lines like this is considered a single command, and one instance of the shell will be invoked to run it. However, in contrast to how they are treated in other places in a makefile, backslash-newline pairs are _not_ removed from the command. Both the backslash and the newline characters are preserved and passed to the shell. How the backslash-newline is interpreted depends on your shell. If the first character of the next line after the backslash-newline is a tab, then that tab (and only that tab) is removed. Whitespace is never added to the command. For example, this makefile: all : @echo no\ space @echo no\ space @echo one \ space @echo one\ space consists of four separate shell commands where the output is: nospace nospace one space one space As a more complex example, this makefile: all : ; @echo 'hello \ world' ; echo "hello \ world" will run one shell with a command script of: echo 'hello \ world' ; echo "hello \ world" which, according to shell quoting rules, will yield the following output: hello \ world hello world Notice how the backslash/newline pair was removed inside the string quoted with double quotes (`"..."'), but not from the string quoted with single quotes (`'...''). This is the way the default shell (`/bin/sh') handles backslash/newline pairs. If you specify a different shell in your makefiles it may treat them differently. Sometimes you want to split a long line inside of single quotes, but you don't want the backslash-newline to appear in the quoted content. 
This is often the case when passing scripts to languages such as Perl, where extraneous backslashes inside the script can change its meaning or even be a syntax error. One simple way of handling this is to place the quoted string, or even the entire command, into a `make' variable then use the variable in the command. In this situation the newline quoting rules for makefiles will be used, and the backslash-newline will be removed. If we rewrite our example above using this method: HELLO = 'hello \ world' all : ; @echo $(HELLO) we will get output like this: hello world If you like, you can also use target-specific variables (*note Target-specific Variable Values: Target-specific.) to obtain a tighter correspondence between the variable and the command that uses it.  File: make.info, Node: Variables in Commands, Prev: Splitting Lines, Up: Command Syntax 5.1.2 Using Variables in Commands --------------------------------- The other way in which `make' processes commands is by expanding any variable references in them (*note Basics of Variable References: Reference.). This occurs after make has finished reading all the makefiles and the target is determined to be out of date; so, the commands for targets which are not rebuilt are never expanded. Variable and function references in commands have identical syntax and semantics to references elsewhere in the makefile. They also have the same quoting rules: if you want a dollar sign to appear in your command, you must double it (`$$'). For shells like the default shell, that use dollar signs to introduce variables, it's important to keep clear in your mind whether the variable you want to reference is a `make' variable (use a single dollar sign) or a shell variable (use two dollar signs). For example: LIST = one two three all: for i in $(LIST); do \ echo $$i; \ done results in the following command being passed to the shell: for i in one two three; do \ echo $i; \ done which generates the expected result: one two three  File: make.info, Node: Echoing, Next: Execution, Prev: Command Syntax, Up: Commands 5.2 Command Echoing =================== Normally `make' prints each command line before it is executed. We call this "echoing" because it gives the appearance that you are typing the commands yourself. When a line starts with `@', the echoing of that line is suppressed. The `@' is discarded before the command is passed to the shell. Typically you would use this for a command whose only effect is to print something, such as an `echo' command to indicate progress through the makefile: @echo About to make distribution files When `make' is given the flag `-n' or `--just-print' it only echoes commands, it won't execute them. *Note Summary of Options: Options Summary. In this case and only this case, even the commands starting with `@' are printed. This flag is useful for finding out which commands `make' thinks are necessary without actually doing them. The `-s' or `--silent' flag to `make' prevents all echoing, as if all commands started with `@'. A rule in the makefile for the special target `.SILENT' without prerequisites has the same effect (*note Special Built-in Target Names: Special Targets.). `.SILENT' is essentially obsolete since `@' is more flexible.  File: make.info, Node: Execution, Next: Parallel, Prev: Echoing, Up: Commands 5.3 Command Execution ===================== When it is time to execute commands to update a target, they are executed by invoking a new subshell for each command line. 
(In practice, `make' may take shortcuts that do not affect the results.) *Please note:* this implies that setting shell variables and invoking shell commands such as `cd' that set a context local to each process will not affect the following command lines.(1) If you want to use `cd' to affect the next statement, put both statements in a single command line. Then `make' will invoke one shell to run the entire line, and the shell will execute the statements in sequence. For example: foo : bar/lose cd $(@D) && gobble $(@F) > ../$@ Here we use the shell AND operator (`&&') so that if the `cd' command fails, the script will fail without trying to invoke the `gobble' command in the wrong directory, which could cause problems (in this case it would certainly cause `../foo' to be truncated, at least). * Menu: * Choosing the Shell:: How `make' chooses the shell used to run commands. ---------- Footnotes ---------- (1) On MS-DOS, the value of current working directory is *global*, so changing it _will_ affect the following command lines on those systems.  File: make.info, Node: Choosing the Shell, Prev: Execution, Up: Execution 5.3.1 Choosing the Shell ------------------------ The program used as the shell is taken from the variable `SHELL'. If this variable is not set in your makefile, the program `/bin/sh' is used as the shell. Unlike most variables, the variable `SHELL' is never set from the environment. This is because the `SHELL' environment variable is used to specify your personal choice of shell program for interactive use. It would be very bad for personal choices like this to affect the functioning of makefiles. *Note Variables from the Environment: Environment. Furthermore, when you do set `SHELL' in your makefile that value is _not_ exported in the environment to commands that `make' invokes. Instead, the value inherited from the user's environment, if any, is exported. You can override this behavior by explicitly exporting `SHELL' (*note Communicating Variables to a Sub-`make': Variables/Recursion.), forcing it to be passed in the environment to commands. However, on MS-DOS and MS-Windows the value of `SHELL' in the environment *is* used, since on those systems most users do not set this variable, and therefore it is most likely set specifically to be used by `make'. On MS-DOS, if the setting of `SHELL' is not suitable for `make', you can set the variable `MAKESHELL' to the shell that `make' should use; if set it will be used as the shell instead of the value of `SHELL'. Choosing a Shell in DOS and Windows ................................... Choosing a shell in MS-DOS and MS-Windows is much more complex than on other systems. On MS-DOS, if `SHELL' is not set, the value of the variable `COMSPEC' (which is always set) is used instead. The processing of lines that set the variable `SHELL' in Makefiles is different on MS-DOS. The stock shell, `command.com', is ridiculously limited in its functionality and many users of `make' tend to install a replacement shell. Therefore, on MS-DOS, `make' examines the value of `SHELL', and changes its behavior based on whether it points to a Unix-style or DOS-style shell. This allows reasonable functionality even if `SHELL' points to `command.com'. If `SHELL' points to a Unix-style shell, `make' on MS-DOS additionally checks whether that shell can indeed be found; if not, it ignores the line that sets `SHELL'. In MS-DOS, GNU `make' searches for the shell in the following places: 1. In the precise place pointed to by the value of `SHELL'. 
For example, if the makefile specifies `SHELL = /bin/sh', `make' will look in the directory `/bin' on the current drive. 2. In the current directory. 3. In each of the directories in the `PATH' variable, in order. In every directory it examines, `make' will first look for the specific file (`sh' in the example above). If this is not found, it will also look in that directory for that file with one of the known extensions which identify executable files. For example `.exe', `.com', `.bat', `.btm', `.sh', and some others. If any of these attempts is successful, the value of `SHELL' will be set to the full pathname of the shell as found. However, if none of these is found, the value of `SHELL' will not be changed, and thus the line that sets it will be effectively ignored. This is so `make' will only support features specific to a Unix-style shell if such a shell is actually installed on the system where `make' runs. Note that this extended search for the shell is limited to the cases where `SHELL' is set from the Makefile; if it is set in the environment or command line, you are expected to set it to the full pathname of the shell, exactly as things are on Unix. The effect of the above DOS-specific processing is that a Makefile that contains `SHELL = /bin/sh' (as many Unix makefiles do), will work on MS-DOS unaltered if you have e.g. `sh.exe' installed in some directory along your `PATH'.  File: make.info, Node: Parallel, Next: Errors, Prev: Execution, Up: Commands 5.4 Parallel Execution ====================== GNU `make' knows how to execute several commands at once. Normally, `make' will execute only one command at a time, waiting for it to finish before executing the next. However, the `-j' or `--jobs' option tells `make' to execute many commands simultaneously. On MS-DOS, the `-j' option has no effect, since that system doesn't support multi-processing. If the `-j' option is followed by an integer, this is the number of commands to execute at once; this is called the number of "job slots". If there is nothing looking like an integer after the `-j' option, there is no limit on the number of job slots. The default number of job slots is one, which means serial execution (one thing at a time). One unpleasant consequence of running several commands simultaneously is that output generated by the commands appears whenever each command sends it, so messages from different commands may be interspersed. Another problem is that two processes cannot both take input from the same device; so to make sure that only one command tries to take input from the terminal at once, `make' will invalidate the standard input streams of all but one running command. This means that attempting to read from standard input will usually be a fatal error (a `Broken pipe' signal) for most child processes if there are several. It is unpredictable which command will have a valid standard input stream (which will come from the terminal, or wherever you redirect the standard input of `make'). The first command run will always get it first, and the first command started after that one finishes will get it next, and so on. We will change how this aspect of `make' works if we find a better alternative. In the mean time, you should not rely on any command using standard input at all if you are using the parallel execution feature; but if you are not using this feature, then standard input works normally in all commands. Finally, handling recursive `make' invocations raises issues. 
For more information on this, see *Note Communicating Options to a Sub-`make': Options/Recursion. If a command fails (is killed by a signal or exits with a nonzero status), and errors are not ignored for that command (*note Errors in Commands: Errors.), the remaining command lines to remake the same target will not be run. If a command fails and the `-k' or `--keep-going' option was not given (*note Summary of Options: Options Summary.), `make' aborts execution. If make terminates for any reason (including a signal) with child processes running, it waits for them to finish before actually exiting. When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the `-l' option to tell `make' to limit the number of jobs to run at once, based on the load average. The `-l' or `--max-load' option is followed by a floating-point number. For example, -l 2.5 will not let `make' start more than one job if the load average is above 2.5. The `-l' option with no following number removes the load limit, if one was given with a previous `-l' option. More precisely, when `make' goes to start up a job, and it already has at least one job running, it checks the current load average; if it is not lower than the limit given with `-l', `make' waits until the load average goes below that limit, or until all the other jobs finish. By default, there is no load limit.  File: make.info, Node: Errors, Next: Interrupts, Prev: Parallel, Up: Commands 5.5 Errors in Commands ====================== After each shell command returns, `make' looks at its exit status. If the command completed successfully, the next command line is executed in a new shell; after the last command line is finished, the rule is finished. If there is an error (the exit status is nonzero), `make' gives up on the current rule, and perhaps on all rules. Sometimes the failure of a certain command does not indicate a problem. For example, you may use the `mkdir' command to ensure that a directory exists. If the directory already exists, `mkdir' will report an error, but you probably want `make' to continue regardless. To ignore errors in a command line, write a `-' at the beginning of the line's text (after the initial tab). The `-' is discarded before the command is passed to the shell for execution. For example, clean: -rm -f *.o This causes `rm' to continue even if it is unable to remove a file. When you run `make' with the `-i' or `--ignore-errors' flag, errors are ignored in all commands of all rules. A rule in the makefile for the special target `.IGNORE' has the same effect, if there are no prerequisites. These ways of ignoring errors are obsolete because `-' is more flexible. When errors are to be ignored, because of either a `-' or the `-i' flag, `make' treats an error return just like success, except that it prints out a message that tells you the status code the command exited with, and says that the error has been ignored. When an error happens that `make' has not been told to ignore, it implies that the current target cannot be correctly remade, and neither can any other that depends on it either directly or indirectly. No further commands will be executed for these targets, since their preconditions have not been achieved. Normally `make' gives up immediately in this circumstance, returning a nonzero status. 
However, if the `-k' or `--keep-going' flag is specified, `make' continues to consider the other prerequisites of the pending targets, remaking them if necessary, before it gives up and returns nonzero status. For example, after an error in compiling one object file, `make -k' will continue compiling other object files even though it already knows that linking them will be impossible. *Note Summary of Options: Options Summary. The usual behavior assumes that your purpose is to get the specified targets up to date; once `make' learns that this is impossible, it might as well report the failure immediately. The `-k' option says that the real purpose is to test as many of the changes made in the program as possible, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs' `compile' command passes the `-k' flag by default. Usually when a command fails, if it has changed the target file at all, the file is corrupted and cannot be used--or at least it is not completely updated. Yet the file's time stamp says that it is now up to date, so the next time `make' runs, it will not try to update that file. The situation is just the same as when the command is killed by a signal; *note Interrupts::. So generally the right thing to do is to delete the target file if the command fails after beginning to change the file. `make' will do this if `.DELETE_ON_ERROR' appears as a target. This is almost always what you want `make' to do, but it is not historical practice; so for compatibility, you must explicitly request it.  File: make.info, Node: Interrupts, Next: Recursion, Prev: Errors, Up: Commands 5.6 Interrupting or Killing `make' ================================== If `make' gets a fatal signal while a command is executing, it may delete the target file that the command was supposed to update. This is done if the target file's last-modification time has changed since `make' first checked it. The purpose of deleting the target is to make sure that it is remade from scratch when `make' is next run. Why is this? Suppose you type `Ctrl-c' while a compiler is running, and it has begun to write an object file `foo.o'. The `Ctrl-c' kills the compiler, resulting in an incomplete file whose last-modification time is newer than the source file `foo.c'. But `make' also receives the `Ctrl-c' signal and deletes this incomplete file. If `make' did not do this, the next invocation of `make' would think that `foo.o' did not require updating--resulting in a strange error message from the linker when it tries to link an object file half of which is missing. You can prevent the deletion of a target file in this way by making the special target `.PRECIOUS' depend on it. Before remaking a target, `make' checks to see whether it appears on the prerequisites of `.PRECIOUS', and thereby decides whether the target should be deleted if a signal happens. Some reasons why you might do this are that the target is updated in some atomic fashion, or exists only to record a modification-time (its contents do not matter), or must exist at all times to prevent other sorts of trouble.  File: make.info, Node: Recursion, Next: Sequences, Prev: Interrupts, Up: Commands 5.7 Recursive Use of `make' =========================== Recursive use of `make' means using `make' as a command in a makefile. This technique is useful when you want separate makefiles for various subsystems that compose a larger system. 
For example, suppose you have a subdirectory `subdir' which has its own makefile, and you would like the containing directory's makefile to run `make' on the subdirectory. You can do it by writing this: subsystem: cd subdir && $(MAKE) or, equivalently, this (*note Summary of Options: Options Summary.): subsystem: $(MAKE) -C subdir You can write recursive `make' commands just by copying this example, but there are many things to know about how they work and why, and about how the sub-`make' relates to the top-level `make'. You may also find it useful to declare targets that invoke recursive `make' commands as `.PHONY' (for more discussion on when this is useful, see *Note Phony Targets::). For your convenience, when GNU `make' starts (after it has processed any `-C' options) it sets the variable `CURDIR' to the pathname of the current working directory. This value is never touched by `make' again: in particular note that if you include files from other directories the value of `CURDIR' does not change. The value has the same precedence it would have if it were set in the makefile (by default, an environment variable `CURDIR' will not override this value). Note that setting this variable has no impact on the operation of `make' (it does not cause `make' to change its working directory, for example). * Menu: * MAKE Variable:: The special effects of using `$(MAKE)'. * Variables/Recursion:: How to communicate variables to a sub-`make'. * Options/Recursion:: How to communicate options to a sub-`make'. * -w Option:: How the `-w' or `--print-directory' option helps debug use of recursive `make' commands.  File: make.info, Node: MAKE Variable, Next: Variables/Recursion, Prev: Recursion, Up: Recursion 5.7.1 How the `MAKE' Variable Works ----------------------------------- Recursive `make' commands should always use the variable `MAKE', not the explicit command name `make', as shown here: subsystem: cd subdir && $(MAKE) The value of this variable is the file name with which `make' was invoked. If this file name was `/bin/make', then the command executed is `cd subdir && /bin/make'. If you use a special version of `make' to run the top-level makefile, the same special version will be executed for recursive invocations. As a special feature, using the variable `MAKE' in the commands of a rule alters the effects of the `-t' (`--touch'), `-n' (`--just-print'), or `-q' (`--question') option. Using the `MAKE' variable has the same effect as using a `+' character at the beginning of the command line. *Note Instead of Executing the Commands: Instead of Execution. This special feature is only enabled if the `MAKE' variable appears directly in the command script: it does not apply if the `MAKE' variable is referenced through expansion of another variable. In the latter case you must use the `+' token to get these special effects. Consider the command `make -t' in the above example. (The `-t' option marks targets as up to date without actually running any commands; see *Note Instead of Execution::.) Following the usual definition of `-t', a `make -t' command in the example would create a file named `subsystem' and do nothing else. What you really want it to do is run `cd subdir && make -t'; but that would require executing the command, and `-t' says not to execute commands. The special feature makes this do what you want: whenever a command line of a rule contains the variable `MAKE', the flags `-t', `-n' and `-q' do not apply to that line. 
Command lines containing `MAKE' are executed normally despite the presence of a flag that causes most commands not to be run. The usual `MAKEFLAGS' mechanism passes the flags to the sub-`make' (*note Communicating Options to a Sub-`make': Options/Recursion.), so your request to touch the files, or print the commands, is propagated to the subsystem.  File: make.info, Node: Variables/Recursion, Next: Options/Recursion, Prev: MAKE Variable, Up: Recursion 5.7.2 Communicating Variables to a Sub-`make' --------------------------------------------- Variable values of the top-level `make' can be passed to the sub-`make' through the environment by explicit request. These variables are defined in the sub-`make' as defaults, but do not override what is specified in the makefile used by the sub-`make' makefile unless you use the `-e' switch (*note Summary of Options: Options Summary.). To pass down, or "export", a variable, `make' adds the variable and its value to the environment for running each command. The sub-`make', in turn, uses the environment to initialize its table of variable values. *Note Variables from the Environment: Environment. Except by explicit request, `make' exports a variable only if it is either defined in the environment initially or set on the command line, and if its name consists only of letters, numbers, and underscores. Some shells cannot cope with environment variable names consisting of characters other than letters, numbers, and underscores. The value of the `make' variable `SHELL' is not exported. Instead, the value of the `SHELL' variable from the invoking environment is passed to the sub-`make'. You can force `make' to export its value for `SHELL' by using the `export' directive, described below. *Note Choosing the Shell::. The special variable `MAKEFLAGS' is always exported (unless you unexport it). `MAKEFILES' is exported if you set it to anything. `make' automatically passes down variable values that were defined on the command line, by putting them in the `MAKEFLAGS' variable. *Note Options/Recursion::. Variables are _not_ normally passed down if they were created by default by `make' (*note Variables Used by Implicit Rules: Implicit Variables.). The sub-`make' will define these for itself. If you want to export specific variables to a sub-`make', use the `export' directive, like this: export VARIABLE ... If you want to _prevent_ a variable from being exported, use the `unexport' directive, like this: unexport VARIABLE ... In both of these forms, the arguments to `export' and `unexport' are expanded, and so could be variables or functions which expand to a (list of) variable names to be (un)exported. As a convenience, you can define a variable and export it at the same time by doing: export VARIABLE = value has the same result as: VARIABLE = value export VARIABLE and export VARIABLE := value has the same result as: VARIABLE := value export VARIABLE Likewise, export VARIABLE += value is just like: VARIABLE += value export VARIABLE *Note Appending More Text to Variables: Appending. You may notice that the `export' and `unexport' directives work in `make' in the same way they work in the shell, `sh'. If you want all variables to be exported by default, you can use `export' by itself: export This tells `make' that variables which are not explicitly mentioned in an `export' or `unexport' directive should be exported. Any variable given in an `unexport' directive will still _not_ be exported. 
If you use `export' by itself to export variables by default, variables whose names contain characters other than alphanumerics and underscores will not be exported unless specifically mentioned in an `export' directive. The behavior elicited by an `export' directive by itself was the default in older versions of GNU `make'. If your makefiles depend on this behavior and you want to be compatible with old versions of `make', you can write a rule for the special target `.EXPORT_ALL_VARIABLES' instead of using the `export' directive. This will be ignored by old `make's, while the `export' directive will cause a syntax error. Likewise, you can use `unexport' by itself to tell `make' _not_ to export variables by default. Since this is the default behavior, you would only need to do this if `export' had been used by itself earlier (in an included makefile, perhaps). You *cannot* use `export' and `unexport' by themselves to have variables exported for some commands and not for others. The last `export' or `unexport' directive that appears by itself determines the behavior for the entire run of `make'. As a special feature, the variable `MAKELEVEL' is changed when it is passed down from level to level. This variable's value is a string which is the depth of the level as a decimal number. The value is `0' for the top-level `make'; `1' for a sub-`make', `2' for a sub-sub-`make', and so on. The incrementation happens when `make' sets up the environment for a command. The main use of `MAKELEVEL' is to test it in a conditional directive (*note Conditional Parts of Makefiles: Conditionals.); this way you can write a makefile that behaves one way if run recursively and another way if run directly by you. You can use the variable `MAKEFILES' to cause all sub-`make' commands to use additional makefiles. The value of `MAKEFILES' is a whitespace-separated list of file names. This variable, if defined in the outer-level makefile, is passed down through the environment; then it serves as a list of extra makefiles for the sub-`make' to read before the usual or specified ones. *Note The Variable `MAKEFILES': MAKEFILES Variable.  File: make.info, Node: Options/Recursion, Next: -w Option, Prev: Variables/Recursion, Up: Recursion 5.7.3 Communicating Options to a Sub-`make' ------------------------------------------- Flags such as `-s' and `-k' are passed automatically to the sub-`make' through the variable `MAKEFLAGS'. This variable is set up automatically by `make' to contain the flag letters that `make' received. Thus, if you do `make -ks' then `MAKEFLAGS' gets the value `ks'. As a consequence, every sub-`make' gets a value for `MAKEFLAGS' in its environment. In response, it takes the flags from that value and processes them as if they had been given as arguments. *Note Summary of Options: Options Summary. Likewise variables defined on the command line are passed to the sub-`make' through `MAKEFLAGS'. Words in the value of `MAKEFLAGS' that contain `=', `make' treats as variable definitions just as if they appeared on the command line. *Note Overriding Variables: Overriding. The options `-C', `-f', `-o', and `-W' are not put into `MAKEFLAGS'; these options are not passed down. The `-j' option is a special case (*note Parallel Execution: Parallel.). 
If you set it to some numeric value `N' and your operating system supports it (most any UNIX system will; others typically won't), the parent `make' and all the sub-`make's will communicate to ensure that there are only `N' jobs running at the same time between them all. Note that any job that is marked recursive (*note Instead of Executing the Commands: Instead of Execution.) doesn't count against the total jobs (otherwise we could get `N' sub-`make's running and have no slots left over for any real work!) If your operating system doesn't support the above communication, then `-j 1' is always put into `MAKEFLAGS' instead of the value you specified. This is because if the `-j' option were passed down to sub-`make's, you would get many more jobs running in parallel than you asked for. If you give `-j' with no numeric argument, meaning to run as many jobs as possible in parallel, this is passed down, since multiple infinities are no more than one. If you do not want to pass the other flags down, you must change the value of `MAKEFLAGS', like this: subsystem: cd subdir && $(MAKE) MAKEFLAGS= The command line variable definitions really appear in the variable `MAKEOVERRIDES', and `MAKEFLAGS' contains a reference to this variable. If you do want to pass flags down normally, but don't want to pass down the command line variable definitions, you can reset `MAKEOVERRIDES' to empty, like this: MAKEOVERRIDES = This is not usually useful to do. However, some systems have a small fixed limit on the size of the environment, and putting so much information into the value of `MAKEFLAGS' can exceed it. If you see the error message `Arg list too long', this may be the problem. (For strict compliance with POSIX.2, changing `MAKEOVERRIDES' does not affect `MAKEFLAGS' if the special target `.POSIX' appears in the makefile. You probably do not care about this.) A similar variable `MFLAGS' exists also, for historical compatibility. It has the same value as `MAKEFLAGS' except that it does not contain the command line variable definitions, and it always begins with a hyphen unless it is empty (`MAKEFLAGS' begins with a hyphen only when it begins with an option that has no single-letter version, such as `--warn-undefined-variables'). `MFLAGS' was traditionally used explicitly in the recursive `make' command, like this: subsystem: cd subdir && $(MAKE) $(MFLAGS) but now `MAKEFLAGS' makes this usage redundant. If you want your makefiles to be compatible with old `make' programs, use this technique; it will work fine with more modern `make' versions too. The `MAKEFLAGS' variable can also be useful if you want to have certain options, such as `-k' (*note Summary of Options: Options Summary.), set each time you run `make'. You simply put a value for `MAKEFLAGS' in your environment. You can also set `MAKEFLAGS' in a makefile, to specify additional flags that should also be in effect for that makefile. (Note that you cannot use `MFLAGS' this way. That variable is set only for compatibility; `make' does not interpret a value you set for it in any way.) When `make' interprets the value of `MAKEFLAGS' (either from the environment or from a makefile), it first prepends a hyphen if the value does not already begin with one. Then it chops the value into words separated by blanks, and parses these words as if they were options given on the command line (except that `-C', `-f', `-h', `-o', `-W', and their long-named versions are ignored; and there is no error for an invalid option). 
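As an illustration of setting `MAKEFLAGS' in a makefile and of resetting `MAKEOVERRIDES', both described above, here is a small sketch; the `-k' flag and the directory name `subdir' are arbitrary choices for this example, not a recommendation:

     # Behave as if `-k' had been given on the command line every
     # time this makefile is used; `make' re-parses the appended
     # word as an option.
     MAKEFLAGS += -k

     # Pass flags down normally, but do not pass command line
     # variable definitions to the sub-`make'.
     MAKEOVERRIDES =

     subsystem:
             cd subdir && $(MAKE)
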
If you do put `MAKEFLAGS' in your environment, you should be sure not to include any options that will drastically affect the actions of `make' and undermine the purpose of makefiles and of `make' itself. For instance, the `-t', `-n', and `-q' options, if put in one of these variables, could have disastrous consequences and would certainly have at least surprising and probably annoying effects.  File: make.info, Node: -w Option, Prev: Options/Recursion, Up: Recursion 5.7.4 The `--print-directory' Option ------------------------------------ If you use several levels of recursive `make' invocations, the `-w' or `--print-directory' option can make the output a lot easier to understand by showing each directory as `make' starts processing it and as `make' finishes processing it. For example, if `make -w' is run in the directory `/u/gnu/make', `make' will print a line of the form: make: Entering directory `/u/gnu/make'. before doing anything else, and a line of the form: make: Leaving directory `/u/gnu/make'. when processing is completed. Normally, you do not need to specify this option because `make' does it for you: `-w' is turned on automatically when you use the `-C' option, and in sub-`make's. `make' will not automatically turn on `-w' if you also use `-s', which says to be silent, or if you use `--no-print-directory' to explicitly disable it.  File: make.info, Node: Sequences, Next: Empty Commands, Prev: Recursion, Up: Commands 5.8 Defining Canned Command Sequences ===================================== When the same sequence of commands is useful in making various targets, you can define it as a canned sequence with the `define' directive, and refer to the canned sequence from the rules for those targets. The canned sequence is actually a variable, so the name must not conflict with other variable names. Here is an example of defining a canned sequence of commands: define run-yacc yacc $(firstword $^) mv y.tab.c $@ endef Here `run-yacc' is the name of the variable being defined; `endef' marks the end of the definition; the lines in between are the commands. The `define' directive does not expand variable references and function calls in the canned sequence; the `$' characters, parentheses, variable names, and so on, all become part of the value of the variable you are defining. *Note Defining Variables Verbatim: Defining, for a complete explanation of `define'. The first command in this example runs Yacc on the first prerequisite of whichever rule uses the canned sequence. The output file from Yacc is always named `y.tab.c'. The second command moves the output to the rule's target file name. To use the canned sequence, substitute the variable into the commands of a rule. You can substitute it like any other variable (*note Basics of Variable References: Reference.). Because variables defined by `define' are recursively expanded variables, all the variable references you wrote inside the `define' are expanded now. For example: foo.c : foo.y $(run-yacc) `foo.y' will be substituted for the variable `$^' when it occurs in `run-yacc''s value, and `foo.c' for `$@'. This is a realistic example, but this particular one is not needed in practice because `make' has an implicit rule to figure out these commands based on the file names involved (*note Using Implicit Rules: Implicit Rules.). In command execution, each line of a canned sequence is treated just as if the line appeared on its own in the rule, preceded by a tab. In particular, `make' invokes a separate subshell for each line. 
You can use the special prefix characters that affect command lines (`@', `-', and `+') on each line of a canned sequence. *Note Writing the Commands in Rules: Commands. For example, using this canned sequence: define frobnicate @echo "frobnicating target $@" frob-step-1 $< -o $@-step-1 frob-step-2 $@-step-1 -o $@ endef `make' will not echo the first line, the `echo' command. But it _will_ echo the following two command lines. On the other hand, prefix characters on the command line that refers to a canned sequence apply to every line in the sequence. So the rule: frob.out: frob.in @$(frobnicate) does not echo _any_ commands. (*Note Command Echoing: Echoing, for a full explanation of `@'.)  File: make.info, Node: Empty Commands, Prev: Sequences, Up: Commands 5.9 Using Empty Commands ======================== It is sometimes useful to define commands which do nothing. This is done simply by giving a command that consists of nothing but whitespace. For example: target: ; defines an empty command string for `target'. You could also use a line beginning with a tab character to define an empty command string, but this would be confusing because such a line looks empty. You may be wondering why you would want to define a command string that does nothing. The only reason this is useful is to prevent a target from getting implicit commands (from implicit rules or the `.DEFAULT' special target; *note Implicit Rules:: and *note Defining Last-Resort Default Rules: Last Resort.). You may be inclined to define empty command strings for targets that are not actual files, but only exist so that their prerequisites can be remade. However, this is not the best way to do that, because the prerequisites may not be remade properly if the target file actually does exist. *Note Phony Targets: Phony Targets, for a better way to do this.  File: make.info, Node: Using Variables, Next: Conditionals, Prev: Commands, Up: Top 6 How to Use Variables ********************** A "variable" is a name defined in a makefile to represent a string of text, called the variable's "value". These values are substituted by explicit request into targets, prerequisites, commands, and other parts of the makefile. (In some other versions of `make', variables are called "macros".) Variables and functions in all parts of a makefile are expanded when read, except for the shell commands in rules, the right-hand sides of variable definitions using `=', and the bodies of variable definitions using the `define' directive. Variables can represent lists of file names, options to pass to compilers, programs to run, directories to look in for source files, directories to write output in, or anything else you can imagine. A variable name may be any sequence of characters not containing `:', `#', `=', or leading or trailing whitespace. However, variable names containing characters other than letters, numbers, and underscores should be avoided, as they may be given special meanings in the future, and with some shells they cannot be passed through the environment to a sub-`make' (*note Communicating Variables to a Sub-`make': Variables/Recursion.). Variable names are case-sensitive. The names `foo', `FOO', and `Foo' all refer to different variables. 
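To see the case distinction in action, consider this contrived sketch; the variable names and the `show' target are invented purely for illustration:

     foo = lower case value
     FOO = UPPER CASE VALUE

     show:
             @echo '$(foo)'
             @echo '$(FOO)'

Running `make show' prints the two values on separate lines, because `foo' and `FOO' are entirely unrelated variables.
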
It is traditional to use upper case letters in variable names, but we recommend using lower case letters for variable names that serve internal purposes in the makefile, and reserving upper case for parameters that control implicit rules or for parameters that the user should override with command options (*note Overriding Variables: Overriding.). A few variables have names that are a single punctuation character or just a few characters. These are the "automatic variables", and they have particular specialized uses. *Note Automatic Variables::. * Menu: * Reference:: How to use the value of a variable. * Flavors:: Variables come in two flavors. * Advanced:: Advanced features for referencing a variable. * Values:: All the ways variables get their values. * Setting:: How to set a variable in the makefile. * Appending:: How to append more text to the old value of a variable. * Override Directive:: How to set a variable in the makefile even if the user has set it with a command argument. * Defining:: An alternate way to set a variable to a verbatim string. * Environment:: Variable values can come from the environment. * Target-specific:: Variable values can be defined on a per-target basis. * Pattern-specific:: Target-specific variable values can be applied to a group of targets that match a pattern.  File: make.info, Node: Reference, Next: Flavors, Prev: Using Variables, Up: Using Variables 6.1 Basics of Variable References ================================= To substitute a variable's value, write a dollar sign followed by the name of the variable in parentheses or braces: either `$(foo)' or `${foo}' is a valid reference to the variable `foo'. This special significance of `$' is why you must write `$$' to have the effect of a single dollar sign in a file name or command. Variable references can be used in any context: targets, prerequisites, commands, most directives, and new variable values. Here is an example of a common case, where a variable holds the names of all the object files in a program: objects = program.o foo.o utils.o program : $(objects) cc -o program $(objects) $(objects) : defs.h Variable references work by strict textual substitution. Thus, the rule foo = c prog.o : prog.$(foo) $(foo)$(foo) -$(foo) prog.$(foo) could be used to compile a C program `prog.c'. Since spaces before the variable value are ignored in variable assignments, the value of `foo' is precisely `c'. (Don't actually write your makefiles this way!) A dollar sign followed by a character other than a dollar sign, open-parenthesis or open-brace treats that single character as the variable name. Thus, you could reference the variable `x' with `$x'. However, this practice is strongly discouraged, except in the case of the automatic variables (*note Automatic Variables::).  File: make.info, Node: Flavors, Next: Advanced, Prev: Reference, Up: Using Variables 6.2 The Two Flavors of Variables ================================ There are two ways that a variable in GNU `make' can have a value; we call them the two "flavors" of variables. The two flavors are distinguished in how they are defined and in what they do when expanded. The first flavor of variable is a "recursively expanded" variable. Variables of this sort are defined by lines using `=' (*note Setting Variables: Setting.) or by the `define' directive (*note Defining Variables Verbatim: Defining.). 
The value you specify is installed verbatim; if it contains references to other variables, these references are expanded whenever this variable is substituted (in the course of expanding some other string). When this happens, it is called "recursive expansion". For example, foo = $(bar) bar = $(ugh) ugh = Huh? all:;echo $(foo) will echo `Huh?': `$(foo)' expands to `$(bar)' which expands to `$(ugh)' which finally expands to `Huh?'. This flavor of variable is the only sort supported by other versions of `make'. It has its advantages and its disadvantages. An advantage (most would say) is that: CFLAGS = $(include_dirs) -O include_dirs = -Ifoo -Ibar will do what was intended: when `CFLAGS' is expanded in a command, it will expand to `-Ifoo -Ibar -O'. A major disadvantage is that you cannot append something on the end of a variable, as in CFLAGS = $(CFLAGS) -O because it will cause an infinite loop in the variable expansion. (Actually `make' detects the infinite loop and reports an error.) Another disadvantage is that any functions (*note Functions for Transforming Text: Functions.) referenced in the definition will be executed every time the variable is expanded. This makes `make' run slower; worse, it causes the `wildcard' and `shell' functions to give unpredictable results because you cannot easily control when they are called, or even how many times. To avoid all the problems and inconveniences of recursively expanded variables, there is another flavor: simply expanded variables. "Simply expanded variables" are defined by lines using `:=' (*note Setting Variables: Setting.). The value of a simply expanded variable is scanned once and for all, expanding any references to other variables and functions, when the variable is defined. The actual value of the simply expanded variable is the result of expanding the text that you write. It does not contain any references to other variables; it contains their values _as of the time this variable was defined_. Therefore, x := foo y := $(x) bar x := later is equivalent to y := foo bar x := later When a simply expanded variable is referenced, its value is substituted verbatim. Here is a somewhat more complicated example, illustrating the use of `:=' in conjunction with the `shell' function. (*Note The `shell' Function: Shell Function.) This example also shows use of the variable `MAKELEVEL', which is changed when it is passed down from level to level. (*Note Communicating Variables to a Sub-`make': Variables/Recursion, for information about `MAKELEVEL'.) ifeq (0,${MAKELEVEL}) whoami := $(shell whoami) host-type := $(shell arch) MAKE := ${MAKE} host-type=${host-type} whoami=${whoami} endif An advantage of this use of `:=' is that a typical `descend into a directory' command then looks like this: ${subdirs}: ${MAKE} -C $@ all Simply expanded variables generally make complicated makefile programming more predictable because they work like variables in most programming languages. They allow you to redefine a variable using its own value (or its value processed in some way by one of the expansion functions) and to use the expansion functions much more efficiently (*note Functions for Transforming Text: Functions.). You can also use them to introduce controlled leading whitespace into variable values. 
Leading whitespace characters are discarded from your input before substitution of variable references and function calls; this means you can include leading spaces in a variable value by protecting them with variable references, like this: nullstring := space := $(nullstring) # end of the line Here the value of the variable `space' is precisely one space. The comment `# end of the line' is included here just for clarity. Since trailing space characters are _not_ stripped from variable values, just a space at the end of the line would have the same effect (but be rather hard to read). If you put whitespace at the end of a variable value, it is a good idea to put a comment like that at the end of the line to make your intent clear. Conversely, if you do _not_ want any whitespace characters at the end of your variable value, you must remember not to put a random comment on the end of the line after some whitespace, such as this: dir := /foo/bar # directory to put the frobs in Here the value of the variable `dir' is `/foo/bar ' (with four trailing spaces), which was probably not the intention. (Imagine something like `$(dir)/file' with this definition!) There is another assignment operator for variables, `?='. This is called a conditional variable assignment operator, because it only has an effect if the variable is not yet defined. This statement: FOO ?= bar is exactly equivalent to this (*note The `origin' Function: Origin Function.): ifeq ($(origin FOO), undefined) FOO = bar endif Note that a variable set to an empty value is still defined, so `?=' will not set that variable.  File: make.info, Node: Advanced, Next: Values, Prev: Flavors, Up: Using Variables 6.3 Advanced Features for Reference to Variables ================================================ This section describes some advanced features you can use to reference variables in more flexible ways. * Menu: * Substitution Refs:: Referencing a variable with substitutions on the value. * Computed Names:: Computing the name of the variable to refer to.  File: make.info, Node: Substitution Refs, Next: Computed Names, Prev: Advanced, Up: Advanced 6.3.1 Substitution References ----------------------------- A "substitution reference" substitutes the value of a variable with alterations that you specify. It has the form `$(VAR:A=B)' (or `${VAR:A=B}') and its meaning is to take the value of the variable VAR, replace every A at the end of a word with B in that value, and substitute the resulting string. When we say "at the end of a word", we mean that A must appear either followed by whitespace or at the end of the value in order to be replaced; other occurrences of A in the value are unaltered. For example: foo := a.o b.o c.o bar := $(foo:.o=.c) sets `bar' to `a.c b.c c.c'. *Note Setting Variables: Setting. A substitution reference is actually an abbreviation for use of the `patsubst' expansion function (*note Functions for String Substitution and Analysis: Text Functions.). We provide substitution references as well as `patsubst' for compatibility with other implementations of `make'. Another type of substitution reference lets you use the full power of the `patsubst' function. It has the same form `$(VAR:A=B)' described above, except that now A must contain a single `%' character. This case is equivalent to `$(patsubst A,B,$(VAR))'. *Note Functions for String Substitution and Analysis: Text Functions, for a description of the `patsubst' function. For example: foo := a.o b.o c.o bar := $(foo:%.o=%.c) sets `bar' to `a.c b.c c.c'.  
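The last example can also be written with an explicit call to `patsubst'; the name `baz' below is invented for illustration:

     baz := $(patsubst %.o,%.c,$(foo))

This sets `baz' to `a.c b.c c.c', just as the substitution reference did.
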
File: make.info, Node: Computed Names, Prev: Substitution Refs, Up: Advanced 6.3.2 Computed Variable Names ----------------------------- Computed variable names are a complicated concept needed only for sophisticated makefile programming. For most purposes you need not consider them, except to know that making a variable with a dollar sign in its name might have strange results. However, if you are the type that wants to understand everything, or you are actually interested in what they do, read on. Variables may be referenced inside the name of a variable. This is called a "computed variable name" or a "nested variable reference". For example, x = y y = z a := $($(x)) defines `a' as `z': the `$(x)' inside `$($(x))' expands to `y', so `$($(x))' expands to `$(y)' which in turn expands to `z'. Here the name of the variable to reference is not stated explicitly; it is computed by expansion of `$(x)'. The reference `$(x)' here is nested within the outer variable reference. The previous example shows two levels of nesting, but any number of levels is possible. For example, here are three levels: x = y y = z z = u a := $($($(x))) Here the innermost `$(x)' expands to `y', so `$($(x))' expands to `$(y)' which in turn expands to `z'; now we have `$(z)', which becomes `u'. References to recursively-expanded variables within a variable name are reexpanded in the usual fashion. For example: x = $(y) y = z z = Hello a := $($(x)) defines `a' as `Hello': `$($(x))' becomes `$($(y))' which becomes `$(z)' which becomes `Hello'. Nested variable references can also contain modified references and function invocations (*note Functions for Transforming Text: Functions.), just like any other reference. For example, using the `subst' function (*note Functions for String Substitution and Analysis: Text Functions.): x = variable1 variable2 := Hello y = $(subst 1,2,$(x)) z = y a := $($($(z))) eventually defines `a' as `Hello'. It is doubtful that anyone would ever want to write a nested reference as convoluted as this one, but it works: `$($($(z)))' expands to `$($(y))' which becomes `$($(subst 1,2,$(x)))'. This gets the value `variable1' from `x' and changes it by substitution to `variable2', so that the entire string becomes `$(variable2)', a simple variable reference whose value is `Hello'. A computed variable name need not consist entirely of a single variable reference. It can contain several variable references, as well as some invariant text. For example, a_dirs := dira dirb 1_dirs := dir1 dir2 a_files := filea fileb 1_files := file1 file2 ifeq "$(use_a)" "yes" a1 := a else a1 := 1 endif ifeq "$(use_dirs)" "yes" df := dirs else df := files endif dirs := $($(a1)_$(df)) will give `dirs' the same value as `a_dirs', `1_dirs', `a_files' or `1_files' depending on the settings of `use_a' and `use_dirs'. Computed variable names can also be used in substitution references: a_objects := a.o b.o c.o 1_objects := 1.o 2.o 3.o sources := $($(a1)_objects:.o=.c) defines `sources' as either `a.c b.c c.c' or `1.c 2.c 3.c', depending on the value of `a1'. The only restriction on this sort of use of nested variable references is that they cannot specify part of the name of a function to be called. This is because the test for a recognized function name is done before the expansion of nested references. 
For example, ifdef do_sort func := sort else func := strip endif bar := a d b g q c foo := $($(func) $(bar)) attempts to give `foo' the value of the variable `sort a d b g q c' or `strip a d b g q c', rather than giving `a d b g q c' as the argument to either the `sort' or the `strip' function. This restriction could be removed in the future if that change is shown to be a good idea. You can also use computed variable names in the left-hand side of a variable assignment, or in a `define' directive, as in: dir = foo $(dir)_sources := $(wildcard $(dir)/*.c) define $(dir)_print lpr $($(dir)_sources) endef This example defines the variables `dir', `foo_sources', and `foo_print'. Note that "nested variable references" are quite different from "recursively expanded variables" (*note The Two Flavors of Variables: Flavors.), though both are used together in complex ways when doing makefile programming.  File: make.info, Node: Values, Next: Setting, Prev: Advanced, Up: Using Variables 6.4 How Variables Get Their Values ================================== Variables can get values in several different ways: * You can specify an overriding value when you run `make'. *Note Overriding Variables: Overriding. * You can specify a value in the makefile, either with an assignment (*note Setting Variables: Setting.) or with a verbatim definition (*note Defining Variables Verbatim: Defining.). * Variables in the environment become `make' variables. *Note Variables from the Environment: Environment. * Several "automatic" variables are given new values for each rule. Each of these has a single conventional use. *Note Automatic Variables::. * Several variables have constant initial values. *Note Variables Used by Implicit Rules: Implicit Variables.  File: make.info, Node: Setting, Next: Appending, Prev: Values, Up: Using Variables 6.5 Setting Variables ===================== To set a variable from the makefile, write a line starting with the variable name followed by `=' or `:='. Whatever follows the `=' or `:=' on the line becomes the value. For example, objects = main.o foo.o bar.o utils.o defines a variable named `objects'. Whitespace around the variable name and immediately after the `=' is ignored. Variables defined with `=' are "recursively expanded" variables. Variables defined with `:=' are "simply expanded" variables; these definitions can contain variable references which will be expanded before the definition is made. *Note The Two Flavors of Variables: Flavors. The variable name may contain function and variable references, which are expanded when the line is read to find the actual variable name to use. There is no limit on the length of the value of a variable except the amount of swapping space on the computer. When a variable definition is long, it is a good idea to break it into several lines by inserting backslash-newline at convenient places in the definition. This will not affect the functioning of `make', but it will make the makefile easier to read. Most variable names are considered to have the empty string as a value if you have never set them. Several variables have built-in initial values that are not empty, but you can set them in the usual ways (*note Variables Used by Implicit Rules: Implicit Variables.). Several special variables are set automatically to a new value for each rule; these are called the "automatic" variables (*note Automatic Variables::). 
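For instance, a variable that has never been set simply expands to nothing; in this trivial sketch the name `never_set' is made up:

     check-empty:
             @echo "never_set expands to [$(never_set)]"

Running `make check-empty' prints `never_set expands to []'.
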
If you'd like a variable to be set to a value only if it's not already set, then you can use the shorthand operator `?=' instead of `='. These two settings of the variable `FOO' are identical (*note The `origin' Function: Origin Function.): FOO ?= bar and ifeq ($(origin FOO), undefined) FOO = bar endif  File: make.info, Node: Appending, Next: Override Directive, Prev: Setting, Up: Using Variables 6.6 Appending More Text to Variables ==================================== Often it is useful to add more text to the value of a variable already defined. You do this with a line containing `+=', like this: objects += another.o This takes the value of the variable `objects', and adds the text `another.o' to it (preceded by a single space). Thus: objects = main.o foo.o bar.o utils.o objects += another.o sets `objects' to `main.o foo.o bar.o utils.o another.o'. Using `+=' is similar to: objects = main.o foo.o bar.o utils.o objects := $(objects) another.o but differs in ways that become important when you use more complex values. When the variable in question has not been defined before, `+=' acts just like normal `=': it defines a recursively-expanded variable. However, when there _is_ a previous definition, exactly what `+=' does depends on what flavor of variable you defined originally. *Note The Two Flavors of Variables: Flavors, for an explanation of the two flavors of variables. When you add to a variable's value with `+=', `make' acts essentially as if you had included the extra text in the initial definition of the variable. If you defined it first with `:=', making it a simply-expanded variable, `+=' adds to that simply-expanded definition, and expands the new text before appending it to the old value just as `:=' does (see *Note Setting Variables: Setting, for a full explanation of `:='). In fact, variable := value variable += more is exactly equivalent to: variable := value variable := $(variable) more On the other hand, when you use `+=' with a variable that you defined first to be recursively-expanded using plain `=', `make' does something a bit different. Recall that when you define a recursively-expanded variable, `make' does not expand the value you set for variable and function references immediately. Instead it stores the text verbatim, and saves these variable and function references to be expanded later, when you refer to the new variable (*note The Two Flavors of Variables: Flavors.). When you use `+=' on a recursively-expanded variable, it is this unexpanded text to which `make' appends the new text you specify. variable = value variable += more is roughly equivalent to: temp = value variable = $(temp) more except that of course it never defines a variable called `temp'. The importance of this comes when the variable's old value contains variable references. Take this common example: CFLAGS = $(includes) -O ... CFLAGS += -pg # enable profiling The first line defines the `CFLAGS' variable with a reference to another variable, `includes'. (`CFLAGS' is used by the rules for C compilation; *note Catalogue of Implicit Rules: Catalogue of Rules.) Using `=' for the definition makes `CFLAGS' a recursively-expanded variable, meaning `$(includes) -O' is _not_ expanded when `make' processes the definition of `CFLAGS'. Thus, `includes' need not be defined yet for its value to take effect. It only has to be defined before any reference to `CFLAGS'. 
If we tried to append to the value of `CFLAGS' without using `+=', we might do it like this: CFLAGS := $(CFLAGS) -pg # enable profiling This is pretty close, but not quite what we want. Using `:=' redefines `CFLAGS' as a simply-expanded variable; this means `make' expands the text `$(CFLAGS) -pg' before setting the variable. If `includes' is not yet defined, we get ` -O -pg', and a later definition of `includes' will have no effect. Conversely, by using `+=' we set `CFLAGS' to the _unexpanded_ value `$(includes) -O -pg'. Thus we preserve the reference to `includes', so if that variable gets defined at any later point, a reference like `$(CFLAGS)' still uses its value.  File: make.info, Node: Override Directive, Next: Defining, Prev: Appending, Up: Using Variables 6.7 The `override' Directive ============================ If a variable has been set with a command argument (*note Overriding Variables: Overriding.), then ordinary assignments in the makefile are ignored. If you want to set the variable in the makefile even though it was set with a command argument, you can use an `override' directive, which is a line that looks like this: override VARIABLE = VALUE or override VARIABLE := VALUE To append more text to a variable defined on the command line, use: override VARIABLE += MORE TEXT *Note Appending More Text to Variables: Appending. The `override' directive was not invented for escalation in the war between makefiles and command arguments. It was invented so you can alter and add to values that the user specifies with command arguments. For example, suppose you always want the `-g' switch when you run the C compiler, but you would like to allow the user to specify the other switches with a command argument just as usual. You could use this `override' directive: override CFLAGS += -g You can also use `override' directives with `define' directives. This is done as you might expect: override define foo bar endef *Note Defining Variables Verbatim: Defining.  File: make.info, Node: Defining, Next: Environment, Prev: Override Directive, Up: Using Variables 6.8 Defining Variables Verbatim =============================== Another way to set the value of a variable is to use the `define' directive. This directive has an unusual syntax which allows newline characters to be included in the value, which is convenient for defining both canned sequences of commands (*note Defining Canned Command Sequences: Sequences.), and also sections of makefile syntax to use with `eval' (*note Eval Function::). The `define' directive is followed on the same line by the name of the variable and nothing more. The value to give the variable appears on the following lines. The end of the value is marked by a line containing just the word `endef'. Aside from this difference in syntax, `define' works just like `=': it creates a recursively-expanded variable (*note The Two Flavors of Variables: Flavors.). The variable name may contain function and variable references, which are expanded when the directive is read to find the actual variable name to use. You may nest `define' directives: `make' will keep track of nested directives and report an error if they are not all properly closed with `endef'. Note that lines beginning with tab characters are considered part of a command script, so any `define' or `endef' strings appearing on such a line will not be considered `make' operators. 
define two-lines echo foo echo $(bar) endef The value in an ordinary assignment cannot contain a newline; but the newlines that separate the lines of the value in a `define' become part of the variable's value (except for the final newline which precedes the `endef' and is not considered part of the value). When used in a command script, the previous example is functionally equivalent to this: two-lines = echo foo; echo $(bar) since two commands separated by semicolon behave much like two separate shell commands. However, note that using two separate lines means `make' will invoke the shell twice, running an independent subshell for each line. *Note Command Execution: Execution. If you want variable definitions made with `define' to take precedence over command-line variable definitions, you can use the `override' directive together with `define': override define two-lines foo $(bar) endef *Note The `override' Directive: Override Directive.  File: make.info, Node: Environment, Next: Target-specific, Prev: Defining, Up: Using Variables 6.9 Variables from the Environment ================================== Variables in `make' can come from the environment in which `make' is run. Every environment variable that `make' sees when it starts up is transformed into a `make' variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment. (If the `-e' flag is specified, then values from the environment override assignments in the makefile. *Note Summary of Options: Options Summary. But this is not recommended practice.) Thus, by setting the variable `CFLAGS' in your environment, you can cause all C compilations in most makefiles to use the compiler switches you prefer. This is safe for variables with standard or conventional meanings because you know that no makefile will use them for other things. (Note this is not totally reliable; some makefiles set `CFLAGS' explicitly and therefore are not affected by the value in the environment.) When `make' runs a command script, variables defined in the makefile are placed into the environment of that command. This allows you to pass values to sub-`make' invocations (*note Recursive Use of `make': Recursion.). By default, only variables that came from the environment or the command line are passed to recursive invocations. You can use the `export' directive to pass other variables. *Note Communicating Variables to a Sub-`make': Variables/Recursion, for full details. Other use of variables from the environment is not recommended. It is not wise for makefiles to depend for their functioning on environment variables set up outside their control, since this would cause different users to get different results from the same makefile. This is against the whole purpose of most makefiles. Such problems would be especially likely with the variable `SHELL', which is normally present in the environment to specify the user's choice of interactive shell. It would be very undesirable for this choice to affect `make'; so, `make' handles the `SHELL' environment variable in a special way; see *Note Choosing the Shell::.  File: make.info, Node: Target-specific, Next: Pattern-specific, Prev: Environment, Up: Using Variables 6.10 Target-specific Variable Values ==================================== Variable values in `make' are usually global; that is, they are the same regardless of where they are evaluated (unless they're reset, of course). 
One exception to that is automatic variables (*note Automatic Variables::). The other exception is "target-specific variable values". This feature allows you to define different values for the same variable, based on the target that `make' is currently building. As with automatic variables, these values are only available within the context of a target's command script (and in other target-specific assignments). Set a target-specific variable value like this: TARGET ... : VARIABLE-ASSIGNMENT or like this: TARGET ... : override VARIABLE-ASSIGNMENT or like this: TARGET ... : export VARIABLE-ASSIGNMENT Multiple TARGET values create a target-specific variable value for each member of the target list individually. The VARIABLE-ASSIGNMENT can be any valid form of assignment; recursive (`='), static (`:='), appending (`+='), or conditional (`?='). All variables that appear within the VARIABLE-ASSIGNMENT are evaluated within the context of the target: thus, any previously-defined target-specific variable values will be in effect. Note that this variable is actually distinct from any "global" value: the two variables do not have to have the same flavor (recursive vs. static). Target-specific variables have the same priority as any other makefile variable. Variables provided on the command-line (and in the environment if the `-e' option is in force) will take precedence. Specifying the `override' directive will allow the target-specific variable value to be preferred. There is one more special feature of target-specific variables: when you define a target-specific variable that variable value is also in effect for all prerequisites of this target, and all their prerequisites, etc. (unless those prerequisites override that variable with their own target-specific variable value). So, for example, a statement like this: prog : CFLAGS = -g prog : prog.o foo.o bar.o will set `CFLAGS' to `-g' in the command script for `prog', but it will also set `CFLAGS' to `-g' in the command scripts that create `prog.o', `foo.o', and `bar.o', and any command scripts which create their prerequisites. Be aware that a given prerequisite will only be built once per invocation of make, at most. If the same file is a prerequisite of multiple targets, and each of those targets has a different value for the same target-specific variable, then the first target to be built will cause that prerequisite to be built and the prerequisite will inherit the target-specific value from the first target. It will ignore the target-specific values from any other targets.  File: make.info, Node: Pattern-specific, Prev: Target-specific, Up: Using Variables 6.11 Pattern-specific Variable Values ===================================== In addition to target-specific variable values (*note Target-specific Variable Values: Target-specific.), GNU `make' supports pattern-specific variable values. In this form, the variable is defined for any target that matches the pattern specified. If a target matches more than one pattern, all the matching pattern-specific variables are interpreted in the order in which they were defined in the makefile, and collected together into one set. Variables defined in this way are searched after any target-specific variables defined explicitly for that target, and before target-specific variables defined for the parent target. Set a pattern-specific variable value like this: PATTERN ... : VARIABLE-ASSIGNMENT or like this: PATTERN ... : override VARIABLE-ASSIGNMENT where PATTERN is a %-pattern. 
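For instance, a pattern-specific assignment can supply a default for a whole class of targets, while an explicit target-specific assignment, which is searched first, overrides it for a single file. This is a minimal sketch; the file names are invented:

     # Every object file is compiled with optimization ...
     %.o : CFLAGS = -O
     # ... but lib.o gets its own, explicitly target-specific, value.
     lib.o : CFLAGS = -O -fPIC

Here `lib.o' is built with `CFLAGS' set to `-O -fPIC', while every other `.o' target sees `-O'.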
As with target-specific variable values, multiple PATTERN values create a pattern-specific variable value for each pattern individually. The VARIABLE-ASSIGNMENT can be any valid form of assignment. Any command-line variable setting will take precedence, unless `override' is specified. For example: %.o : CFLAGS = -O will assign `CFLAGS' the value of `-O' for all targets matching the pattern `%.o'.  File: make.info, Node: Conditionals, Next: Functions, Prev: Using Variables, Up: Top 7 Conditional Parts of Makefiles ******************************** A "conditional" causes part of a makefile to be obeyed or ignored depending on the values of variables. Conditionals can compare the value of one variable to another, or the value of a variable to a constant string. Conditionals control what `make' actually "sees" in the makefile, so they _cannot_ be used to control shell commands at the time of execution. * Menu: * Conditional Example:: Example of a conditional * Conditional Syntax:: The syntax of conditionals. * Testing Flags:: Conditionals that test flags.  File: make.info, Node: Conditional Example, Next: Conditional Syntax, Prev: Conditionals, Up: Conditionals 7.1 Example of a Conditional ============================ The following example of a conditional tells `make' to use one set of libraries if the `CC' variable is `gcc', and a different set of libraries otherwise. It works by controlling which of two command lines will be used as the command for a rule. The result is that `CC=gcc' as an argument to `make' changes not only which compiler is used but also which libraries are linked. libs_for_gcc = -lgnu normal_libs = foo: $(objects) ifeq ($(CC),gcc) $(CC) -o foo $(objects) $(libs_for_gcc) else $(CC) -o foo $(objects) $(normal_libs) endif This conditional uses three directives: one `ifeq', one `else' and one `endif'. The `ifeq' directive begins the conditional, and specifies the condition. It contains two arguments, separated by a comma and surrounded by parentheses. Variable substitution is performed on both arguments and then they are compared. The lines of the makefile following the `ifeq' are obeyed if the two arguments match; otherwise they are ignored. The `else' directive causes the following lines to be obeyed if the previous conditional failed. In the example above, this means that the second alternative linking command is used whenever the first alternative is not used. It is optional to have an `else' in a conditional. The `endif' directive ends the conditional. Every conditional must end with an `endif'. Unconditional makefile text follows. As this example illustrates, conditionals work at the textual level: the lines of the conditional are treated as part of the makefile, or ignored, according to the condition. This is why the larger syntactic units of the makefile, such as rules, may cross the beginning or the end of the conditional. 
When the variable `CC' has the value `gcc', the above example has this effect: foo: $(objects) $(CC) -o foo $(objects) $(libs_for_gcc) When the variable `CC' has any other value, the effect is this: foo: $(objects) $(CC) -o foo $(objects) $(normal_libs) Equivalent results can be obtained in another way by conditionalizing a variable assignment and then using the variable unconditionally: libs_for_gcc = -lgnu normal_libs = ifeq ($(CC),gcc) libs=$(libs_for_gcc) else libs=$(normal_libs) endif foo: $(objects) $(CC) -o foo $(objects) $(libs)  File: make.info, Node: Conditional Syntax, Next: Testing Flags, Prev: Conditional Example, Up: Conditionals 7.2 Syntax of Conditionals ========================== The syntax of a simple conditional with no `else' is as follows: CONDITIONAL-DIRECTIVE TEXT-IF-TRUE endif The TEXT-IF-TRUE may be any lines of text, to be considered as part of the makefile if the condition is true. If the condition is false, no text is used instead. The syntax of a complex conditional is as follows: CONDITIONAL-DIRECTIVE TEXT-IF-TRUE else TEXT-IF-FALSE endif or: CONDITIONAL-DIRECTIVE TEXT-IF-ONE-IS-TRUE else CONDITIONAL-DIRECTIVE TEXT-IF-TRUE else TEXT-IF-FALSE endif There can be as many "`else' CONDITIONAL-DIRECTIVE" clauses as necessary. Once a given condition is true, TEXT-IF-TRUE is used and no other clause is used; if no condition is true then TEXT-IF-FALSE is used. The TEXT-IF-TRUE and TEXT-IF-FALSE can be any number of lines of text. The syntax of the CONDITIONAL-DIRECTIVE is the same whether the conditional is simple or complex; after an `else' or not. There are four different directives that test different conditions. Here is a table of them: `ifeq (ARG1, ARG2)' `ifeq 'ARG1' 'ARG2'' `ifeq "ARG1" "ARG2"' `ifeq "ARG1" 'ARG2'' `ifeq 'ARG1' "ARG2"' Expand all variable references in ARG1 and ARG2 and compare them. If they are identical, the TEXT-IF-TRUE is effective; otherwise, the TEXT-IF-FALSE, if any, is effective. Often you want to test if a variable has a non-empty value. When the value results from complex expansions of variables and functions, expansions you would consider empty may actually contain whitespace characters and thus are not seen as empty. However, you can use the `strip' function (*note Text Functions::) to avoid interpreting whitespace as a non-empty value. For example: ifeq ($(strip $(foo)),) TEXT-IF-EMPTY endif will evaluate TEXT-IF-EMPTY even if the expansion of `$(foo)' contains whitespace characters. `ifneq (ARG1, ARG2)' `ifneq 'ARG1' 'ARG2'' `ifneq "ARG1" "ARG2"' `ifneq "ARG1" 'ARG2'' `ifneq 'ARG1' "ARG2"' Expand all variable references in ARG1 and ARG2 and compare them. If they are different, the TEXT-IF-TRUE is effective; otherwise, the TEXT-IF-FALSE, if any, is effective. `ifdef VARIABLE-NAME' The `ifdef' form takes the _name_ of a variable as its argument, not a reference to a variable. The value of that variable has a non-empty value, the TEXT-IF-TRUE is effective; otherwise, the TEXT-IF-FALSE, if any, is effective. Variables that have never been defined have an empty value. The text VARIABLE-NAME is expanded, so it could be a variable or function that expands to the name of a variable. For example: bar = true foo = bar ifdef $(foo) frobozz = yes endif The variable reference `$(foo)' is expanded, yielding `bar', which is considered to be the name of a variable. The variable `bar' is not expanded, but its value is examined to determine if it is non-empty. Note that `ifdef' only tests whether a variable has a value. 
It does not expand the variable to see if that value is nonempty. Consequently, tests using `ifdef' return true for all definitions except those like `foo ='. To test for an empty value, use `ifeq ($(foo),)'. For example, bar = foo = $(bar) ifdef foo frobozz = yes else frobozz = no endif sets `frobozz' to `yes', while: foo = ifdef foo frobozz = yes else frobozz = no endif sets `frobozz' to `no'. `ifndef VARIABLE-NAME' If the variable VARIABLE-NAME has an empty value, the TEXT-IF-TRUE is effective; otherwise, the TEXT-IF-FALSE, if any, is effective. The rules for expansion and testing of VARIABLE-NAME are identical to the `ifdef' directive. Extra spaces are allowed and ignored at the beginning of the conditional directive line, but a tab is not allowed. (If the line begins with a tab, it will be considered a command for a rule.) Aside from this, extra spaces or tabs may be inserted with no effect anywhere except within the directive name or within an argument. A comment starting with `#' may appear at the end of the line. The other two directives that play a part in a conditional are `else' and `endif'. Each of these directives is written as one word, with no arguments. Extra spaces are allowed and ignored at the beginning of the line, and spaces or tabs at the end. A comment starting with `#' may appear at the end of the line. Conditionals affect which lines of the makefile `make' uses. If the condition is true, `make' reads the lines of the TEXT-IF-TRUE as part of the makefile; if the condition is false, `make' ignores those lines completely. It follows that syntactic units of the makefile, such as rules, may safely be split across the beginning or the end of the conditional. `make' evaluates conditionals when it reads a makefile. Consequently, you cannot use automatic variables in the tests of conditionals because they are not defined until commands are run (*note Automatic Variables::). To prevent intolerable confusion, it is not permitted to start a conditional in one makefile and end it in another. However, you may write an `include' directive within a conditional, provided you do not attempt to terminate the conditional inside the included file.  File: make.info, Node: Testing Flags, Prev: Conditional Syntax, Up: Conditionals 7.3 Conditionals that Test Flags ================================ You can write a conditional that tests `make' command flags such as `-t' by using the variable `MAKEFLAGS' together with the `findstring' function (*note Functions for String Substitution and Analysis: Text Functions.). This is useful when `touch' is not enough to make a file appear up to date. The `findstring' function determines whether one string appears as a substring of another. If you want to test for the `-t' flag, use `t' as the first string and the value of `MAKEFLAGS' as the other. For example, here is how to arrange to use `ranlib -t' to finish marking an archive file up to date: archive.a: ... ifneq (,$(findstring t,$(MAKEFLAGS))) +touch archive.a +ranlib -t archive.a else ranlib archive.a endif The `+' prefix marks those command lines as "recursive" so that they will be executed despite use of the `-t' flag. *Note Recursive Use of `make': Recursion.  File: make.info, Node: Functions, Next: Running, Prev: Conditionals, Up: Top 8 Functions for Transforming Text ********************************* "Functions" allow you to do text processing in the makefile to compute the files to operate on or the commands to use. 
You use a function in a "function call", where you give the name of the function and some text (the "arguments") for the function to operate on. The result of the function's processing is substituted into the makefile at the point of the call, just as a variable might be substituted. * Menu: * Syntax of Functions:: How to write a function call. * Text Functions:: General-purpose text manipulation functions. * File Name Functions:: Functions for manipulating file names. * Conditional Functions:: Functions that implement conditions. * Foreach Function:: Repeat some text with controlled variation. * Call Function:: Expand a user-defined function. * Value Function:: Return the un-expanded value of a variable. * Eval Function:: Evaluate the arguments as makefile syntax. * Origin Function:: Find where a variable got its value. * Flavor Function:: Find out the flavor of a variable. * Shell Function:: Substitute the output of a shell command. * Make Control Functions:: Functions that control how make runs.  File: make.info, Node: Syntax of Functions, Next: Text Functions, Prev: Functions, Up: Functions 8.1 Function Call Syntax ======================== A function call resembles a variable reference. It looks like this: $(FUNCTION ARGUMENTS) or like this: ${FUNCTION ARGUMENTS} Here FUNCTION is a function name; one of a short list of names that are part of `make'. You can also essentially create your own functions by using the `call' builtin function. The ARGUMENTS are the arguments of the function. They are separated from the function name by one or more spaces or tabs, and if there is more than one argument, then they are separated by commas. Such whitespace and commas are not part of an argument's value. The delimiters which you use to surround the function call, whether parentheses or braces, can appear in an argument only in matching pairs; the other kind of delimiters may appear singly. If the arguments themselves contain other function calls or variable references, it is wisest to use the same kind of delimiters for all the references; write `$(subst a,b,$(x))', not `$(subst a,b,${x})'. This is because it is clearer, and because only one type of delimiter is matched to find the end of the reference. The text written for each argument is processed by substitution of variables and function calls to produce the argument value, which is the text on which the function acts. The substitution is done in the order in which the arguments appear. Commas and unmatched parentheses or braces cannot appear in the text of an argument as written; leading spaces cannot appear in the text of the first argument as written. These characters can be put into the argument value by variable substitution. First define variables `comma' and `space' whose values are isolated comma and space characters, then substitute these variables where such characters are wanted, like this: comma:= , empty:= space:= $(empty) $(empty) foo:= a b c bar:= $(subst $(space),$(comma),$(foo)) # bar is now `a,b,c'. Here the `subst' function replaces each space with a comma, through the value of `foo', and substitutes the result.  File: make.info, Node: Text Functions, Next: File Name Functions, Prev: Syntax of Functions, Up: Functions 8.2 Functions for String Substitution and Analysis ================================================== Here are some functions that operate on strings: `$(subst FROM,TO,TEXT)' Performs a textual replacement on the text TEXT: each occurrence of FROM is replaced by TO. 
The result is substituted for the function call. For example, $(subst ee,EE,feet on the street) substitutes the string `fEEt on the strEEt'. `$(patsubst PATTERN,REPLACEMENT,TEXT)' Finds whitespace-separated words in TEXT that match PATTERN and replaces them with REPLACEMENT. Here PATTERN may contain a `%' which acts as a wildcard, matching any number of any characters within a word. If REPLACEMENT also contains a `%', the `%' is replaced by the text that matched the `%' in PATTERN. Only the first `%' in the PATTERN and REPLACEMENT is treated this way; any subsequent `%' is unchanged. `%' characters in `patsubst' function invocations can be quoted with preceding backslashes (`\'). Backslashes that would otherwise quote `%' characters can be quoted with more backslashes. Backslashes that quote `%' characters or other backslashes are removed from the pattern before it is compared file names or has a stem substituted into it. Backslashes that are not in danger of quoting `%' characters go unmolested. For example, the pattern `the\%weird\\%pattern\\' has `the%weird\' preceding the operative `%' character, and `pattern\\' following it. The final two backslashes are left alone because they cannot affect any `%' character. Whitespace between words is folded into single space characters; leading and trailing whitespace is discarded. For example, $(patsubst %.c,%.o,x.c.c bar.c) produces the value `x.c.o bar.o'. Substitution references (*note Substitution References: Substitution Refs.) are a simpler way to get the effect of the `patsubst' function: $(VAR:PATTERN=REPLACEMENT) is equivalent to $(patsubst PATTERN,REPLACEMENT,$(VAR)) The second shorthand simplifies one of the most common uses of `patsubst': replacing the suffix at the end of file names. $(VAR:SUFFIX=REPLACEMENT) is equivalent to $(patsubst %SUFFIX,%REPLACEMENT,$(VAR)) For example, you might have a list of object files: objects = foo.o bar.o baz.o To get the list of corresponding source files, you could simply write: $(objects:.o=.c) instead of using the general form: $(patsubst %.o,%.c,$(objects)) `$(strip STRING)' Removes leading and trailing whitespace from STRING and replaces each internal sequence of one or more whitespace characters with a single space. Thus, `$(strip a b c )' results in `a b c'. The function `strip' can be very useful when used in conjunction with conditionals. When comparing something with the empty string `' using `ifeq' or `ifneq', you usually want a string of just whitespace to match the empty string (*note Conditionals::). Thus, the following may fail to have the desired results: .PHONY: all ifneq "$(needs_made)" "" all: $(needs_made) else all:;@echo 'Nothing to make!' endif Replacing the variable reference `$(needs_made)' with the function call `$(strip $(needs_made))' in the `ifneq' directive would make it more robust. `$(findstring FIND,IN)' Searches IN for an occurrence of FIND. If it occurs, the value is FIND; otherwise, the value is empty. You can use this function in a conditional to test for the presence of a specific substring in a given string. Thus, the two examples, $(findstring a,a b c) $(findstring a,b c) produce the values `a' and `' (the empty string), respectively. *Note Testing Flags::, for a practical application of `findstring'. `$(filter PATTERN...,TEXT)' Returns all whitespace-separated words in TEXT that _do_ match any of the PATTERN words, removing any words that _do not_ match. The patterns are written using `%', just like the patterns used in the `patsubst' function above. 
The `filter' function can be used to separate out different types of strings (such as file names) in a variable. For example: sources := foo.c bar.c baz.s ugh.h foo: $(sources) cc $(filter %.c %.s,$(sources)) -o foo says that `foo' depends of `foo.c', `bar.c', `baz.s' and `ugh.h' but only `foo.c', `bar.c' and `baz.s' should be specified in the command to the compiler. `$(filter-out PATTERN...,TEXT)' Returns all whitespace-separated words in TEXT that _do not_ match any of the PATTERN words, removing the words that _do_ match one or more. This is the exact opposite of the `filter' function. For example, given: objects=main1.o foo.o main2.o bar.o mains=main1.o main2.o the following generates a list which contains all the object files not in `mains': $(filter-out $(mains),$(objects)) `$(sort LIST)' Sorts the words of LIST in lexical order, removing duplicate words. The output is a list of words separated by single spaces. Thus, $(sort foo bar lose) returns the value `bar foo lose'. Incidentally, since `sort' removes duplicate words, you can use it for this purpose even if you don't care about the sort order. `$(word N,TEXT)' Returns the Nth word of TEXT. The legitimate values of N start from 1. If N is bigger than the number of words in TEXT, the value is empty. For example, $(word 2, foo bar baz) returns `bar'. `$(wordlist S,E,TEXT)' Returns the list of words in TEXT starting with word S and ending with word E (inclusive). The legitimate values of S start from 1; E may start from 0. If S is bigger than the number of words in TEXT, the value is empty. If E is bigger than the number of words in TEXT, words up to the end of TEXT are returned. If S is greater than E, nothing is returned. For example, $(wordlist 2, 3, foo bar baz) returns `bar baz'. `$(words TEXT)' Returns the number of words in TEXT. Thus, the last word of TEXT is `$(word $(words TEXT),TEXT)'. `$(firstword NAMES...)' The argument NAMES is regarded as a series of names, separated by whitespace. The value is the first name in the series. The rest of the names are ignored. For example, $(firstword foo bar) produces the result `foo'. Although `$(firstword TEXT)' is the same as `$(word 1,TEXT)', the `firstword' function is retained for its simplicity. `$(lastword NAMES...)' The argument NAMES is regarded as a series of names, separated by whitespace. The value is the last name in the series. For example, $(lastword foo bar) produces the result `bar'. Although `$(lastword TEXT)' is the same as `$(word $(words TEXT),TEXT)', the `lastword' function was added for its simplicity and better performance. Here is a realistic example of the use of `subst' and `patsubst'. Suppose that a makefile uses the `VPATH' variable to specify a list of directories that `make' should search for prerequisite files (*note `VPATH' Search Path for All Prerequisites: General Search.). This example shows how to tell the C compiler to search for header files in the same list of directories. The value of `VPATH' is a list of directories separated by colons, such as `src:../headers'. First, the `subst' function is used to change the colons to spaces: $(subst :, ,$(VPATH)) This produces `src ../headers'. Then `patsubst' is used to turn each directory name into a `-I' flag. These can be added to the value of the variable `CFLAGS', which is passed automatically to the C compiler, like this: override CFLAGS += $(patsubst %,-I%,$(subst :, ,$(VPATH))) The effect is to append the text `-Isrc -I../headers' to the previously given value of `CFLAGS'. 
The `override' directive is used so that the new value is assigned even if the previous value of `CFLAGS' was specified with a command argument (*note The `override' Directive: Override Directive.).  File: make.info, Node: File Name Functions, Next: Conditional Functions, Prev: Text Functions, Up: Functions 8.3 Functions for File Names ============================ Several of the built-in expansion functions relate specifically to taking apart file names or lists of file names. Each of the following functions performs a specific transformation on a file name. The argument of the function is regarded as a series of file names, separated by whitespace. (Leading and trailing whitespace is ignored.) Each file name in the series is transformed in the same way and the results are concatenated with single spaces between them. `$(dir NAMES...)' Extracts the directory-part of each file name in NAMES. The directory-part of the file name is everything up through (and including) the last slash in it. If the file name contains no slash, the directory part is the string `./'. For example, $(dir src/foo.c hacks) produces the result `src/ ./'. `$(notdir NAMES...)' Extracts all but the directory-part of each file name in NAMES. If the file name contains no slash, it is left unchanged. Otherwise, everything through the last slash is removed from it. A file name that ends with a slash becomes an empty string. This is unfortunate, because it means that the result does not always have the same number of whitespace-separated file names as the argument had; but we do not see any other valid alternative. For example, $(notdir src/foo.c hacks) produces the result `foo.c hacks'. `$(suffix NAMES...)' Extracts the suffix of each file name in NAMES. If the file name contains a period, the suffix is everything starting with the last period. Otherwise, the suffix is the empty string. This frequently means that the result will be empty when NAMES is not, and if NAMES contains multiple file names, the result may contain fewer file names. For example, $(suffix src/foo.c src-1.0/bar.c hacks) produces the result `.c .c'. `$(basename NAMES...)' Extracts all but the suffix of each file name in NAMES. If the file name contains a period, the basename is everything starting up to (and not including) the last period. Periods in the directory part are ignored. If there is no period, the basename is the entire file name. For example, $(basename src/foo.c src-1.0/bar hacks) produces the result `src/foo src-1.0/bar hacks'. `$(addsuffix SUFFIX,NAMES...)' The argument NAMES is regarded as a series of names, separated by whitespace; SUFFIX is used as a unit. The value of SUFFIX is appended to the end of each individual name and the resulting larger names are concatenated with single spaces between them. For example, $(addsuffix .c,foo bar) produces the result `foo.c bar.c'. `$(addprefix PREFIX,NAMES...)' The argument NAMES is regarded as a series of names, separated by whitespace; PREFIX is used as a unit. The value of PREFIX is prepended to the front of each individual name and the resulting larger names are concatenated with single spaces between them. For example, $(addprefix src/,foo bar) produces the result `src/foo src/bar'. `$(join LIST1,LIST2)' Concatenates the two arguments word by word: the two first words (one from each argument) concatenated form the first word of the result, the two second words form the second word of the result, and so on. So the Nth word of the result comes from the Nth word of each argument. 
If one argument has more words than the other, the extra words are copied unchanged into the result. For example, `$(join a b,.c .o)' produces `a.c b.o'. Whitespace between the words in the lists is not preserved; it is replaced with a single space. This function can merge the results of the `dir' and `notdir' functions, to produce the original list of files which was given to those two functions. `$(wildcard PATTERN)' The argument PATTERN is a file name pattern, typically containing wildcard characters (as in shell file name patterns). The result of `wildcard' is a space-separated list of the names of existing files that match the pattern. *Note Using Wildcard Characters in File Names: Wildcards. `$(realpath NAMES...)' For each file name in NAMES return the canonical absolute name. A canonical name does not contain any `.' or `..' components, nor any repeated path separators (`/') or symlinks. In case of a failure the empty string is returned. Consult the `realpath(3)' documentation for a list of possible failure causes. `$(abspath NAMES...)' For each file name in NAMES return an absolute name that does not contain any `.' or `..' components, nor any repeated path separators (`/'). Note that, in contrast to the `realpath' function, `abspath' does not resolve symlinks and does not require the file names to refer to an existing file or directory. Use the `wildcard' function to test for existence. File: make.info, Node: Conditional Functions, Next: Foreach Function, Prev: File Name Functions, Up: Functions 8.4 Functions for Conditionals ============================== There are three functions that provide conditional expansion. A key aspect of these functions is that not all of the arguments are expanded initially. Only those arguments which need to be expanded will be expanded. `$(if CONDITION,THEN-PART[,ELSE-PART])' The `if' function provides support for conditional expansion in a functional context (as opposed to the GNU `make' makefile conditionals such as `ifeq' (*note Syntax of Conditionals: Conditional Syntax.)). The first argument, CONDITION, first has all preceding and trailing whitespace stripped, then is expanded. If it expands to any non-empty string, then the condition is considered to be true. If it expands to an empty string, the condition is considered to be false. If the condition is true then the second argument, THEN-PART, is evaluated and this is used as the result of the evaluation of the entire `if' function. If the condition is false then the third argument, ELSE-PART, is evaluated and this is the result of the `if' function. If there is no third argument, the `if' function evaluates to nothing (the empty string). Note that only one of the THEN-PART or the ELSE-PART will be evaluated, never both. Thus, either can contain side-effects (such as `shell' function calls, etc.). `$(or CONDITION1[,CONDITION2[,CONDITION3...]])' The `or' function provides a "short-circuiting" OR operation. Each argument is expanded, in order. If an argument expands to a non-empty string the processing stops and the result of the expansion is that string. If, after all arguments are expanded, all of them are false (empty), then the result of the expansion is the empty string. `$(and CONDITION1[,CONDITION2[,CONDITION3...]])' The `and' function provides a "short-circuiting" AND operation. Each argument is expanded, in order. If an argument expands to an empty string the processing stops and the result of the expansion is the empty string.
If all arguments expand to a non-empty string then the result of the expansion is the expansion of the last argument.  File: make.info, Node: Foreach Function, Next: Call Function, Prev: Conditional Functions, Up: Functions 8.5 The `foreach' Function ========================== The `foreach' function is very different from other functions. It causes one piece of text to be used repeatedly, each time with a different substitution performed on it. It resembles the `for' command in the shell `sh' and the `foreach' command in the C-shell `csh'. The syntax of the `foreach' function is: $(foreach VAR,LIST,TEXT) The first two arguments, VAR and LIST, are expanded before anything else is done; note that the last argument, TEXT, is *not* expanded at the same time. Then for each word of the expanded value of LIST, the variable named by the expanded value of VAR is set to that word, and TEXT is expanded. Presumably TEXT contains references to that variable, so its expansion will be different each time. The result is that TEXT is expanded as many times as there are whitespace-separated words in LIST. The multiple expansions of TEXT are concatenated, with spaces between them, to make the result of `foreach'. This simple example sets the variable `files' to the list of all files in the directories in the list `dirs': dirs := a b c d files := $(foreach dir,$(dirs),$(wildcard $(dir)/*)) Here TEXT is `$(wildcard $(dir)/*)'. The first repetition finds the value `a' for `dir', so it produces the same result as `$(wildcard a/*)'; the second repetition produces the result of `$(wildcard b/*)'; and the third, that of `$(wildcard c/*)'. This example has the same result (except for setting `dirs') as the following example: files := $(wildcard a/* b/* c/* d/*) When TEXT is complicated, you can improve readability by giving it a name, with an additional variable: find_files = $(wildcard $(dir)/*) dirs := a b c d files := $(foreach dir,$(dirs),$(find_files)) Here we use the variable `find_files' this way. We use plain `=' to define a recursively-expanding variable, so that its value contains an actual function call to be reexpanded under the control of `foreach'; a simply-expanded variable would not do, since `wildcard' would be called only once at the time of defining `find_files'. The `foreach' function has no permanent effect on the variable VAR; its value and flavor after the `foreach' function call are the same as they were beforehand. The other values which are taken from LIST are in effect only temporarily, during the execution of `foreach'. The variable VAR is a simply-expanded variable during the execution of `foreach'. If VAR was undefined before the `foreach' function call, it is undefined after the call. *Note The Two Flavors of Variables: Flavors. You must take care when using complex variable expressions that result in variable names because many strange things are valid variable names, but are probably not what you intended. For example, files := $(foreach Esta escrito en espanol!,b c ch,$(find_files)) might be useful if the value of `find_files' references the variable whose name is `Esta escrito en espanol!' (es un nombre bastante largo, no?), but it is more likely to be a mistake.  File: make.info, Node: Call Function, Next: Value Function, Prev: Foreach Function, Up: Functions 8.6 The `call' Function ======================= The `call' function is unique in that it can be used to create new parameterized functions. 
You can write a complex expression as the value of a variable, then use `call' to expand it with different values. The syntax of the `call' function is: $(call VARIABLE,PARAM,PARAM,...) When `make' expands this function, it assigns each PARAM to temporary variables `$(1)', `$(2)', etc. The variable `$(0)' will contain VARIABLE. There is no maximum number of parameter arguments. There is no minimum, either, but it doesn't make sense to use `call' with no parameters. Then VARIABLE is expanded as a `make' variable in the context of these temporary assignments. Thus, any reference to `$(1)' in the value of VARIABLE will resolve to the first PARAM in the invocation of `call'. Note that VARIABLE is the _name_ of a variable, not a _reference_ to that variable. Therefore you would not normally use a `$' or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.) If VARIABLE is the name of a builtin function, the builtin function is always invoked (even if a `make' variable by that name also exists). The `call' function expands the PARAM arguments before assigning them to temporary variables. This means that VARIABLE values containing references to builtin functions that have special expansion rules, like `foreach' or `if', may not work as you expect. Some examples may make this clearer. This macro simply reverses its arguments: reverse = $(2) $(1) foo = $(call reverse,a,b) Here FOO will contain `b a'. This one is slightly more interesting: it defines a macro to search for the first instance of a program in `PATH': pathsearch = $(firstword $(wildcard $(addsuffix /$(1),$(subst :, ,$(PATH))))) LS := $(call pathsearch,ls) Now the variable LS contains `/bin/ls' or similar. The `call' function can be nested. Each recursive invocation gets its own local values for `$(1)', etc. that mask the values of higher-level `call'. For example, here is an implementation of a "map" function: map = $(foreach a,$(2),$(call $(1),$(a))) Now you can MAP a function that normally takes only one argument, such as `origin', to multiple values in one step: o = $(call map,origin,o map MAKE) and end up with O containing something like `file file default'. A final caution: be careful when adding whitespace to the arguments to `call'. As with other functions, any whitespace contained in the second and subsequent arguments is kept; this can cause strange effects. It's generally safest to remove all extraneous whitespace when providing parameters to `call'.  File: make.info, Node: Value Function, Next: Eval Function, Prev: Call Function, Up: Functions 8.7 The `value' Function ======================== The `value' function provides a way for you to use the value of a variable _without_ having it expanded. Please note that this does not undo expansions which have already occurred; for example if you create a simply expanded variable its value is expanded during the definition; in that case the `value' function will return the same result as using the variable directly. The syntax of the `value' function is: $(value VARIABLE) Note that VARIABLE is the _name_ of a variable; not a _reference_ to that variable. Therefore you would not normally use a `$' or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.) The result of this function is a string containing the value of VARIABLE, without any expansion occurring. 
For example, in this makefile: FOO = $PATH all: @echo $(FOO) @echo $(value FOO) The first output line would be `ATH', since the "$P" would be expanded as a `make' variable, while the second output line would be the current value of your `$PATH' environment variable, since the `value' function avoided the expansion. The `value' function is most often used in conjunction with the `eval' function (*note Eval Function::).  File: make.info, Node: Eval Function, Next: Origin Function, Prev: Value Function, Up: Functions 8.8 The `eval' Function ======================= The `eval' function is very special: it allows you to define new makefile constructs that are not constant; which are the result of evaluating other variables and functions. The argument to the `eval' function is expanded, then the results of that expansion are parsed as makefile syntax. The expanded results can define new `make' variables, targets, implicit or explicit rules, etc. The result of the `eval' function is always the empty string; thus, it can be placed virtually anywhere in a makefile without causing syntax errors. It's important to realize that the `eval' argument is expanded _twice_; first by the `eval' function, then the results of that expansion are expanded again when they are parsed as makefile syntax. This means you may need to provide extra levels of escaping for "$" characters when using `eval'. The `value' function (*note Value Function::) can sometimes be useful in these situations, to circumvent unwanted expansions. Here is an example of how `eval' can be used; this example combines a number of concepts and other functions. Although it might seem overly complex to use `eval' in this example, rather than just writing out the rules, consider two things: first, the template definition (in `PROGRAM_template') could need to be much more complex than it is here; and second, you might put the complex, "generic" part of this example into another makefile, then include it in all the individual makefiles. Now your individual makefiles are quite straightforward. PROGRAMS = server client server_OBJS = server.o server_priv.o server_access.o server_LIBS = priv protocol client_OBJS = client.o client_api.o client_mem.o client_LIBS = protocol # Everything after this is generic .PHONY: all all: $(PROGRAMS) define PROGRAM_template $(1): $$($(1)_OBJS) $$($(1)_LIBS:%=-l%) ALL_OBJS += $$($(1)_OBJS) endef $(foreach prog,$(PROGRAMS),$(eval $(call PROGRAM_template,$(prog)))) $(PROGRAMS): $(LINK.o) $^ $(LDLIBS) -o $@ clean: rm -f $(ALL_OBJS) $(PROGRAMS)  File: make.info, Node: Origin Function, Next: Flavor Function, Prev: Eval Function, Up: Functions 8.9 The `origin' Function ========================= The `origin' function is unlike most other functions in that it does not operate on the values of variables; it tells you something _about_ a variable. Specifically, it tells you where it came from. The syntax of the `origin' function is: $(origin VARIABLE) Note that VARIABLE is the _name_ of a variable to inquire about; not a _reference_ to that variable. Therefore you would not normally use a `$' or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.) The result of this function is a string telling you how the variable VARIABLE was defined: `undefined' if VARIABLE was never defined. `default' if VARIABLE has a default definition, as is usual with `CC' and so on. *Note Variables Used by Implicit Rules: Implicit Variables. 
Note that if you have redefined a default variable, the `origin' function will return the origin of the later definition. `environment' if VARIABLE was defined as an environment variable and the `-e' option is _not_ turned on (*note Summary of Options: Options Summary.). `environment override' if VARIABLE was defined as an environment variable and the `-e' option _is_ turned on (*note Summary of Options: Options Summary.). `file' if VARIABLE was defined in a makefile. `command line' if VARIABLE was defined on the command line. `override' if VARIABLE was defined with an `override' directive in a makefile (*note The `override' Directive: Override Directive.). `automatic' if VARIABLE is an automatic variable defined for the execution of the commands for each rule (*note Automatic Variables::). This information is primarily useful (other than for your curiosity) to determine if you want to believe the value of a variable. For example, suppose you have a makefile `foo' that includes another makefile `bar'. You want a variable `bletch' to be defined in `bar' if you run the command `make -f bar', even if the environment contains a definition of `bletch'. However, if `foo' defined `bletch' before including `bar', you do not want to override that definition. This could be done by using an `override' directive in `foo', giving that definition precedence over the later definition in `bar'; unfortunately, the `override' directive would also override any command line definitions. So, `bar' could include: ifdef bletch ifeq "$(origin bletch)" "environment" bletch = barf, gag, etc. endif endif If `bletch' has been defined from the environment, this will redefine it. If you want to override a previous definition of `bletch' if it came from the environment, even under `-e', you could instead write: ifneq "$(findstring environment,$(origin bletch))" "" bletch = barf, gag, etc. endif Here the redefinition takes place if `$(origin bletch)' returns either `environment' or `environment override'. *Note Functions for String Substitution and Analysis: Text Functions.  File: make.info, Node: Flavor Function, Next: Shell Function, Prev: Origin Function, Up: Functions 8.10 The `flavor' Function ========================== The `flavor' function is unlike most other functions (and like `origin' function) in that it does not operate on the values of variables; it tells you something _about_ a variable. Specifically, it tells you the flavor of a variable (*note The Two Flavors of Variables: Flavors.). The syntax of the `flavor' function is: $(flavor VARIABLE) Note that VARIABLE is the _name_ of a variable to inquire about; not a _reference_ to that variable. Therefore you would not normally use a `$' or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.) The result of this function is a string that identifies the flavor of the variable VARIABLE: `undefined' if VARIABLE was never defined. `recursive' if VARIABLE is a recursively expanded variable. `simple' if VARIABLE is a simply expanded variable.  File: make.info, Node: Shell Function, Next: Make Control Functions, Prev: Flavor Function, Up: Functions 8.11 The `shell' Function ========================= The `shell' function is unlike any other function other than the `wildcard' function (*note The Function `wildcard': Wildcard Function.) in that it communicates with the world outside of `make'. 
The `shell' function performs the same function that backquotes (``') perform in most shells: it does "command expansion". This means that it takes as an argument a shell command and evaluates to the output of the command. The only processing `make' does on the result is to convert each newline (or carriage-return / newline pair) to a single space. If there is a trailing (carriage-return and) newline it will simply be removed. The commands run by calls to the `shell' function are run when the function calls are expanded (*note How `make' Reads a Makefile: Reading Makefiles.). Because this function involves spawning a new shell, you should carefully consider the performance implications of using the `shell' function within recursively expanded variables vs. simply expanded variables (*note The Two Flavors of Variables: Flavors.). Here are some examples of the use of the `shell' function: contents := $(shell cat foo) sets `contents' to the contents of the file `foo', with a space (rather than a newline) separating each line. files := $(shell echo *.c) sets `files' to the expansion of `*.c'. Unless `make' is using a very strange shell, this has the same result as `$(wildcard *.c)' (as long as at least one `.c' file exists).  File: make.info, Node: Make Control Functions, Prev: Shell Function, Up: Functions 8.12 Functions That Control Make ================================ These functions control the way make runs. Generally, they are used to provide information to the user of the makefile or to cause make to stop if some sort of environmental error is detected. `$(error TEXT...)' Generates a fatal error where the message is TEXT. Note that the error is generated whenever this function is evaluated. So, if you put it inside a command script or on the right side of a recursive variable assignment, it won't be evaluated until later. The TEXT will be expanded before the error is generated. For example, ifdef ERROR1 $(error error is $(ERROR1)) endif will generate a fatal error during the read of the makefile if the `make' variable `ERROR1' is defined. Or, ERR = $(error found an error!) .PHONY: err err: ; $(ERR) will generate a fatal error while `make' is running, if the `err' target is invoked. `$(warning TEXT...)' This function works similarly to the `error' function, above, except that `make' doesn't exit. Instead, TEXT is expanded and the resulting message is displayed, but processing of the makefile continues. The result of the expansion of this function is the empty string. `$(info TEXT...)' This function does nothing more than print its (expanded) argument(s) to standard output. No makefile name or line number is added. The result of the expansion of this function is the empty string.  File: make.info, Node: Running, Next: Implicit Rules, Prev: Functions, Up: Top 9 How to Run `make' ******************* A makefile that says how to recompile a program can be used in more than one way. The simplest use is to recompile every file that is out of date. Usually, makefiles are written so that if you run `make' with no arguments, it does just that. But you might want to update only some of the files; you might want to use a different compiler or different compiler options; you might want just to find out which files are out of date without changing them. By giving arguments when you run `make', you can do any of these things and many others. The exit status of `make' is always one of three values: `0' The exit status is zero if `make' is successful. 
`2' The exit status is two if `make' encounters any errors. It will print messages describing the particular errors. `1' The exit status is one if you use the `-q' flag and `make' determines that some target is not already up to date. *Note Instead of Executing the Commands: Instead of Execution. * Menu: * Makefile Arguments:: How to specify which makefile to use. * Goals:: How to use goal arguments to specify which parts of the makefile to use. * Instead of Execution:: How to use mode flags to specify what kind of thing to do with the commands in the makefile other than simply execute them. * Avoiding Compilation:: How to avoid recompiling certain files. * Overriding:: How to override a variable to specify an alternate compiler and other things. * Testing:: How to proceed past some errors, to test compilation. * Options Summary:: Summary of Options  File: make.info, Node: Makefile Arguments, Next: Goals, Prev: Running, Up: Running 9.1 Arguments to Specify the Makefile ===================================== The way to specify the name of the makefile is with the `-f' or `--file' option (`--makefile' also works). For example, `-f altmake' says to use the file `altmake' as the makefile. If you use the `-f' flag several times and follow each `-f' with an argument, all the specified files are used jointly as makefiles. If you do not use the `-f' or `--file' flag, the default is to try `GNUmakefile', `makefile', and `Makefile', in that order, and use the first of these three which exists or can be made (*note Writing Makefiles: Makefiles.).  File: make.info, Node: Goals, Next: Instead of Execution, Prev: Makefile Arguments, Up: Running 9.2 Arguments to Specify the Goals ================================== The "goals" are the targets that `make' should strive ultimately to update. Other targets are updated as well if they appear as prerequisites of goals, or prerequisites of prerequisites of goals, etc. By default, the goal is the first target in the makefile (not counting targets that start with a period). Therefore, makefiles are usually written so that the first target is for compiling the entire program or programs they describe. If the first rule in the makefile has several targets, only the first target in the rule becomes the default goal, not the whole list. You can manage the selection of the default goal from within your makefile using the `.DEFAULT_GOAL' variable (*note Other Special Variables: Special Variables.). You can also specify a different goal or goals with command-line arguments to `make'. Use the name of the goal as an argument. If you specify several goals, `make' processes each of them in turn, in the order you name them. Any target in the makefile may be specified as a goal (unless it starts with `-' or contains an `=', in which case it will be parsed as a switch or variable definition, respectively). Even targets not in the makefile may be specified, if `make' can find implicit rules that say how to make them. `Make' will set the special variable `MAKECMDGOALS' to the list of goals you specified on the command line. If no goals were given on the command line, this variable is empty. Note that this variable should be used only in special circumstances. 
An example of appropriate use is to avoid including `.d' files during `clean' rules (*note Automatic Prerequisites::), so `make' won't create them only to immediately remove them again: sources = foo.c bar.c ifneq ($(MAKECMDGOALS),clean) include $(sources:.c=.d) endif One use of specifying a goal is if you want to compile only a part of the program, or only one of several programs. Specify as a goal each file that you wish to remake. For example, consider a directory containing several programs, with a makefile that starts like this: .PHONY: all all: size nm ld ar as If you are working on the program `size', you might want to say `make size' so that only the files of that program are recompiled. Another use of specifying a goal is to make files that are not normally made. For example, there may be a file of debugging output, or a version of the program that is compiled specially for testing, which has a rule in the makefile but is not a prerequisite of the default goal. Another use of specifying a goal is to run the commands associated with a phony target (*note Phony Targets::) or empty target (*note Empty Target Files to Record Events: Empty Targets.). Many makefiles contain a phony target named `clean' which deletes everything except source files. Naturally, this is done only if you request it explicitly with `make clean'. Following is a list of typical phony and empty target names. *Note Standard Targets::, for a detailed list of all the standard target names which GNU software packages use. `all' Make all the top-level targets the makefile knows about. `clean' Delete all files that are normally created by running `make'. `mostlyclean' Like `clean', but may refrain from deleting a few files that people normally don't want to recompile. For example, the `mostlyclean' target for GCC does not delete `libgcc.a', because recompiling it is rarely necessary and takes a lot of time. `distclean' `realclean' `clobber' Any of these targets might be defined to delete _more_ files than `clean' does. For example, this would delete configuration files or links that you would normally create as preparation for compilation, even if the makefile itself cannot create these files. `install' Copy the executable file into a directory that users typically search for commands; copy any auxiliary files that the executable uses into the directories where it will look for them. `print' Print listings of the source files that have changed. `tar' Create a tar file of the source files. `shar' Create a shell archive (shar file) of the source files. `dist' Create a distribution file of the source files. This might be a tar file, or a shar file, or a compressed version of one of the above, or even more than one of the above. `TAGS' Update a tags table for this program. `check' `test' Perform self tests on the program this makefile builds.  File: make.info, Node: Instead of Execution, Next: Avoiding Compilation, Prev: Goals, Up: Running 9.3 Instead of Executing the Commands ===================================== The makefile tells `make' how to tell whether a target is up to date, and how to update each target. But updating the targets is not always what you want. Certain options specify other activities for `make'. `-n' `--just-print' `--dry-run' `--recon' "No-op". The activity is to print what commands would be used to make the targets up to date, but not actually execute them. `-t' `--touch' "Touch". The activity is to mark the targets as up to date without actually changing them. 
In other words, `make' pretends to compile the targets but does not really change their contents. `-q' `--question' "Question". The activity is to find out silently whether the targets are up to date already; but execute no commands in either case. In other words, neither compilation nor output will occur. `-W FILE' `--what-if=FILE' `--assume-new=FILE' `--new-file=FILE' "What if". Each `-W' flag is followed by a file name. The given files' modification times are recorded by `make' as being the present time, although the actual modification times remain the same. You can use the `-W' flag in conjunction with the `-n' flag to see what would happen if you were to modify specific files. With the `-n' flag, `make' prints the commands that it would normally execute but does not execute them. With the `-t' flag, `make' ignores the commands in the rules and uses (in effect) the command `touch' for each target that needs to be remade. The `touch' command is also printed, unless `-s' or `.SILENT' is used. For speed, `make' does not actually invoke the program `touch'. It does the work directly. With the `-q' flag, `make' prints nothing and executes no commands, but the exit status code it returns is zero if and only if the targets to be considered are already up to date. If the exit status is one, then some updating needs to be done. If `make' encounters an error, the exit status is two, so you can distinguish an error from a target that is not up to date. It is an error to use more than one of these three flags in the same invocation of `make'. The `-n', `-t', and `-q' options do not affect command lines that begin with `+' characters or contain the strings `$(MAKE)' or `${MAKE}'. Note that only the line containing the `+' character or the strings `$(MAKE)' or `${MAKE}' is run regardless of these options. Other lines in the same rule are not run unless they too begin with `+' or contain `$(MAKE)' or `${MAKE}' (*Note How the `MAKE' Variable Works: MAKE Variable.) The `-W' flag provides two features: * If you also use the `-n' or `-q' flag, you can see what `make' would do if you were to modify some files. * Without the `-n' or `-q' flag, when `make' is actually executing commands, the `-W' flag can direct `make' to act as if some files had been modified, without actually modifying the files. Note that the options `-p' and `-v' allow you to obtain other information about `make' or about the makefiles in use (*note Summary of Options: Options Summary.).  File: make.info, Node: Avoiding Compilation, Next: Overriding, Prev: Instead of Execution, Up: Running 9.4 Avoiding Recompilation of Some Files ======================================== Sometimes you may have changed a source file but you do not want to recompile all the files that depend on it. For example, suppose you add a macro or a declaration to a header file that many other files depend on. Being conservative, `make' assumes that any change in the header file requires recompilation of all dependent files, but you know that they do not need to be recompiled and you would rather not waste the time waiting for them to compile. If you anticipate the problem before changing the header file, you can use the `-t' flag. This flag tells `make' not to run the commands in the rules, but rather to mark the target up to date by changing its last-modification date. You would follow this procedure: 1. Use the command `make' to recompile the source files that really need recompilation, ensuring that the object files are up-to-date before you begin. 2. 
Make the changes in the header files. 3. Use the command `make -t' to mark all the object files as up to date. The next time you run `make', the changes in the header files will not cause any recompilation. If you have already changed the header file at a time when some files do need recompilation, it is too late to do this. Instead, you can use the `-o FILE' flag, which marks a specified file as "old" (*note Summary of Options: Options Summary.). This means that the file itself will not be remade, and nothing else will be remade on its account. Follow this procedure: 1. Recompile the source files that need compilation for reasons independent of the particular header file, with `make -o HEADERFILE'. If several header files are involved, use a separate `-o' option for each header file. 2. Touch all the object files with `make -t'.  File: make.info, Node: Overriding, Next: Testing, Prev: Avoiding Compilation, Up: Running 9.5 Overriding Variables ======================== An argument that contains `=' specifies the value of a variable: `V=X' sets the value of the variable V to X. If you specify a value in this way, all ordinary assignments of the same variable in the makefile are ignored; we say they have been "overridden" by the command line argument. The most common way to use this facility is to pass extra flags to compilers. For example, in a properly written makefile, the variable `CFLAGS' is included in each command that runs the C compiler, so a file `foo.c' would be compiled something like this: cc -c $(CFLAGS) foo.c Thus, whatever value you set for `CFLAGS' affects each compilation that occurs. The makefile probably specifies the usual value for `CFLAGS', like this: CFLAGS=-g Each time you run `make', you can override this value if you wish. For example, if you say `make CFLAGS='-g -O'', each C compilation will be done with `cc -c -g -O'. (This also illustrates how you can use quoting in the shell to enclose spaces and other special characters in the value of a variable when you override it.) The variable `CFLAGS' is only one of many standard variables that exist just so that you can change them this way. *Note Variables Used by Implicit Rules: Implicit Variables, for a complete list. You can also program the makefile to look at additional variables of your own, giving the user the ability to control other aspects of how the makefile works by changing the variables. When you override a variable with a command argument, you can define either a recursively-expanded variable or a simply-expanded variable. The examples shown above make a recursively-expanded variable; to make a simply-expanded variable, write `:=' instead of `='. But, unless you want to include a variable reference or function call in the _value_ that you specify, it makes no difference which kind of variable you create. There is one way that the makefile can change a variable that you have overridden. This is to use the `override' directive, which is a line that looks like this: `override VARIABLE = VALUE' (*note The `override' Directive: Override Directive.).  File: make.info, Node: Testing, Next: Options Summary, Prev: Overriding, Up: Running 9.6 Testing the Compilation of a Program ======================================== Normally, when an error happens in executing a shell command, `make' gives up immediately, returning a nonzero status. No further commands are executed for any target. The error implies that the goal cannot be correctly remade, and `make' reports this as soon as it knows. 
When you are compiling a program that you have just changed, this is not what you want. Instead, you would rather that `make' try compiling every file that can be tried, to show you as many compilation errors as possible. On these occasions, you should use the `-k' or `--keep-going' flag. This tells `make' to continue to consider the other prerequisites of the pending targets, remaking them if necessary, before it gives up and returns nonzero status. For example, after an error in compiling one object file, `make -k' will continue compiling other object files even though it already knows that linking them will be impossible. In addition to continuing after failed shell commands, `make -k' will continue as much as possible after discovering that it does not know how to make a target or prerequisite file. This will always cause an error message, but without `-k', it is a fatal error (*note Summary of Options: Options Summary.). The usual behavior of `make' assumes that your purpose is to get the goals up to date; once `make' learns that this is impossible, it might as well report the failure immediately. The `-k' flag says that the real purpose is to test as much as possible of the changes made in the program, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs' `M-x compile' command passes the `-k' flag by default.  File: make.info, Node: Options Summary, Prev: Testing, Up: Running 9.7 Summary of Options ====================== Here is a table of all the options `make' understands: `-b' `-m' These options are ignored for compatibility with other versions of `make'. `-B' `--always-make' Consider all targets out-of-date. GNU `make' proceeds to consider targets and their prerequisites using the normal algorithms; however, all targets so considered are always remade regardless of the status of their prerequisites. To avoid infinite recursion, if `MAKE_RESTARTS' (*note Other Special Variables: Special Variables.) is set to a number greater than 0 this option is disabled when considering whether to remake makefiles (*note How Makefiles Are Remade: Remaking Makefiles.). `-C DIR' `--directory=DIR' Change to directory DIR before reading the makefiles. If multiple `-C' options are specified, each is interpreted relative to the previous one: `-C / -C etc' is equivalent to `-C /etc'. This is typically used with recursive invocations of `make' (*note Recursive Use of `make': Recursion.). `-d' Print debugging information in addition to normal processing. The debugging information says which files are being considered for remaking, which file-times are being compared and with what results, which files actually need to be remade, which implicit rules are considered and which are applied--everything interesting about how `make' decides what to do. The `-d' option is equivalent to `--debug=a' (see below). `--debug[=OPTIONS]' Print debugging information in addition to normal processing. Various levels and types of output can be chosen. With no arguments, print the "basic" level of debugging. Possible arguments are below; only the first character is considered, and values must be comma- or space-separated. `a (all)' All types of debugging output are enabled. This is equivalent to using `-d'. `b (basic)' Basic debugging prints each target that was found to be out-of-date, and whether the build was successful or not. 
`v (verbose)' A level above `basic'; includes messages about which makefiles were parsed, prerequisites that did not need to be rebuilt, etc. This option also enables `basic' messages. `i (implicit)' Prints messages describing the implicit rule searches for each target. This option also enables `basic' messages. `j (jobs)' Prints messages giving details on the invocation of specific subcommands. `m (makefile)' By default, the above messages are not enabled while trying to remake the makefiles. This option enables messages while rebuilding makefiles, too. Note that the `all' option does enable this option. This option also enables `basic' messages. `-e' `--environment-overrides' Give variables taken from the environment precedence over variables from makefiles. *Note Variables from the Environment: Environment. `-f FILE' `--file=FILE' `--makefile=FILE' Read the file named FILE as a makefile. *Note Writing Makefiles: Makefiles. `-h' `--help' Remind you of the options that `make' understands and then exit. `-i' `--ignore-errors' Ignore all errors in commands executed to remake files. *Note Errors in Commands: Errors. `-I DIR' `--include-dir=DIR' Specifies a directory DIR to search for included makefiles. *Note Including Other Makefiles: Include. If several `-I' options are used to specify several directories, the directories are searched in the order specified. `-j [JOBS]' `--jobs[=JOBS]' Specifies the number of jobs (commands) to run simultaneously. With no argument, `make' runs as many jobs simultaneously as possible. If there is more than one `-j' option, the last one is effective. *Note Parallel Execution: Parallel, for more information on how commands are run. Note that this option is ignored on MS-DOS. `-k' `--keep-going' Continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other prerequisites of these targets can be processed all the same. *Note Testing the Compilation of a Program: Testing. `-l [LOAD]' `--load-average[=LOAD]' `--max-load[=LOAD]' Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least LOAD (a floating-point number). With no argument, removes a previous load limit. *Note Parallel Execution: Parallel. `-L' `--check-symlink-times' On systems that support symbolic links, this option causes `make' to consider the timestamps on any symbolic links in addition to the timestamp on the file referenced by those links. When this option is provided, the most recent timestamp among the file and the symbolic links is taken as the modification time for this target file. `-n' `--just-print' `--dry-run' `--recon' Print the commands that would be executed, but do not execute them. *Note Instead of Executing the Commands: Instead of Execution. `-o FILE' `--old-file=FILE' `--assume-old=FILE' Do not remake the file FILE even if it is older than its prerequisites, and do not remake anything on account of changes in FILE. Essentially the file is treated as very old and its rules are ignored. *Note Avoiding Recompilation of Some Files: Avoiding Compilation. `-p' `--print-data-base' Print the data base (rules and variable values) that results from reading the makefiles; then execute as usual or as otherwise specified. This also prints the version information given by the `-v' switch (see below). To print the data base without trying to remake any files, use `make -qp'. To print the data base of predefined rules and variables, use `make -p -f /dev/null'. 
The data base output contains filename and linenumber information for command and variable definitions, so it can be a useful debugging tool in complex environments. `-q' `--question' "Question mode". Do not run any commands, or print anything; just return an exit status that is zero if the specified targets are already up to date, one if any remaking is required, or two if an error is encountered. *Note Instead of Executing the Commands: Instead of Execution. `-r' `--no-builtin-rules' Eliminate use of the built-in implicit rules (*note Using Implicit Rules: Implicit Rules.). You can still define your own by writing pattern rules (*note Defining and Redefining Pattern Rules: Pattern Rules.). The `-r' option also clears out the default list of suffixes for suffix rules (*note Old-Fashioned Suffix Rules: Suffix Rules.). But you can still define your own suffixes with a rule for `.SUFFIXES', and then define your own suffix rules. Note that only _rules_ are affected by the `-r' option; default variables remain in effect (*note Variables Used by Implicit Rules: Implicit Variables.); see the `-R' option below. `-R' `--no-builtin-variables' Eliminate use of the built-in rule-specific variables (*note Variables Used by Implicit Rules: Implicit Variables.). You can still define your own, of course. The `-R' option also automatically enables the `-r' option (see above), since it doesn't make sense to have implicit rules without any definitions for the variables that they use. `-s' `--silent' `--quiet' Silent operation; do not print the commands as they are executed. *Note Command Echoing: Echoing. `-S' `--no-keep-going' `--stop' Cancel the effect of the `-k' option. This is never necessary except in a recursive `make' where `-k' might be inherited from the top-level `make' via `MAKEFLAGS' (*note Recursive Use of `make': Recursion.) or if you set `-k' in `MAKEFLAGS' in your environment. `-t' `--touch' Touch files (mark them up to date without really changing them) instead of running their commands. This is used to pretend that the commands were done, in order to fool future invocations of `make'. *Note Instead of Executing the Commands: Instead of Execution. `-v' `--version' Print the version of the `make' program plus a copyright, a list of authors, and a notice that there is no warranty; then exit. `-w' `--print-directory' Print a message containing the working directory both before and after executing the makefile. This may be useful for tracking down errors from complicated nests of recursive `make' commands. *Note Recursive Use of `make': Recursion. (In practice, you rarely need to specify this option since `make' does it for you; see *Note The `--print-directory' Option: -w Option.) `--no-print-directory' Disable printing of the working directory under `-w'. This option is useful when `-w' is turned on automatically, but you do not want to see the extra messages. *Note The `--print-directory' Option: -w Option. `-W FILE' `--what-if=FILE' `--new-file=FILE' `--assume-new=FILE' Pretend that the target FILE has just been modified. When used with the `-n' flag, this shows you what would happen if you were to modify that file. Without `-n', it is almost the same as running a `touch' command on the given file before running `make', except that the modification time is changed only in the imagination of `make'. *Note Instead of Executing the Commands: Instead of Execution. `--warn-undefined-variables' Issue a warning message whenever `make' sees a reference to an undefined variable. 
This can be helpful when you are trying to debug makefiles which use variables in complex ways.  File: make.info, Node: Implicit Rules, Next: Archives, Prev: Running, Up: Top 10 Using Implicit Rules *********************** Certain standard ways of remaking target files are used very often. For example, one customary way to make an object file is from a C source file using the C compiler, `cc'. "Implicit rules" tell `make' how to use customary techniques so that you do not have to specify them in detail when you want to use them. For example, there is an implicit rule for C compilation. File names determine which implicit rules are run. For example, C compilation typically takes a `.c' file and makes a `.o' file. So `make' applies the implicit rule for C compilation when it sees this combination of file name endings. A chain of implicit rules can apply in sequence; for example, `make' will remake a `.o' file from a `.y' file by way of a `.c' file. The built-in implicit rules use several variables in their commands so that, by changing the values of the variables, you can change the way the implicit rule works. For example, the variable `CFLAGS' controls the flags given to the C compiler by the implicit rule for C compilation. You can define your own implicit rules by writing "pattern rules". "Suffix rules" are a more limited way to define implicit rules. Pattern rules are more general and clearer, but suffix rules are retained for compatibility. * Menu: * Using Implicit:: How to use an existing implicit rule to get the commands for updating a file. * Catalogue of Rules:: A list of built-in implicit rules. * Implicit Variables:: How to change what predefined rules do. * Chained Rules:: How to use a chain of implicit rules. * Pattern Rules:: How to define new implicit rules. * Last Resort:: How to define commands for rules which cannot find any. * Suffix Rules:: The old-fashioned style of implicit rule. * Implicit Rule Search:: The precise algorithm for applying implicit rules.  File: make.info, Node: Using Implicit, Next: Catalogue of Rules, Prev: Implicit Rules, Up: Implicit Rules 10.1 Using Implicit Rules ========================= To allow `make' to find a customary method for updating a target file, all you have to do is refrain from specifying commands yourself. Either write a rule with no command lines, or don't write a rule at all. Then `make' will figure out which implicit rule to use based on which kind of source file exists or can be made. For example, suppose the makefile looks like this: foo : foo.o bar.o cc -o foo foo.o bar.o $(CFLAGS) $(LDFLAGS) Because you mention `foo.o' but do not give a rule for it, `make' will automatically look for an implicit rule that tells how to update it. This happens whether or not the file `foo.o' currently exists. If an implicit rule is found, it can supply both commands and one or more prerequisites (the source files). You would want to write a rule for `foo.o' with no command lines if you need to specify additional prerequisites, such as header files, that the implicit rule cannot supply. Each implicit rule has a target pattern and prerequisite patterns. There may be many implicit rules with the same target pattern. For example, numerous rules make `.o' files: one, from a `.c' file with the C compiler; another, from a `.p' file with the Pascal compiler; and so on. The rule that actually applies is the one whose prerequisites exist or can be made. 
So, if you have a file `foo.c', `make' will run the C compiler; otherwise, if you have a file `foo.p', `make' will run the Pascal compiler; and so on. Of course, when you write the makefile, you know which implicit rule you want `make' to use, and you know it will choose that one because you know which possible prerequisite files are supposed to exist. *Note Catalogue of Implicit Rules: Catalogue of Rules, for a catalogue of all the predefined implicit rules. Above, we said an implicit rule applies if the required prerequisites "exist or can be made". A file "can be made" if it is mentioned explicitly in the makefile as a target or a prerequisite, or if an implicit rule can be recursively found for how to make it. When an implicit prerequisite is the result of another implicit rule, we say that "chaining" is occurring. *Note Chains of Implicit Rules: Chained Rules. In general, `make' searches for an implicit rule for each target, and for each double-colon rule, that has no commands. A file that is mentioned only as a prerequisite is considered a target whose rule specifies nothing, so implicit rule search happens for it. *Note Implicit Rule Search Algorithm: Implicit Rule Search, for the details of how the search is done. Note that explicit prerequisites do not influence implicit rule search. For example, consider this explicit rule: foo.o: foo.p The prerequisite on `foo.p' does not necessarily mean that `make' will remake `foo.o' according to the implicit rule to make an object file, a `.o' file, from a Pascal source file, a `.p' file. For example, if `foo.c' also exists, the implicit rule to make an object file from a C source file is used instead, because it appears before the Pascal rule in the list of predefined implicit rules (*note Catalogue of Implicit Rules: Catalogue of Rules.). If you do not want an implicit rule to be used for a target that has no commands, you can give that target empty commands by writing a semicolon (*note Defining Empty Commands: Empty Commands.).  File: make.info, Node: Catalogue of Rules, Next: Implicit Variables, Prev: Using Implicit, Up: Implicit Rules 10.2 Catalogue of Implicit Rules ================================ Here is a catalogue of predefined implicit rules which are always available unless the makefile explicitly overrides or cancels them. *Note Canceling Implicit Rules: Canceling Rules, for information on canceling or overriding an implicit rule. The `-r' or `--no-builtin-rules' option cancels all predefined rules. This manual only documents the default rules available on POSIX-based operating systems. Other operating systems, such as VMS, Windows, OS/2, etc. may have different sets of default rules. To see the full list of default rules and variables available in your version of GNU `make', run `make -p' in a directory with no makefile. Not all of these rules will always be defined, even when the `-r' option is not given. Many of the predefined implicit rules are implemented in `make' as suffix rules, so which ones will be defined depends on the "suffix list" (the list of prerequisites of the special target `.SUFFIXES'). The default suffix list is: `.out', `.a', `.ln', `.o', `.c', `.cc', `.C', `.cpp', `.p', `.f', `.F', `.r', `.y', `.l', `.s', `.S', `.mod', `.sym', `.def', `.h', `.info', `.dvi', `.tex', `.texinfo', `.texi', `.txinfo', `.w', `.ch' `.web', `.sh', `.elc', `.el'. All of the implicit rules described below whose prerequisites have one of these suffixes are actually suffix rules. 
If you modify the suffix list, the only predefined suffix rules in effect will be those named by one or two of the suffixes that are on the list you specify; rules whose suffixes fail to be on the list are disabled. *Note Old-Fashioned Suffix Rules: Suffix Rules, for full details on suffix rules. Compiling C programs `N.o' is made automatically from `N.c' with a command of the form `$(CC) -c $(CPPFLAGS) $(CFLAGS)'. Compiling C++ programs `N.o' is made automatically from `N.cc', `N.cpp', or `N.C' with a command of the form `$(CXX) -c $(CPPFLAGS) $(CXXFLAGS)'. We encourage you to use the suffix `.cc' for C++ source files instead of `.C'. Compiling Pascal programs `N.o' is made automatically from `N.p' with the command `$(PC) -c $(PFLAGS)'. Compiling Fortran and Ratfor programs `N.o' is made automatically from `N.r', `N.F' or `N.f' by running the Fortran compiler. The precise command used is as follows: `.f' `$(FC) -c $(FFLAGS)'. `.F' `$(FC) -c $(FFLAGS) $(CPPFLAGS)'. `.r' `$(FC) -c $(FFLAGS) $(RFLAGS)'. Preprocessing Fortran and Ratfor programs `N.f' is made automatically from `N.r' or `N.F'. This rule runs just the preprocessor to convert a Ratfor or preprocessable Fortran program into a strict Fortran program. The precise command used is as follows: `.F' `$(FC) -F $(CPPFLAGS) $(FFLAGS)'. `.r' `$(FC) -F $(FFLAGS) $(RFLAGS)'. Compiling Modula-2 programs `N.sym' is made from `N.def' with a command of the form `$(M2C) $(M2FLAGS) $(DEFFLAGS)'. `N.o' is made from `N.mod'; the form is: `$(M2C) $(M2FLAGS) $(MODFLAGS)'. Assembling and preprocessing assembler programs `N.o' is made automatically from `N.s' by running the assembler, `as'. The precise command is `$(AS) $(ASFLAGS)'. `N.s' is made automatically from `N.S' by running the C preprocessor, `cpp'. The precise command is `$(CPP) $(CPPFLAGS)'. Linking a single object file `N' is made automatically from `N.o' by running the linker (usually called `ld') via the C compiler. The precise command used is `$(CC) $(LDFLAGS) N.o $(LOADLIBES) $(LDLIBS)'. This rule does the right thing for a simple program with only one source file. It will also do the right thing if there are multiple object files (presumably coming from various other source files), one of which has a name matching that of the executable file. Thus, x: y.o z.o when `x.c', `y.c' and `z.c' all exist will execute: cc -c x.c -o x.o cc -c y.c -o y.o cc -c z.c -o z.o cc x.o y.o z.o -o x rm -f x.o rm -f y.o rm -f z.o In more complicated cases, such as when there is no object file whose name derives from the executable file name, you must write an explicit command for linking. Each kind of file automatically made into `.o' object files will be automatically linked by using the compiler (`$(CC)', `$(FC)' or `$(PC)'; the C compiler `$(CC)' is used to assemble `.s' files) without the `-c' option. This could be done by using the `.o' object files as intermediates, but it is faster to do the compiling and linking in one step, so that's how it's done. Yacc for C programs `N.c' is made automatically from `N.y' by running Yacc with the command `$(YACC) $(YFLAGS)'. Lex for C programs `N.c' is made automatically from `N.l' by running Lex. The actual command is `$(LEX) $(LFLAGS)'. Lex for Ratfor programs `N.r' is made automatically from `N.l' by running Lex. The actual command is `$(LEX) $(LFLAGS)'. 
The convention of using the same suffix `.l' for all Lex files regardless of whether they produce C code or Ratfor code makes it impossible for `make' to determine automatically which of the two languages you are using in any particular case. If `make' is called upon to remake an object file from a `.l' file, it must guess which compiler to use. It will guess the C compiler, because that is more common. If you are using Ratfor, make sure `make' knows this by mentioning `N.r' in the makefile. Or, if you are using Ratfor exclusively, with no C files, remove `.c' from the list of implicit rule suffixes with:

     .SUFFIXES:
     .SUFFIXES: .o .r .f .l ...

Making Lint Libraries from C, Yacc, or Lex programs
     `N.ln' is made from `N.c' by running `lint'. The precise command is `$(LINT) $(LINTFLAGS) $(CPPFLAGS) -i'. The same command is used on the C code produced from `N.y' or `N.l'.

TeX and Web
     `N.dvi' is made from `N.tex' with the command `$(TEX)'. `N.tex' is made from `N.web' with `$(WEAVE)', or from `N.w' (and from `N.ch' if it exists or can be made) with `$(CWEAVE)'. `N.p' is made from `N.web' with `$(TANGLE)' and `N.c' is made from `N.w' (and from `N.ch' if it exists or can be made) with `$(CTANGLE)'.

Texinfo and Info
     `N.dvi' is made from `N.texinfo', `N.texi', or `N.txinfo', with the command `$(TEXI2DVI) $(TEXI2DVI_FLAGS)'. `N.info' is made from `N.texinfo', `N.texi', or `N.txinfo', with the command `$(MAKEINFO) $(MAKEINFO_FLAGS)'.

RCS
     Any file `N' is extracted if necessary from an RCS file named either `N,v' or `RCS/N,v'. The precise command used is `$(CO) $(COFLAGS)'. `N' will not be extracted from RCS if it already exists, even if the RCS file is newer. The rules for RCS are terminal (*note Match-Anything Pattern Rules: Match-Anything Rules.), so RCS files cannot be generated from another source; they must actually exist.

SCCS
     Any file `N' is extracted if necessary from an SCCS file named either `s.N' or `SCCS/s.N'. The precise command used is `$(GET) $(GFLAGS)'. The rules for SCCS are terminal (*note Match-Anything Pattern Rules: Match-Anything Rules.), so SCCS files cannot be generated from another source; they must actually exist.

     For the benefit of SCCS, a file `N' is copied from `N.sh' and made executable (by everyone). This is for shell scripts that are checked into SCCS. Since RCS preserves the execution permission of a file, you do not need to use this feature with RCS.

     We recommend that you avoid the use of SCCS. RCS is widely held to be superior, and is also free. By choosing free software in place of comparable (or inferior) proprietary software, you support the free software movement.

Usually, you want to change only the variables listed in the table above, which are documented in the following section. However, the commands in built-in implicit rules actually use variables such as `COMPILE.c', `LINK.p', and `PREPROCESS.S', whose values contain the commands listed above. `make' follows the convention that the rule to compile a `.X' source file uses the variable `COMPILE.X'. Similarly, the rule to produce an executable from a `.X' file uses `LINK.X'; and the rule to preprocess a `.X' file uses `PREPROCESS.X'.

Every rule that produces an object file uses the variable `OUTPUT_OPTION'. `make' defines this variable either to contain `-o $@', or to be empty, depending on a compile-time option. You need the `-o' option to ensure that the output goes into the right file when the source file is in a different directory, as when using `VPATH' (*note Directory Search::).
However, compilers on some systems do not accept a `-o' switch for object files. If you use such a system, and use `VPATH', some compilations will put their output in the wrong place. A possible workaround for this problem is to give `OUTPUT_OPTION' the value `; mv $*.o $@'.

File: make.info, Node: Implicit Variables, Next: Chained Rules, Prev: Catalogue of Rules, Up: Implicit Rules

10.3 Variables Used by Implicit Rules
=====================================

The commands in built-in implicit rules make liberal use of certain predefined variables. You can alter the values of these variables in the makefile, with arguments to `make', or in the environment to alter how the implicit rules work without redefining the rules themselves. You can cancel all variables used by implicit rules with the `-R' or `--no-builtin-variables' option.

For example, the command used to compile a C source file actually says `$(CC) -c $(CFLAGS) $(CPPFLAGS)'. The default values of the variables used are `cc' and nothing, resulting in the command `cc -c'. By redefining `CC' to `ncc', you could cause `ncc' to be used for all C compilations performed by the implicit rule. By redefining `CFLAGS' to be `-g', you could pass the `-g' option to each compilation. _All_ implicit rules that do C compilation use `$(CC)' to get the program name for the compiler and _all_ include `$(CFLAGS)' among the arguments given to the compiler.

The variables used in implicit rules fall into two classes: those that are names of programs (like `CC') and those that contain arguments for the programs (like `CFLAGS'). (The "name of a program" may also contain some command arguments, but it must start with an actual executable program name.) If a variable value contains more than one argument, separate them with spaces.

The following tables describe some of the more commonly-used predefined variables. This list is not exhaustive, and the default values shown here may not be what `make' selects for your environment. To see the complete list of predefined variables for your instance of GNU `make' you can run `make -p' in a directory with no makefiles.

Here is a table of some of the more common variables used as names of programs in built-in rules:

`AR'
     Archive-maintaining program; default `ar'.

`AS'
     Program for compiling assembly files; default `as'.

`CC'
     Program for compiling C programs; default `cc'.

`CO'
     Program for extracting a file from RCS; default `co'.

`CXX'
     Program for compiling C++ programs; default `g++'.

`CPP'
     Program for running the C preprocessor, with results to standard output; default `$(CC) -E'.

`FC'
     Program for compiling or preprocessing Fortran and Ratfor programs; default `f77'.

`GET'
     Program for extracting a file from SCCS; default `get'.

`LEX'
     Program to use to turn Lex grammars into source code; default `lex'.

`YACC'
     Program to use to turn Yacc grammars into source code; default `yacc'.

`LINT'
     Program to use to run lint on source code; default `lint'.

`M2C'
     Program to use to compile Modula-2 source code; default `m2c'.

`PC'
     Program for compiling Pascal programs; default `pc'.

`MAKEINFO'
     Program to convert a Texinfo source file into an Info file; default `makeinfo'.

`TEX'
     Program to make TeX DVI files from TeX source; default `tex'.

`TEXI2DVI'
     Program to make TeX DVI files from Texinfo source; default `texi2dvi'.

`WEAVE'
     Program to translate Web into TeX; default `weave'.

`CWEAVE'
     Program to translate C Web into TeX; default `cweave'.
`TANGLE'
     Program to translate Web into Pascal; default `tangle'.

`CTANGLE'
     Program to translate C Web into C; default `ctangle'.

`RM'
     Command to remove a file; default `rm -f'.

Here is a table of variables whose values are additional arguments for the programs above. The default values for all of these are the empty string, unless otherwise noted.

`ARFLAGS'
     Flags to give the archive-maintaining program; default `rv'.

`ASFLAGS'
     Extra flags to give to the assembler (when explicitly invoked on a `.s' or `.S' file).

`CFLAGS'
     Extra flags to give to the C compiler.

`CXXFLAGS'
     Extra flags to give to the C++ compiler.

`COFLAGS'
     Extra flags to give to the RCS `co' program.

`CPPFLAGS'
     Extra flags to give to the C preprocessor and programs that use it (the C and Fortran compilers).

`FFLAGS'
     Extra flags to give to the Fortran compiler.

`GFLAGS'
     Extra flags to give to the SCCS `get' program.

`LDFLAGS'
     Extra flags to give to compilers when they are supposed to invoke the linker, `ld'.

`LFLAGS'
     Extra flags to give to Lex.

`YFLAGS'
     Extra flags to give to Yacc.

`PFLAGS'
     Extra flags to give to the Pascal compiler.

`RFLAGS'
     Extra flags to give to the Fortran compiler for Ratfor programs.

`LINTFLAGS'
     Extra flags to give to lint.

File: make.info, Node: Chained Rules, Next: Pattern Rules, Prev: Implicit Variables, Up: Implicit Rules

10.4 Chains of Implicit Rules
=============================

Sometimes a file can be made by a sequence of implicit rules. For example, a file `N.o' could be made from `N.y' by running first Yacc and then `cc'. Such a sequence is called a "chain".

If the file `N.c' exists, or is mentioned in the makefile, no special searching is required: `make' finds that the object file can be made by C compilation from `N.c'; later on, when considering how to make `N.c', the rule for running Yacc is used. Ultimately both `N.c' and `N.o' are updated.

However, even if `N.c' does not exist and is not mentioned, `make' knows how to envision it as the missing link between `N.o' and `N.y'! In this case, `N.c' is called an "intermediate file". Once `make' has decided to use the intermediate file, it is entered in the data base as if it had been mentioned in the makefile, along with the implicit rule that says how to create it.

Intermediate files are remade using their rules just like all other files. But intermediate files are treated differently in two ways.

The first difference is what happens if the intermediate file does not exist. If an ordinary file B does not exist, and `make' considers a target that depends on B, it invariably creates B and then updates the target from B. But if B is an intermediate file, then `make' can leave well enough alone. It won't bother updating B, or the ultimate target, unless some prerequisite of B is newer than that target or there is some other reason to update that target.

The second difference is that if `make' _does_ create B in order to update something else, it deletes B later on after it is no longer needed. Therefore, an intermediate file which did not exist before `make' also does not exist after `make'. `make' reports the deletion to you by printing a `rm -f' command showing which file it is deleting.

Ordinarily, a file cannot be intermediate if it is mentioned in the makefile as a target or prerequisite. However, you can explicitly mark a file as intermediate by listing it as a prerequisite of the special target `.INTERMEDIATE'. This takes effect even if the file is mentioned explicitly in some other way.
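As a minimal sketch (the names `parse.y', `parse.c' and `parse.o' are purely illustrative, and a `parse.y' source is assumed to exist), a makefile could mention `parse.c' explicitly and still have `make' treat it as intermediate, so that the generated C source built by the chained Yacc and C rules is deleted once `parse.o' exists:

     parse.o: parse.c
     .INTERMEDIATE: parse.c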
You can prevent automatic deletion of an intermediate file by marking it as a "secondary" file. To do this, list it as a prerequisite of the special target `.SECONDARY'. When a file is secondary, `make' will not create the file merely because it does not already exist, but `make' does not automatically delete the file. Marking a file as secondary also marks it as intermediate.

You can list the target pattern of an implicit rule (such as `%.o') as a prerequisite of the special target `.PRECIOUS' to preserve intermediate files made by implicit rules whose target patterns match that file's name; see *Note Interrupts::.

A chain can involve more than two implicit rules. For example, it is possible to make a file `foo' from `RCS/foo.y,v' by running RCS, Yacc and `cc'. Then both `foo.y' and `foo.c' are intermediate files that are deleted at the end.

No single implicit rule can appear more than once in a chain. This means that `make' will not even consider such a ridiculous thing as making `foo' from `foo.o.o' by running the linker twice. This constraint has the added benefit of preventing any infinite loop in the search for an implicit rule chain.

There are some special implicit rules to optimize certain cases that would otherwise be handled by rule chains. For example, making `foo' from `foo.c' could be handled by compiling and linking with separate chained rules, using `foo.o' as an intermediate file. But what actually happens is that a special rule for this case does the compilation and linking with a single `cc' command. The optimized rule is used in preference to the step-by-step chain because it comes earlier in the ordering of rules.

File: make.info, Node: Pattern Rules, Next: Last Resort, Prev: Chained Rules, Up: Implicit Rules

10.5 Defining and Redefining Pattern Rules
==========================================

You define an implicit rule by writing a "pattern rule". A pattern rule looks like an ordinary rule, except that its target contains the character `%' (exactly one of them). The target is considered a pattern for matching file names; the `%' can match any nonempty substring, while other characters match only themselves.
The prerequisites likewise use `%' to show how their names relate to the target name. Thus, a pattern rule `%.o : %.c' says how to make any file `STEM.o' from another file `STEM.c'. Note that expansion using `%' in pattern rules occurs *after* any variable or function expansions, which take place when the makefile is read. *Note How to Use Variables: Using Variables, and *Note Functions for Transforming Text: Functions. * Menu: * Pattern Intro:: An introduction to pattern rules. * Pattern Examples:: Examples of pattern rules. * Automatic Variables:: How to use automatic variables in the commands of implicit rules. * Pattern Match:: How patterns match. * Match-Anything Rules:: Precautions you should take prior to defining rules that can match any target file whatever. * Canceling Rules:: How to override or cancel built-in rules.  File: make.info, Node: Pattern Intro, Next: Pattern Examples, Prev: Pattern Rules, Up: Pattern Rules 10.5.1 Introduction to Pattern Rules ------------------------------------ A pattern rule contains the character `%' (exactly one of them) in the target; otherwise, it looks exactly like an ordinary rule. The target is a pattern for matching file names; the `%' matches any nonempty substring, while other characters match only themselves. For example, `%.c' as a pattern matches any file name that ends in `.c'. `s.%.c' as a pattern matches any file name that starts with `s.', ends in `.c' and is at least five characters long. (There must be at least one character to match the `%'.) The substring that the `%' matches is called the "stem". `%' in a prerequisite of a pattern rule stands for the same stem that was matched by the `%' in the target. In order for the pattern rule to apply, its target pattern must match the file name under consideration and all of its prerequisites (after pattern substitution) must name files that exist or can be made. These files become prerequisites of the target. Thus, a rule of the form %.o : %.c ; COMMAND... specifies how to make a file `N.o', with another file `N.c' as its prerequisite, provided that `N.c' exists or can be made. There may also be prerequisites that do not use `%'; such a prerequisite attaches to every file made by this pattern rule. These unvarying prerequisites are useful occasionally. A pattern rule need not have any prerequisites that contain `%', or in fact any prerequisites at all. Such a rule is effectively a general wildcard. It provides a way to make any file that matches the target pattern. *Note Last Resort::. Pattern rules may have more than one target. Unlike normal rules, this does not act as many different rules with the same prerequisites and commands. If a pattern rule has multiple targets, `make' knows that the rule's commands are responsible for making all of the targets. The commands are executed only once to make all the targets. When searching for a pattern rule to match a target, the target patterns of a rule other than the one that matches the target in need of a rule are incidental: `make' worries only about giving commands and prerequisites to the file presently in question. However, when this file's commands are run, the other targets are marked as having been updated themselves. The order in which pattern rules appear in the makefile is important since this is the order in which they are considered. Of equally applicable rules, only the first one found is used. The rules you write take precedence over those that are built in. 
Note however, that a rule whose prerequisites actually exist or are mentioned always takes priority over a rule with prerequisites that must be made by chaining other implicit rules.  File: make.info, Node: Pattern Examples, Next: Automatic Variables, Prev: Pattern Intro, Up: Pattern Rules 10.5.2 Pattern Rule Examples ---------------------------- Here are some examples of pattern rules actually predefined in `make'. First, the rule that compiles `.c' files into `.o' files: %.o : %.c $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@ defines a rule that can make any file `X.o' from `X.c'. The command uses the automatic variables `$@' and `$<' to substitute the names of the target file and the source file in each case where the rule applies (*note Automatic Variables::). Here is a second built-in rule: % :: RCS/%,v $(CO) $(COFLAGS) $< defines a rule that can make any file `X' whatsoever from a corresponding file `X,v' in the subdirectory `RCS'. Since the target is `%', this rule will apply to any file whatever, provided the appropriate prerequisite file exists. The double colon makes the rule "terminal", which means that its prerequisite may not be an intermediate file (*note Match-Anything Pattern Rules: Match-Anything Rules.). This pattern rule has two targets: %.tab.c %.tab.h: %.y bison -d $< This tells `make' that the command `bison -d X.y' will make both `X.tab.c' and `X.tab.h'. If the file `foo' depends on the files `parse.tab.o' and `scan.o' and the file `scan.o' depends on the file `parse.tab.h', when `parse.y' is changed, the command `bison -d parse.y' will be executed only once, and the prerequisites of both `parse.tab.o' and `scan.o' will be satisfied. (Presumably the file `parse.tab.o' will be recompiled from `parse.tab.c' and the file `scan.o' from `scan.c', while `foo' is linked from `parse.tab.o', `scan.o', and its other prerequisites, and it will execute happily ever after.)  File: make.info, Node: Automatic Variables, Next: Pattern Match, Prev: Pattern Examples, Up: Pattern Rules 10.5.3 Automatic Variables -------------------------- Suppose you are writing a pattern rule to compile a `.c' file into a `.o' file: how do you write the `cc' command so that it operates on the right source file name? You cannot write the name in the command, because the name is different each time the implicit rule is applied. What you do is use a special feature of `make', the "automatic variables". These variables have values computed afresh for each rule that is executed, based on the target and prerequisites of the rule. In this example, you would use `$@' for the object file name and `$<' for the source file name. It's very important that you recognize the limited scope in which automatic variable values are available: they only have values within the command script. In particular, you cannot use them anywhere within the target list of a rule; they have no value there and will expand to the empty string. Also, they cannot be accessed directly within the prerequisite list of a rule. A common mistake is attempting to use `$@' within the prerequisites list; this will not work. However, there is a special feature of GNU `make', secondary expansion (*note Secondary Expansion::), which will allow automatic variable values to be used in prerequisite lists. Here is a table of automatic variables: `$@' The file name of the target of the rule. If the target is an archive member, then `$@' is the name of the archive file. 
In a pattern rule that has multiple targets (*note Introduction to Pattern Rules: Pattern Intro.), `$@' is the name of whichever target caused the rule's commands to be run. `$%' The target member name, when the target is an archive member. *Note Archives::. For example, if the target is `foo.a(bar.o)' then `$%' is `bar.o' and `$@' is `foo.a'. `$%' is empty when the target is not an archive member. `$<' The name of the first prerequisite. If the target got its commands from an implicit rule, this will be the first prerequisite added by the implicit rule (*note Implicit Rules::). `$?' The names of all the prerequisites that are newer than the target, with spaces between them. For prerequisites which are archive members, only the member named is used (*note Archives::). `$^' The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the member named is used (*note Archives::). A target has only one prerequisite on each other file it depends on, no matter how many times each file is listed as a prerequisite. So if you list a prerequisite more than once for a target, the value of `$^' contains just one copy of the name. This list does *not* contain any of the order-only prerequisites; for those see the `$|' variable, below. `$+' This is like `$^', but prerequisites listed more than once are duplicated in the order they were listed in the makefile. This is primarily useful for use in linking commands where it is meaningful to repeat library file names in a particular order. `$|' The names of all the order-only prerequisites, with spaces between them. `$*' The stem with which an implicit rule matches (*note How Patterns Match: Pattern Match.). If the target is `dir/a.foo.b' and the target pattern is `a.%.b' then the stem is `dir/foo'. The stem is useful for constructing names of related files. In a static pattern rule, the stem is part of the file name that matched the `%' in the target pattern. In an explicit rule, there is no stem; so `$*' cannot be determined in that way. Instead, if the target name ends with a recognized suffix (*note Old-Fashioned Suffix Rules: Suffix Rules.), `$*' is set to the target name minus the suffix. For example, if the target name is `foo.c', then `$*' is set to `foo', since `.c' is a suffix. GNU `make' does this bizarre thing only for compatibility with other implementations of `make'. You should generally avoid using `$*' except in implicit rules or static pattern rules. If the target name in an explicit rule does not end with a recognized suffix, `$*' is set to the empty string for that rule. `$?' is useful even in explicit rules when you wish to operate on only the prerequisites that have changed. For example, suppose that an archive named `lib' is supposed to contain copies of several object files. This rule copies just the changed object files into the archive: lib: foo.o bar.o lose.o win.o ar r lib $? Of the variables listed above, four have values that are single file names, and three have values that are lists of file names. These seven have variants that get just the file's directory name or just the file name within the directory. The variant variables' names are formed by appending `D' or `F', respectively. These variants are semi-obsolete in GNU `make' since the functions `dir' and `notdir' can be used to get a similar effect (*note Functions for File Names: File Name Functions.). 
Note, however, that the `D' variants all omit the trailing slash which always appears in the output of the `dir' function. Here is a table of the variants: `$(@D)' The directory part of the file name of the target, with the trailing slash removed. If the value of `$@' is `dir/foo.o' then `$(@D)' is `dir'. This value is `.' if `$@' does not contain a slash. `$(@F)' The file-within-directory part of the file name of the target. If the value of `$@' is `dir/foo.o' then `$(@F)' is `foo.o'. `$(@F)' is equivalent to `$(notdir $@)'. `$(*D)' `$(*F)' The directory part and the file-within-directory part of the stem; `dir' and `foo' in this example. `$(%D)' `$(%F)' The directory part and the file-within-directory part of the target archive member name. This makes sense only for archive member targets of the form `ARCHIVE(MEMBER)' and is useful only when MEMBER may contain a directory name. (*Note Archive Members as Targets: Archive Members.) `$( foo.1 will fail when the build directory is not the source directory, because `foo.man' and `sedscript' are in the source directory. When using GNU `make', relying on `VPATH' to find the source file will work in the case where there is a single dependency file, since the `make' automatic variable `$<' will represent the source file wherever it is. (Many versions of `make' set `$<' only in implicit rules.) A Makefile target like foo.o : bar.c $(CC) -I. -I$(srcdir) $(CFLAGS) -c bar.c -o foo.o should instead be written as foo.o : bar.c $(CC) -I. -I$(srcdir) $(CFLAGS) -c $< -o $@ in order to allow `VPATH' to work correctly. When the target has multiple dependencies, using an explicit `$(srcdir)' is the easiest way to make the rule work well. For example, the target above for `foo.1' is best written as: foo.1 : foo.man sedscript sed -e $(srcdir)/sedscript $(srcdir)/foo.man > $@ GNU distributions usually contain some files which are not source files--for example, Info files, and the output from Autoconf, Automake, Bison or Flex. Since these files normally appear in the source directory, they should always appear in the source directory, not in the build directory. So Makefile rules to update them should put the updated files in the source directory. However, if a file does not appear in the distribution, then the Makefile should not put it in the source directory, because building a program in ordinary circumstances should not modify the source directory in any way. Try to make the build and installation targets, at least (and all their subtargets) work correctly with a parallel `make'.  File: make.info, Node: Utilities in Makefiles, Next: Command Variables, Prev: Makefile Basics, Up: Makefile Conventions 14.2 Utilities in Makefiles =========================== Write the Makefile commands (and any shell scripts, such as `configure') to run in `sh', not in `csh'. Don't use any special features of `ksh' or `bash'. The `configure' script and the Makefile rules for building and installation should not use any utilities directly except these: cat cmp cp diff echo egrep expr false grep install-info ln ls mkdir mv pwd rm rmdir sed sleep sort tar test touch true The compression program `gzip' can be used in the `dist' rule. Stick to the generally supported options for these programs. For example, don't use `mkdir -p', convenient as it may be, because most systems don't support it. It is a good idea to avoid creating symbolic links in makefiles, since a few systems don't support them. 
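For example, here is one portable way to create nested installation directories without `mkdir -p', using only utilities from the list above. This is a sketch rather than part of the conventions; `$(prefix)' and the `foo' subdirectory are placeholders for whatever directories your package actually installs into:

     installdirs:
             test -d $(prefix) || mkdir $(prefix)
             test -d $(prefix)/lib || mkdir $(prefix)/lib
             test -d $(prefix)/lib/foo || mkdir $(prefix)/lib/foo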
The Makefile rules for building and installation can also use compilers and related programs, but should do so via `make' variables so that the user can substitute alternatives. Here are some of the programs we mean: ar bison cc flex install ld ldconfig lex make makeinfo ranlib texi2dvi yacc Use the following `make' variables to run those programs: $(AR) $(BISON) $(CC) $(FLEX) $(INSTALL) $(LD) $(LDCONFIG) $(LEX) $(MAKE) $(MAKEINFO) $(RANLIB) $(TEXI2DVI) $(YACC) When you use `ranlib' or `ldconfig', you should make sure nothing bad happens if the system does not have the program in question. Arrange to ignore an error from that command, and print a message before the command to tell the user that failure of this command does not mean a problem. (The Autoconf `AC_PROG_RANLIB' macro can help with this.) If you use symbolic links, you should implement a fallback for systems that don't have symbolic links. Additional utilities that can be used via Make variables are: chgrp chmod chown mknod It is ok to use other utilities in Makefile portions (or scripts) intended only for particular systems where you know those utilities exist.  File: make.info, Node: Command Variables, Next: Directory Variables, Prev: Utilities in Makefiles, Up: Makefile Conventions 14.3 Variables for Specifying Commands ====================================== Makefiles should provide variables for overriding certain commands, options, and so on. In particular, you should run most utility programs via variables. Thus, if you use Bison, have a variable named `BISON' whose default value is set with `BISON = bison', and refer to it with `$(BISON)' whenever you need to use Bison. File management utilities such as `ln', `rm', `mv', and so on, need not be referred to through variables in this way, since users don't need to replace them with other programs. Each program-name variable should come with an options variable that is used to supply options to the program. Append `FLAGS' to the program-name variable name to get the options variable name--for example, `BISONFLAGS'. (The names `CFLAGS' for the C compiler, `YFLAGS' for yacc, and `LFLAGS' for lex, are exceptions to this rule, but we keep them because they are standard.) Use `CPPFLAGS' in any compilation command that runs the preprocessor, and use `LDFLAGS' in any compilation command that does linking as well as in any direct use of `ld'. If there are C compiler options that _must_ be used for proper compilation of certain files, do not include them in `CFLAGS'. Users expect to be able to specify `CFLAGS' freely themselves. Instead, arrange to pass the necessary options to the C compiler independently of `CFLAGS', by writing them explicitly in the compilation commands or by defining an implicit rule, like this: CFLAGS = -g ALL_CFLAGS = -I. $(CFLAGS) .c.o: $(CC) -c $(CPPFLAGS) $(ALL_CFLAGS) $< Do include the `-g' option in `CFLAGS', because that is not _required_ for proper compilation. You can consider it a default that is only recommended. If the package is set up so that it is compiled with GCC by default, then you might as well include `-O' in the default value of `CFLAGS' as well. Put `CFLAGS' last in the compilation command, after other variables containing compiler options, so the user can use `CFLAGS' to override the others. `CFLAGS' should be used in every invocation of the C compiler, both those which do compilation and those which do linking. Every Makefile should define the variable `INSTALL', which is the basic command for installing a file into the system. 
Every Makefile should also define the variables `INSTALL_PROGRAM' and `INSTALL_DATA'. (The default for `INSTALL_PROGRAM' should be `$(INSTALL)'; the default for `INSTALL_DATA' should be `${INSTALL} -m 644'.) Then it should use those variables as the commands for actual installation, for executables and nonexecutables respectively. Use these variables as follows: $(INSTALL_PROGRAM) foo $(bindir)/foo $(INSTALL_DATA) libfoo.a $(libdir)/libfoo.a Optionally, you may prepend the value of `DESTDIR' to the target filename. Doing this allows the installer to create a snapshot of the installation to be copied onto the real target filesystem later. Do not set the value of `DESTDIR' in your Makefile, and do not include it in any installed files. With support for `DESTDIR', the above examples become: $(INSTALL_PROGRAM) foo $(DESTDIR)$(bindir)/foo $(INSTALL_DATA) libfoo.a $(DESTDIR)$(libdir)/libfoo.a Always use a file name, not a directory name, as the second argument of the installation commands. Use a separate command for each file to be installed.  File: make.info, Node: Directory Variables, Next: Standard Targets, Prev: Command Variables, Up: Makefile Conventions 14.4 Variables for Installation Directories =========================================== Installation directories should always be named by variables, so it is easy to install in a nonstandard place. The standard names for these variables and the values they should have in GNU packages are described below. They are based on a standard filesystem layout; variants of it are used in GNU/Linux and other modern operating systems. Installers are expected to override these values when calling `make' (e.g., `make prefix=/usr install' or `configure' (e.g., `configure --prefix=/usr'). GNU packages should not try to guess which value should be appropriate for these variables on the system they are being installed onto: use the default settings specified here so that all GNU packages behave identically, allowing the installer to achieve any desired layout. These two variables set the root for the installation. All the other installation directories should be subdirectories of one of these two, and nothing should be directly installed into these two directories. `prefix' A prefix used in constructing the default values of the variables listed below. The default value of `prefix' should be `/usr/local'. When building the complete GNU system, the prefix will be empty and `/usr' will be a symbolic link to `/'. (If you are using Autoconf, write it as `@prefix@'.) Running `make install' with a different value of `prefix' from the one used to build the program should _not_ recompile the program. `exec_prefix' A prefix used in constructing the default values of some of the variables listed below. The default value of `exec_prefix' should be `$(prefix)'. (If you are using Autoconf, write it as `@exec_prefix@'.) Generally, `$(exec_prefix)' is used for directories that contain machine-specific files (such as executables and subroutine libraries), while `$(prefix)' is used directly for other directories. Running `make install' with a different value of `exec_prefix' from the one used to build the program should _not_ recompile the program. Executable programs are installed in one of the following directories. `bindir' The directory for installing executable programs that users can run. This should normally be `/usr/local/bin', but write it as `$(exec_prefix)/bin'. (If you are using Autoconf, write it as `@bindir@'.) 
`sbindir' The directory for installing executable programs that can be run from the shell, but are only generally useful to system administrators. This should normally be `/usr/local/sbin', but write it as `$(exec_prefix)/sbin'. (If you are using Autoconf, write it as `@sbindir@'.) `libexecdir' The directory for installing executable programs to be run by other programs rather than by users. This directory should normally be `/usr/local/libexec', but write it as `$(exec_prefix)/libexec'. (If you are using Autoconf, write it as `@libexecdir@'.) The definition of `libexecdir' is the same for all packages, so you should install your data in a subdirectory thereof. Most packages install their data under `$(libexecdir)/PACKAGE-NAME/', possibly within additional subdirectories thereof, such as `$(libexecdir)/PACKAGE-NAME/MACHINE/VERSION'. Data files used by the program during its execution are divided into categories in two ways. * Some files are normally modified by programs; others are never normally modified (though users may edit some of these). * Some files are architecture-independent and can be shared by all machines at a site; some are architecture-dependent and can be shared only by machines of the same kind and operating system; others may never be shared between two machines. This makes for six different possibilities. However, we want to discourage the use of architecture-dependent files, aside from object files and libraries. It is much cleaner to make other data files architecture-independent, and it is generally not hard. Here are the variables Makefiles should use to specify directories to put these various kinds of files in: `datarootdir' The root of the directory tree for read-only architecture-independent data files. This should normally be `/usr/local/share', but write it as `$(prefix)/share'. (If you are using Autoconf, write it as `@datarootdir@'.) `datadir''s default value is based on this variable; so are `infodir', `mandir', and others. `datadir' The directory for installing idiosyncratic read-only architecture-independent data files for this program. This is usually the same place as `datarootdir', but we use the two separate variables so that you can move these program-specific files without altering the location for Info files, man pages, etc. This should normally be `/usr/local/share', but write it as `$(datarootdir)'. (If you are using Autoconf, write it as `@datadir@'.) The definition of `datadir' is the same for all packages, so you should install your data in a subdirectory thereof. Most packages install their data under `$(datadir)/PACKAGE-NAME/'. `sysconfdir' The directory for installing read-only data files that pertain to a single machine-that is to say, files for configuring a host. Mailer and network configuration files, `/etc/passwd', and so forth belong here. All the files in this directory should be ordinary ASCII text files. This directory should normally be `/usr/local/etc', but write it as `$(prefix)/etc'. (If you are using Autoconf, write it as `@sysconfdir@'.) Do not install executables here in this directory (they probably belong in `$(libexecdir)' or `$(sbindir)'). Also do not install files that are modified in the normal course of their use (programs whose purpose is to change the configuration of the system excluded). Those probably belong in `$(localstatedir)'. `sharedstatedir' The directory for installing architecture-independent data files which the programs modify while they run. 
This should normally be `/usr/local/com', but write it as `$(prefix)/com'. (If you are using Autoconf, write it as `@sharedstatedir@'.) `localstatedir' The directory for installing data files which the programs modify while they run, and that pertain to one specific machine. Users should never need to modify files in this directory to configure the package's operation; put such configuration information in separate files that go in `$(datadir)' or `$(sysconfdir)'. `$(localstatedir)' should normally be `/usr/local/var', but write it as `$(prefix)/var'. (If you are using Autoconf, write it as `@localstatedir@'.) These variables specify the directory for installing certain specific types of files, if your program has them. Every GNU package should have Info files, so every program needs `infodir', but not all need `libdir' or `lispdir'. `includedir' The directory for installing header files to be included by user programs with the C `#include' preprocessor directive. This should normally be `/usr/local/include', but write it as `$(prefix)/include'. (If you are using Autoconf, write it as `@includedir@'.) Most compilers other than GCC do not look for header files in directory `/usr/local/include'. So installing the header files this way is only useful with GCC. Sometimes this is not a problem because some libraries are only really intended to work with GCC. But some libraries are intended to work with other compilers. They should install their header files in two places, one specified by `includedir' and one specified by `oldincludedir'. `oldincludedir' The directory for installing `#include' header files for use with compilers other than GCC. This should normally be `/usr/include'. (If you are using Autoconf, you can write it as `@oldincludedir@'.) The Makefile commands should check whether the value of `oldincludedir' is empty. If it is, they should not try to use it; they should cancel the second installation of the header files. A package should not replace an existing header in this directory unless the header came from the same package. Thus, if your Foo package provides a header file `foo.h', then it should install the header file in the `oldincludedir' directory if either (1) there is no `foo.h' there or (2) the `foo.h' that exists came from the Foo package. To tell whether `foo.h' came from the Foo package, put a magic string in the file--part of a comment--and `grep' for that string. `docdir' The directory for installing documentation files (other than Info) for this package. By default, it should be `/usr/local/share/doc/YOURPKG', but it should be written as `$(datarootdir)/doc/YOURPKG'. (If you are using Autoconf, write it as `@docdir@'.) The YOURPKG subdirectory, which may include a version number, prevents collisions among files with common names, such as `README'. `infodir' The directory for installing the Info files for this package. By default, it should be `/usr/local/share/info', but it should be written as `$(datarootdir)/info'. (If you are using Autoconf, write it as `@infodir@'.) `infodir' is separate from `docdir' for compatibility with existing practice. `htmldir' `dvidir' `pdfdir' `psdir' Directories for installing documentation files in the particular format. (It is not required to support documentation in all these formats.) They should all be set to `$(docdir)' by default. (If you are using Autoconf, write them as `@htmldir@', `@dvidir@', etc.) 
Packages which supply several translations of their documentation should install them in `$(htmldir)/'LL, `$(pdfdir)/'LL, etc. where LL is a locale abbreviation such as `en' or `pt_BR'. `libdir' The directory for object files and libraries of object code. Do not install executables here, they probably ought to go in `$(libexecdir)' instead. The value of `libdir' should normally be `/usr/local/lib', but write it as `$(exec_prefix)/lib'. (If you are using Autoconf, write it as `@libdir@'.) `lispdir' The directory for installing any Emacs Lisp files in this package. By default, it should be `/usr/local/share/emacs/site-lisp', but it should be written as `$(datarootdir)/emacs/site-lisp'. If you are using Autoconf, write the default as `@lispdir@'. In order to make `@lispdir@' work, you need the following lines in your `configure.in' file: lispdir='${datarootdir}/emacs/site-lisp' AC_SUBST(lispdir) `localedir' The directory for installing locale-specific message catalogs for this package. By default, it should be `/usr/local/share/locale', but it should be written as `$(datarootdir)/locale'. (If you are using Autoconf, write it as `@localedir@'.) This directory usually has a subdirectory per locale. Unix-style man pages are installed in one of the following: `mandir' The top-level directory for installing the man pages (if any) for this package. It will normally be `/usr/local/share/man', but you should write it as `$(datarootdir)/man'. (If you are using Autoconf, write it as `@mandir@'.) `man1dir' The directory for installing section 1 man pages. Write it as `$(mandir)/man1'. `man2dir' The directory for installing section 2 man pages. Write it as `$(mandir)/man2' `...' *Don't make the primary documentation for any GNU software be a man page. Write a manual in Texinfo instead. Man pages are just for the sake of people running GNU software on Unix, which is a secondary application only.* `manext' The file name extension for the installed man page. This should contain a period followed by the appropriate digit; it should normally be `.1'. `man1ext' The file name extension for installed section 1 man pages. `man2ext' The file name extension for installed section 2 man pages. `...' Use these names instead of `manext' if the package needs to install man pages in more than one section of the manual. And finally, you should set the following variable: `srcdir' The directory for the sources being compiled. The value of this variable is normally inserted by the `configure' shell script. (If you are using Autconf, use `srcdir = @srcdir@'.) For example: # Common prefix for installation directories. # NOTE: This directory must exist when you start the install. prefix = /usr/local datarootdir = $(prefix)/share datadir = $(datarootdir) exec_prefix = $(prefix) # Where to put the executable for the command `gcc'. bindir = $(exec_prefix)/bin # Where to put the directories used by the compiler. libexecdir = $(exec_prefix)/libexec # Where to put the Info files. infodir = $(datarootdir)/info If your program installs a large number of files into one of the standard user-specified directories, it might be useful to group them into a subdirectory particular to that program. If you do this, you should write the `install' rule to create these subdirectories. Do not expect the user to include the subdirectory name in the value of any of the variables listed above. 
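   As a hedged illustration of the last point (the package name `foo'
and the data file `foo.tbl' are invented), an install fragment can
create the program-specific subdirectory of `$(datadir)' itself, using
the `mkinstalldirs' script described under the `installdirs' target
below:

     # Create the program's own subdirectory of $(datadir) and
     # install a data file there; the user only sets datadir
     # (or prefix), never the `foo' subdirectory.
     install-data: foo.tbl
             $(SHELL) $(srcdir)/mkinstalldirs $(DESTDIR)$(datadir)/foo
             $(INSTALL_DATA) foo.tbl $(DESTDIR)$(datadir)/foo/foo.tbl

The user still overrides only `datadir' (or `prefix'); the `foo'
subdirectory remains the package's own business.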
The idea of having a uniform set of variable names for installation directories is to enable the user to specify the exact same values for several different GNU packages. In order for this to be useful, all the packages must be designed so that they will work sensibly when the user does so.  File: make.info, Node: Standard Targets, Next: Install Command Categories, Prev: Directory Variables, Up: Makefile Conventions 14.5 Standard Targets for Users =============================== All GNU programs should have the following targets in their Makefiles: `all' Compile the entire program. This should be the default target. This target need not rebuild any documentation files; Info files should normally be included in the distribution, and DVI files should be made only when explicitly asked for. By default, the Make rules should compile and link with `-g', so that executable programs have debugging symbols. Users who don't mind being helpless can strip the executables later if they wish. `install' Compile the program and copy the executables, libraries, and so on to the file names where they should reside for actual use. If there is a simple test to verify that a program is properly installed, this target should run that test. Do not strip executables when installing them. Devil-may-care users can use the `install-strip' target to do that. If possible, write the `install' target rule so that it does not modify anything in the directory where the program was built, provided `make all' has just been done. This is convenient for building the program under one user name and installing it under another. The commands should create all the directories in which files are to be installed, if they don't already exist. This includes the directories specified as the values of the variables `prefix' and `exec_prefix', as well as all subdirectories that are needed. One way to do this is by means of an `installdirs' target as described below. Use `-' before any command for installing a man page, so that `make' will ignore any errors. This is in case there are systems that don't have the Unix man page documentation system installed. The way to install Info files is to copy them into `$(infodir)' with `$(INSTALL_DATA)' (*note Command Variables::), and then run the `install-info' program if it is present. `install-info' is a program that edits the Info `dir' file to add or update the menu entry for the given Info file; it is part of the Texinfo package. Here is a sample rule to install an Info file: $(DESTDIR)$(infodir)/foo.info: foo.info $(POST_INSTALL) # There may be a newer info file in . than in srcdir. -if test -f foo.info; then d=.; \ else d=$(srcdir); fi; \ $(INSTALL_DATA) $$d/foo.info $(DESTDIR)$@; \ # Run install-info only if it exists. # Use `if' instead of just prepending `-' to the # line so we notice real errors from install-info. # We use `$(SHELL) -c' because some shells do not # fail gracefully when there is an unknown command. if $(SHELL) -c 'install-info --version' \ >/dev/null 2>&1; then \ install-info --dir-file=$(DESTDIR)$(infodir)/dir \ $(DESTDIR)$(infodir)/foo.info; \ else true; fi When writing the `install' target, you must classify all the commands into three categories: normal ones, "pre-installation" commands and "post-installation" commands. *Note Install Command Categories::. 
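     Pulling these points together, here is a hedged sketch of a
     minimal `install' rule; the program name `foo' and its man page
     are invented, and category lines are omitted here (they are
     described in *note Install Command Categories::):

          # Sketch: depend on `all' and `installdirs' so that the
          # program is built and the destination directories exist.
          # The `-' lets the man page installation fail harmlessly
          # on systems without the man page system.
          install: all installdirs
                  $(INSTALL_PROGRAM) foo $(DESTDIR)$(bindir)/foo
                  -$(INSTALL_DATA) $(srcdir)/foo.1 $(DESTDIR)$(man1dir)/foo.1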
`install-html' `install-dvi' `install-pdf' `install-ps' These targets install documentation in formats other than Info; they're intended to be called explicitly by the person installing the package, if that format is desired. GNU prefers Info files, so these must be installed by the `install' target. When you have many documentation files to install, we recommend that you avoid collisions and clutter by arranging for these targets to install in subdirectories of the appropriate installation directory, such as `htmldir'. As one example, if your package has multiple manuals, and you wish to install HTML documentation with many files (such as the "split" mode output by `makeinfo --html'), you'll certainly want to use subdirectories, or two nodes with the same name in different manuals will overwrite each other. `uninstall' Delete all the installed files--the copies that the `install' and `install-*' targets create. This rule should not modify the directories where compilation is done, only the directories where files are installed. The uninstallation commands are divided into three categories, just like the installation commands. *Note Install Command Categories::. `install-strip' Like `install', but strip the executable files while installing them. In simple cases, this target can use the `install' target in a simple way: install-strip: $(MAKE) INSTALL_PROGRAM='$(INSTALL_PROGRAM) -s' \ install But if the package installs scripts as well as real executables, the `install-strip' target can't just refer to the `install' target; it has to strip the executables but not the scripts. `install-strip' should not strip the executables in the build directory which are being copied for installation. It should only strip the copies that are installed. Normally we do not recommend stripping an executable unless you are sure the program has no bugs. However, it can be reasonable to install a stripped executable for actual execution while saving the unstripped executable elsewhere in case there is a bug. `clean' Delete all files in the current directory that are normally created by building the program. Also delete files in other directories if they are created by this makefile. However, don't delete the files that record the configuration. Also preserve files that could be made by building, but normally aren't because the distribution comes with them. There is no need to delete parent directories that were created with `mkdir -p', since they could have existed anyway. Delete `.dvi' files here if they are not part of the distribution. `distclean' Delete all files in the current directory (or created by this makefile) that are created by configuring or building the program. If you have unpacked the source and built the program without creating any other files, `make distclean' should leave only the files that were in the distribution. However, there is no need to delete parent directories that were created with `mkdir -p', since they could have existed anyway. `mostlyclean' Like `clean', but may refrain from deleting a few files that people normally don't want to recompile. For example, the `mostlyclean' target for GCC does not delete `libgcc.a', because recompiling it is rarely necessary and takes a lot of time. `maintainer-clean' Delete almost everything that can be reconstructed with this Makefile. This typically includes everything deleted by `distclean', plus more: C source files produced by Bison, tags tables, Info files, and so on. 
The reason we say "almost everything" is that running the command `make maintainer-clean' should not delete `configure' even if `configure' can be remade using a rule in the Makefile. More generally, `make maintainer-clean' should not delete anything that needs to exist in order to run `configure' and then begin to build the program. Also, there is no need to delete parent directories that were created with `mkdir -p', since they could have existed anyway. These are the only exceptions; `maintainer-clean' should delete everything else that can be rebuilt. The `maintainer-clean' target is intended to be used by a maintainer of the package, not by ordinary users. You may need special tools to reconstruct some of the files that `make maintainer-clean' deletes. Since these files are normally included in the distribution, we don't take care to make them easy to reconstruct. If you find you need to unpack the full distribution again, don't blame us. To help make users aware of this, the commands for the special `maintainer-clean' target should start with these two: @echo 'This command is intended for maintainers to use; it' @echo 'deletes files that may need special tools to rebuild.' `TAGS' Update a tags table for this program. `info' Generate any Info files needed. The best way to write the rules is as follows: info: foo.info foo.info: foo.texi chap1.texi chap2.texi $(MAKEINFO) $(srcdir)/foo.texi You must define the variable `MAKEINFO' in the Makefile. It should run the `makeinfo' program, which is part of the Texinfo distribution. Normally a GNU distribution comes with Info files, and that means the Info files are present in the source directory. Therefore, the Make rule for an info file should update it in the source directory. When users build the package, ordinarily Make will not update the Info files because they will already be up to date. `dvi' `html' `pdf' `ps' Generate documentation files in the given format, if possible. Here's an example rule for generating DVI files from Texinfo: dvi: foo.dvi foo.dvi: foo.texi chap1.texi chap2.texi $(TEXI2DVI) $(srcdir)/foo.texi You must define the variable `TEXI2DVI' in the Makefile. It should run the program `texi2dvi', which is part of the Texinfo distribution.(1) Alternatively, write just the dependencies, and allow GNU `make' to provide the command. Here's another example, this one for generating HTML from Texinfo: html: foo.html foo.html: foo.texi chap1.texi chap2.texi $(TEXI2HTML) $(srcdir)/foo.texi Again, you would define the variable `TEXI2HTML' in the Makefile; for example, it might run `makeinfo --no-split --html' (`makeinfo' is part of the Texinfo distribution). `dist' Create a distribution tar file for this program. The tar file should be set up so that the file names in the tar file start with a subdirectory name which is the name of the package it is a distribution for. This name can include the version number. For example, the distribution tar file of GCC version 1.40 unpacks into a subdirectory named `gcc-1.40'. The easiest way to do this is to create a subdirectory appropriately named, use `ln' or `cp' to install the proper files in it, and then `tar' that subdirectory. Compress the tar file with `gzip'. For example, the actual distribution file for GCC version 1.40 is called `gcc-1.40.tar.gz'. The `dist' target should explicitly depend on all non-source files that are in the distribution, to make sure they are up to date in the distribution. *Note Making Releases: (standards)Releases. `check' Perform self-tests (if any). 
The user must build the program before running the tests, but need not install the program; you should write the self-tests so that they work when the program is built but not installed. The following targets are suggested as conventional names, for programs in which they are useful. `installcheck' Perform installation tests (if any). The user must build and install the program before running the tests. You should not assume that `$(bindir)' is in the search path. `installdirs' It's useful to add a target named `installdirs' to create the directories where files are installed, and their parent directories. There is a script called `mkinstalldirs' which is convenient for this; you can find it in the Texinfo package. You can use a rule like this: # Make sure all installation directories (e.g. $(bindir)) # actually exist by making them if necessary. installdirs: mkinstalldirs $(srcdir)/mkinstalldirs $(bindir) $(datadir) \ $(libdir) $(infodir) \ $(mandir) or, if you wish to support `DESTDIR', # Make sure all installation directories (e.g. $(bindir)) # actually exist by making them if necessary. installdirs: mkinstalldirs $(srcdir)/mkinstalldirs \ $(DESTDIR)$(bindir) $(DESTDIR)$(datadir) \ $(DESTDIR)$(libdir) $(DESTDIR)$(infodir) \ $(DESTDIR)$(mandir) This rule should not modify the directories where compilation is done. It should do nothing but create installation directories. ---------- Footnotes ---------- (1) `texi2dvi' uses TeX to do the real work of formatting. TeX is not distributed with Texinfo.  File: make.info, Node: Install Command Categories, Prev: Standard Targets, Up: Makefile Conventions 14.6 Install Command Categories =============================== When writing the `install' target, you must classify all the commands into three categories: normal ones, "pre-installation" commands and "post-installation" commands. Normal commands move files into their proper places, and set their modes. They may not alter any files except the ones that come entirely from the package they belong to. Pre-installation and post-installation commands may alter other files; in particular, they can edit global configuration files or data bases. Pre-installation commands are typically executed before the normal commands, and post-installation commands are typically run after the normal commands. The most common use for a post-installation command is to run `install-info'. This cannot be done with a normal command, since it alters a file (the Info directory) which does not come entirely and solely from the package being installed. It is a post-installation command because it needs to be done after the normal command which installs the package's Info files. Most programs don't need any pre-installation commands, but we have the feature just in case it is needed. To classify the commands in the `install' rule into these three categories, insert "category lines" among them. A category line specifies the category for the commands that follow. A category line consists of a tab and a reference to a special Make variable, plus an optional comment at the end. There are three variables you can use, one for each category; the variable name specifies the category. Category lines are no-ops in ordinary execution because these three Make variables are normally undefined (and you _should not_ define them in the makefile). Here are the three possible category lines, each with a comment that explains what it means: $(PRE_INSTALL) # Pre-install commands follow. $(POST_INSTALL) # Post-install commands follow. 
$(NORMAL_INSTALL) # Normal commands follow. If you don't use a category line at the beginning of the `install' rule, all the commands are classified as normal until the first category line. If you don't use any category lines, all the commands are classified as normal. These are the category lines for `uninstall': $(PRE_UNINSTALL) # Pre-uninstall commands follow. $(POST_UNINSTALL) # Post-uninstall commands follow. $(NORMAL_UNINSTALL) # Normal commands follow. Typically, a pre-uninstall command would be used for deleting entries from the Info directory. If the `install' or `uninstall' target has any dependencies which act as subroutines of installation, then you should start _each_ dependency's commands with a category line, and start the main target's commands with a category line also. This way, you can ensure that each command is placed in the right category regardless of which of the dependencies actually run. Pre-installation and post-installation commands should not run any programs except for these: [ basename bash cat chgrp chmod chown cmp cp dd diff echo egrep expand expr false fgrep find getopt grep gunzip gzip hostname install install-info kill ldconfig ln ls md5sum mkdir mkfifo mknod mv printenv pwd rm rmdir sed sort tee test touch true uname xargs yes The reason for distinguishing the commands in this way is for the sake of making binary packages. Typically a binary package contains all the executables and other files that need to be installed, and has its own method of installing them--so it does not need to run the normal installation commands. But installing the binary package does need to execute the pre-installation and post-installation commands. Programs to build binary packages work by extracting the pre-installation and post-installation commands. Here is one way of extracting the pre-installation commands (the `-s' option to `make' is needed to silence messages about entering subdirectories): make -s -n install -o all \ PRE_INSTALL=pre-install \ POST_INSTALL=post-install \ NORMAL_INSTALL=normal-install \ | gawk -f pre-install.awk where the file `pre-install.awk' could contain this: $0 ~ /^(normal-install|post-install)[ \t]*$/ {on = 0} on {print $0} $0 ~ /^pre-install[ \t]*$/ {on = 1}  File: make.info, Node: Quick Reference, Next: Error Messages, Prev: Makefile Conventions, Up: Top Appendix A Quick Reference ************************** This appendix summarizes the directives, text manipulation functions, and special variables which GNU `make' understands. *Note Special Targets::, *Note Catalogue of Implicit Rules: Catalogue of Rules, and *Note Summary of Options: Options Summary, for other summaries. Here is a summary of the directives GNU `make' recognizes: `define VARIABLE' `endef' Define a multi-line, recursively-expanded variable. *Note Sequences::. `ifdef VARIABLE' `ifndef VARIABLE' `ifeq (A,B)' `ifeq "A" "B"' `ifeq 'A' 'B'' `ifneq (A,B)' `ifneq "A" "B"' `ifneq 'A' 'B'' `else' `endif' Conditionally evaluate part of the makefile. *Note Conditionals::. `include FILE' `-include FILE' `sinclude FILE' Include another makefile. *Note Including Other Makefiles: Include. `override VARIABLE = VALUE' `override VARIABLE := VALUE' `override VARIABLE += VALUE' `override VARIABLE ?= VALUE' `override define VARIABLE' `endef' Define a variable, overriding any previous definition, even one from the command line. *Note The `override' Directive: Override Directive. `export' Tell `make' to export all variables to child processes by default. 
*Note Communicating Variables to a Sub-`make': Variables/Recursion. `export VARIABLE' `export VARIABLE = VALUE' `export VARIABLE := VALUE' `export VARIABLE += VALUE' `export VARIABLE ?= VALUE' `unexport VARIABLE' Tell `make' whether or not to export a particular variable to child processes. *Note Communicating Variables to a Sub-`make': Variables/Recursion. `vpath PATTERN PATH' Specify a search path for files matching a `%' pattern. *Note The `vpath' Directive: Selective Search. `vpath PATTERN' Remove all search paths previously specified for PATTERN. `vpath' Remove all search paths previously specified in any `vpath' directive. Here is a summary of the built-in functions (*note Functions::): `$(subst FROM,TO,TEXT)' Replace FROM with TO in TEXT. *Note Functions for String Substitution and Analysis: Text Functions. `$(patsubst PATTERN,REPLACEMENT,TEXT)' Replace words matching PATTERN with REPLACEMENT in TEXT. *Note Functions for String Substitution and Analysis: Text Functions. `$(strip STRING)' Remove excess whitespace characters from STRING. *Note Functions for String Substitution and Analysis: Text Functions. `$(findstring FIND,TEXT)' Locate FIND in TEXT. *Note Functions for String Substitution and Analysis: Text Functions. `$(filter PATTERN...,TEXT)' Select words in TEXT that match one of the PATTERN words. *Note Functions for String Substitution and Analysis: Text Functions. `$(filter-out PATTERN...,TEXT)' Select words in TEXT that _do not_ match any of the PATTERN words. *Note Functions for String Substitution and Analysis: Text Functions. `$(sort LIST)' Sort the words in LIST lexicographically, removing duplicates. *Note Functions for String Substitution and Analysis: Text Functions. `$(word N,TEXT)' Extract the Nth word (one-origin) of TEXT. *Note Functions for String Substitution and Analysis: Text Functions. `$(words TEXT)' Count the number of words in TEXT. *Note Functions for String Substitution and Analysis: Text Functions. `$(wordlist S,E,TEXT)' Returns the list of words in TEXT from S to E. *Note Functions for String Substitution and Analysis: Text Functions. `$(firstword NAMES...)' Extract the first word of NAMES. *Note Functions for String Substitution and Analysis: Text Functions. `$(lastword NAMES...)' Extract the last word of NAMES. *Note Functions for String Substitution and Analysis: Text Functions. `$(dir NAMES...)' Extract the directory part of each file name. *Note Functions for File Names: File Name Functions. `$(notdir NAMES...)' Extract the non-directory part of each file name. *Note Functions for File Names: File Name Functions. `$(suffix NAMES...)' Extract the suffix (the last `.' and following characters) of each file name. *Note Functions for File Names: File Name Functions. `$(basename NAMES...)' Extract the base name (name without suffix) of each file name. *Note Functions for File Names: File Name Functions. `$(addsuffix SUFFIX,NAMES...)' Append SUFFIX to each word in NAMES. *Note Functions for File Names: File Name Functions. `$(addprefix PREFIX,NAMES...)' Prepend PREFIX to each word in NAMES. *Note Functions for File Names: File Name Functions. `$(join LIST1,LIST2)' Join two parallel lists of words. *Note Functions for File Names: File Name Functions. `$(wildcard PATTERN...)' Find file names matching a shell file name pattern (_not_ a `%' pattern). *Note The Function `wildcard': Wildcard Function. `$(realpath NAMES...)' For each file name in NAMES, expand to an absolute name that does not contain any `.', `..', nor symlinks. 
*Note Functions for File Names: File Name Functions. `$(abspath NAMES...)' For each file name in NAMES, expand to an absolute name that does not contain any `.' or `..' components, but preserves symlinks. *Note Functions for File Names: File Name Functions. `$(error TEXT...)' When this function is evaluated, `make' generates a fatal error with the message TEXT. *Note Functions That Control Make: Make Control Functions. `$(warning TEXT...)' When this function is evaluated, `make' generates a warning with the message TEXT. *Note Functions That Control Make: Make Control Functions. `$(shell COMMAND)' Execute a shell command and return its output. *Note The `shell' Function: Shell Function. `$(origin VARIABLE)' Return a string describing how the `make' variable VARIABLE was defined. *Note The `origin' Function: Origin Function. `$(flavor VARIABLE)' Return a string describing the flavor of the `make' variable VARIABLE. *Note The `flavor' Function: Flavor Function. `$(foreach VAR,WORDS,TEXT)' Evaluate TEXT with VAR bound to each word in WORDS, and concatenate the results. *Note The `foreach' Function: Foreach Function. `$(call VAR,PARAM,...)' Evaluate the variable VAR replacing any references to `$(1)', `$(2)' with the first, second, etc. PARAM values. *Note The `call' Function: Call Function. `$(eval TEXT)' Evaluate TEXT then read the results as makefile commands. Expands to the empty string. *Note The `eval' Function: Eval Function. `$(value VAR)' Evaluates to the contents of the variable VAR, with no expansion performed on it. *Note The `value' Function: Value Function. Here is a summary of the automatic variables. *Note Automatic Variables::, for full information. `$@' The file name of the target. `$%' The target member name, when the target is an archive member. `$<' The name of the first prerequisite. `$?' The names of all the prerequisites that are newer than the target, with spaces between them. For prerequisites which are archive members, only the member named is used (*note Archives::). `$^' `$+' The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the member named is used (*note Archives::). The value of `$^' omits duplicate prerequisites, while `$+' retains them and preserves their order. `$*' The stem with which an implicit rule matches (*note How Patterns Match: Pattern Match.). `$(@D)' `$(@F)' The directory part and the file-within-directory part of `$@'. `$(*D)' `$(*F)' The directory part and the file-within-directory part of `$*'. `$(%D)' `$(%F)' The directory part and the file-within-directory part of `$%'. `$( tar-`sed -e '/version_string/!d' \ -e 's/[^0-9.]*\([0-9.]*\).*/\1/' \ -e q version.c`.shar.Z .PHONY: dist dist: $(SRCS) $(AUX) echo tar-`sed \ -e '/version_string/!d' \ -e 's/[^0-9.]*\([0-9.]*\).*/\1/' \ -e q version.c` > .fname -rm -rf `cat .fname` mkdir `cat .fname` ln $(SRCS) $(AUX) `cat .fname` tar chZf `cat .fname`.tar.Z `cat .fname` -rm -rf `cat .fname` .fname tar.zoo: $(SRCS) $(AUX) -rm -rf tmp.dir -mkdir tmp.dir -rm tar.zoo for X in $(SRCS) $(AUX) ; do \ echo $$X ; \ sed 's/$$/^M/' $$X \ > tmp.dir/$$X ; done cd tmp.dir ; zoo aM ../tar.zoo * -rm -rf tmp.dir  File: make.info, Node: GNU Free Documentation License, Next: Concept Index, Prev: Complex Makefile, Up: Top Appendix D GNU Free Documentation License ***************************************** Version 1.2, November 2002 Copyright (C) 2000,2001,2002 Free Software Foundation, Inc. 
51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. 0. PREAMBLE The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others. This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software. We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference. 1. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law. A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language. A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them. The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none. The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words. 
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque". Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only. The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text. A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. 2. VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. You may also lend copies, under the same conditions stated above, and you may publicly display copies. 3. 
COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects. If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages. If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document. 4. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version: A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission. B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement. C. State on the Title page the name of the publisher of the Modified Version, as the publisher. D. Preserve all the copyright notices of the Document. E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below. G. 
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. H. Include an unaltered copy of this License. I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence. J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission. K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles. M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version. N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section. O. Preserve any Warranty Disclaimers. If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles. You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. 5. 
COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers. The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements." 6. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. 7. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate. 8. TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. 
In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. 9. TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 10. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See `http://www.gnu.org/copyleft/'. Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. D.1 ADDENDUM: How to use this License for your documents ======================================================== To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page: Copyright (C) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled ``GNU Free Documentation License''. If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with...Texts." line with this: with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST. If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation. If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.  File: make.info, Node: Concept Index, Next: Name Index, Prev: GNU Free Documentation License, Up: Top Index of Concepts ***************** [index] * Menu: * # (comments), in commands: Command Syntax. (line 27) * # (comments), in makefile: Makefile Contents. (line 41) * #include: Automatic Prerequisites. (line 16) * $, in function call: Syntax of Functions. (line 6) * $, in rules: Rule Syntax. (line 32) * $, in variable name: Computed Names. (line 6) * $, in variable reference: Reference. (line 6) * %, in pattern rules: Pattern Intro. 
(line 9) * %, quoting in patsubst: Text Functions. (line 26) * %, quoting in static pattern: Static Usage. (line 37) * %, quoting in vpath: Selective Search. (line 38) * %, quoting with \ (backslash) <1>: Text Functions. (line 26) * %, quoting with \ (backslash) <2>: Static Usage. (line 37) * %, quoting with \ (backslash): Selective Search. (line 38) * * (wildcard character): Wildcards. (line 6) * +, and command execution: Instead of Execution. (line 58) * +, and commands: MAKE Variable. (line 18) * +, and define: Sequences. (line 50) * +=: Appending. (line 6) * +=, expansion: Reading Makefiles. (line 33) * ,v (RCS file extension): Catalogue of Rules. (line 164) * - (in commands): Errors. (line 19) * -, and define: Sequences. (line 50) * --always-make: Options Summary. (line 15) * --assume-new <1>: Options Summary. (line 242) * --assume-new: Instead of Execution. (line 33) * --assume-new, and recursion: Options/Recursion. (line 22) * --assume-old <1>: Options Summary. (line 147) * --assume-old: Avoiding Compilation. (line 6) * --assume-old, and recursion: Options/Recursion. (line 22) * --check-symlink-times: Options Summary. (line 130) * --debug: Options Summary. (line 42) * --directory <1>: Options Summary. (line 26) * --directory: Recursion. (line 20) * --directory, and --print-directory: -w Option. (line 20) * --directory, and recursion: Options/Recursion. (line 22) * --dry-run <1>: Options Summary. (line 140) * --dry-run <2>: Instead of Execution. (line 14) * --dry-run: Echoing. (line 18) * --environment-overrides: Options Summary. (line 78) * --file <1>: Options Summary. (line 84) * --file <2>: Makefile Arguments. (line 6) * --file: Makefile Names. (line 23) * --file, and recursion: Options/Recursion. (line 22) * --help: Options Summary. (line 90) * --ignore-errors <1>: Options Summary. (line 94) * --ignore-errors: Errors. (line 30) * --include-dir <1>: Options Summary. (line 99) * --include-dir: Include. (line 52) * --jobs <1>: Options Summary. (line 106) * --jobs: Parallel. (line 6) * --jobs, and recursion: Options/Recursion. (line 25) * --just-print <1>: Options Summary. (line 139) * --just-print <2>: Instead of Execution. (line 14) * --just-print: Echoing. (line 18) * --keep-going <1>: Options Summary. (line 115) * --keep-going <2>: Testing. (line 16) * --keep-going: Errors. (line 47) * --load-average <1>: Options Summary. (line 122) * --load-average: Parallel. (line 57) * --makefile <1>: Options Summary. (line 85) * --makefile <2>: Makefile Arguments. (line 6) * --makefile: Makefile Names. (line 23) * --max-load <1>: Options Summary. (line 123) * --max-load: Parallel. (line 57) * --new-file <1>: Options Summary. (line 241) * --new-file: Instead of Execution. (line 33) * --new-file, and recursion: Options/Recursion. (line 22) * --no-builtin-rules: Options Summary. (line 175) * --no-builtin-variables: Options Summary. (line 188) * --no-keep-going: Options Summary. (line 203) * --no-print-directory <1>: Options Summary. (line 233) * --no-print-directory: -w Option. (line 20) * --old-file <1>: Options Summary. (line 146) * --old-file: Avoiding Compilation. (line 6) * --old-file, and recursion: Options/Recursion. (line 22) * --print-data-base: Options Summary. (line 155) * --print-directory: Options Summary. (line 225) * --print-directory, and --directory: -w Option. (line 20) * --print-directory, and recursion: -w Option. (line 20) * --print-directory, disabling: -w Option. (line 20) * --question <1>: Options Summary. (line 167) * --question: Instead of Execution. 
(line 25) * --quiet <1>: Options Summary. (line 198) * --quiet: Echoing. (line 24) * --recon <1>: Options Summary. (line 141) * --recon <2>: Instead of Execution. (line 14) * --recon: Echoing. (line 18) * --silent <1>: Options Summary. (line 197) * --silent: Echoing. (line 24) * --stop: Options Summary. (line 204) * --touch <1>: Options Summary. (line 212) * --touch: Instead of Execution. (line 19) * --touch, and recursion: MAKE Variable. (line 34) * --version: Options Summary. (line 220) * --warn-undefined-variables: Options Summary. (line 251) * --what-if <1>: Options Summary. (line 240) * --what-if: Instead of Execution. (line 33) * -B: Options Summary. (line 14) * -b: Options Summary. (line 9) * -C <1>: Options Summary. (line 25) * -C: Recursion. (line 20) * -C, and -w: -w Option. (line 20) * -C, and recursion: Options/Recursion. (line 22) * -d: Options Summary. (line 33) * -e: Options Summary. (line 77) * -e (shell flag): Automatic Prerequisites. (line 66) * -f <1>: Options Summary. (line 83) * -f <2>: Makefile Arguments. (line 6) * -f: Makefile Names. (line 23) * -f, and recursion: Options/Recursion. (line 22) * -h: Options Summary. (line 89) * -I: Options Summary. (line 98) * -i <1>: Options Summary. (line 93) * -i: Errors. (line 30) * -I: Include. (line 52) * -j <1>: Options Summary. (line 105) * -j: Parallel. (line 6) * -j, and archive update: Archive Pitfalls. (line 6) * -j, and recursion: Options/Recursion. (line 25) * -k <1>: Options Summary. (line 114) * -k <2>: Testing. (line 16) * -k: Errors. (line 47) * -L: Options Summary. (line 129) * -l: Options Summary. (line 121) * -l (library search): Libraries/Search. (line 6) * -l (load average): Parallel. (line 57) * -m: Options Summary. (line 10) * -M (to compiler): Automatic Prerequisites. (line 18) * -MM (to GNU compiler): Automatic Prerequisites. (line 68) * -n <1>: Options Summary. (line 138) * -n <2>: Instead of Execution. (line 14) * -n: Echoing. (line 18) * -o <1>: Options Summary. (line 145) * -o: Avoiding Compilation. (line 6) * -o, and recursion: Options/Recursion. (line 22) * -p: Options Summary. (line 154) * -q <1>: Options Summary. (line 166) * -q: Instead of Execution. (line 25) * -R: Options Summary. (line 187) * -r: Options Summary. (line 174) * -S: Options Summary. (line 202) * -s <1>: Options Summary. (line 196) * -s: Echoing. (line 24) * -t <1>: Options Summary. (line 211) * -t: Instead of Execution. (line 19) * -t, and recursion: MAKE Variable. (line 34) * -v: Options Summary. (line 219) * -W: Options Summary. (line 239) * -w: Options Summary. (line 224) * -W: Instead of Execution. (line 33) * -w, and -C: -w Option. (line 20) * -w, and recursion: -w Option. (line 20) * -W, and recursion: Options/Recursion. (line 22) * -w, disabling: -w Option. (line 20) * .a (archives): Archive Suffix Rules. (line 6) * .C: Catalogue of Rules. (line 39) * .c: Catalogue of Rules. (line 35) * .cc: Catalogue of Rules. (line 39) * .ch: Catalogue of Rules. (line 151) * .cpp: Catalogue of Rules. (line 39) * .d: Automatic Prerequisites. (line 81) * .def: Catalogue of Rules. (line 74) * .dvi: Catalogue of Rules. (line 151) * .F: Catalogue of Rules. (line 49) * .f: Catalogue of Rules. (line 49) * .info: Catalogue of Rules. (line 158) * .l: Catalogue of Rules. (line 124) * .LIBPATTERNS, and link libraries: Libraries/Search. (line 6) * .ln: Catalogue of Rules. (line 146) * .mod: Catalogue of Rules. (line 74) * .o: Catalogue of Rules. (line 35) * .p: Catalogue of Rules. (line 45) * .PRECIOUS intermediate files: Chained Rules. 
(line 56) * .r: Catalogue of Rules. (line 49) * .S: Catalogue of Rules. (line 82) * .s: Catalogue of Rules. (line 79) * .sh: Catalogue of Rules. (line 180) * .sym: Catalogue of Rules. (line 74) * .tex: Catalogue of Rules. (line 151) * .texi: Catalogue of Rules. (line 158) * .texinfo: Catalogue of Rules. (line 158) * .txinfo: Catalogue of Rules. (line 158) * .w: Catalogue of Rules. (line 151) * .web: Catalogue of Rules. (line 151) * .y: Catalogue of Rules. (line 120) * :: rules (double-colon): Double-Colon. (line 6) * := <1>: Setting. (line 6) * :=: Flavors. (line 56) * = <1>: Setting. (line 6) * =: Flavors. (line 10) * =, expansion: Reading Makefiles. (line 33) * ? (wildcard character): Wildcards. (line 6) * ?= <1>: Setting. (line 6) * ?=: Flavors. (line 129) * ?=, expansion: Reading Makefiles. (line 33) * @ (in commands): Echoing. (line 6) * @, and define: Sequences. (line 50) * [...] (wildcard characters): Wildcards. (line 6) * \ (backslash), for continuation lines: Simple Makefile. (line 40) * \ (backslash), in commands: Splitting Lines. (line 6) * \ (backslash), to quote % <1>: Text Functions. (line 26) * \ (backslash), to quote % <2>: Static Usage. (line 37) * \ (backslash), to quote %: Selective Search. (line 38) * __.SYMDEF: Archive Symbols. (line 6) * abspath: File Name Functions. (line 121) * algorithm for directory search: Search Algorithm. (line 6) * all (standard target): Goals. (line 72) * appending to variables: Appending. (line 6) * ar: Implicit Variables. (line 41) * archive: Archives. (line 6) * archive member targets: Archive Members. (line 6) * archive symbol directory updating: Archive Symbols. (line 6) * archive, and -j: Archive Pitfalls. (line 6) * archive, and parallel execution: Archive Pitfalls. (line 6) * archive, suffix rule for: Archive Suffix Rules. (line 6) * Arg list too long: Options/Recursion. (line 57) * arguments of functions: Syntax of Functions. (line 6) * as <1>: Implicit Variables. (line 44) * as: Catalogue of Rules. (line 79) * assembly, rule to compile: Catalogue of Rules. (line 79) * automatic generation of prerequisites <1>: Automatic Prerequisites. (line 6) * automatic generation of prerequisites: Include. (line 50) * automatic variables: Automatic Variables. (line 6) * automatic variables in prerequisites: Automatic Variables. (line 17) * backquotes: Shell Function. (line 6) * backslash (\), for continuation lines: Simple Makefile. (line 40) * backslash (\), in commands: Splitting Lines. (line 6) * backslash (\), to quote % <1>: Text Functions. (line 26) * backslash (\), to quote % <2>: Static Usage. (line 37) * backslash (\), to quote %: Selective Search. (line 38) * backslashes in pathnames and wildcard expansion: Wildcard Pitfall. (line 31) * basename: File Name Functions. (line 57) * binary packages: Install Command Categories. (line 80) * broken pipe: Parallel. (line 30) * bugs, reporting: Bugs. (line 6) * built-in special targets: Special Targets. (line 6) * C++, rule to compile: Catalogue of Rules. (line 39) * C, rule to compile: Catalogue of Rules. (line 35) * cc <1>: Implicit Variables. (line 47) * cc: Catalogue of Rules. (line 35) * cd (shell command) <1>: MAKE Variable. (line 16) * cd (shell command): Execution. (line 10) * chains of rules: Chained Rules. (line 6) * check (standard target): Goals. (line 114) * clean (standard target): Goals. (line 75) * clean target <1>: Cleanup. (line 11) * clean target: Simple Makefile. (line 83) * cleaning up: Cleanup. (line 6) * clobber (standard target): Goals. 
(line 86) * co <1>: Implicit Variables. (line 56) * co: Catalogue of Rules. (line 164) * combining rules by prerequisite: Combine By Prerequisite. (line 6) * command line variable definitions, and recursion: Options/Recursion. (line 17) * command line variables: Overriding. (line 6) * command syntax: Command Syntax. (line 6) * commands: Rule Syntax. (line 26) * commands setting shell variables: Execution. (line 10) * commands, backslash (\) in: Splitting Lines. (line 6) * commands, comments in: Command Syntax. (line 27) * commands, echoing: Echoing. (line 6) * commands, empty: Empty Commands. (line 6) * commands, errors in: Errors. (line 6) * commands, execution: Execution. (line 6) * commands, execution in parallel: Parallel. (line 6) * commands, expansion: Shell Function. (line 6) * commands, how to write: Commands. (line 6) * commands, instead of executing: Instead of Execution. (line 6) * commands, introduction to: Rule Introduction. (line 8) * commands, quoting newlines in: Splitting Lines. (line 6) * commands, sequences of: Sequences. (line 6) * commands, splitting: Splitting Lines. (line 6) * commands, using variables in: Variables in Commands. (line 6) * comments, in commands: Command Syntax. (line 27) * comments, in makefile: Makefile Contents. (line 41) * compatibility: Features. (line 6) * compatibility in exporting: Variables/Recursion. (line 105) * compilation, testing: Testing. (line 6) * computed variable name: Computed Names. (line 6) * conditional expansion: Conditional Functions. (line 6) * conditional variable assignment: Flavors. (line 129) * conditionals: Conditionals. (line 6) * continuation lines: Simple Makefile. (line 40) * controlling make: Make Control Functions. (line 6) * conventions for makefiles: Makefile Conventions. (line 6) * ctangle <1>: Implicit Variables. (line 107) * ctangle: Catalogue of Rules. (line 151) * cweave <1>: Implicit Variables. (line 101) * cweave: Catalogue of Rules. (line 151) * data base of make rules: Options Summary. (line 155) * deducing commands (implicit rules): make Deduces. (line 6) * default directories for included makefiles: Include. (line 52) * default goal <1>: Rules. (line 11) * default goal: How Make Works. (line 11) * default makefile name: Makefile Names. (line 6) * default rules, last-resort: Last Resort. (line 6) * define, expansion: Reading Makefiles. (line 33) * defining variables verbatim: Defining. (line 6) * deletion of target files <1>: Interrupts. (line 6) * deletion of target files: Errors. (line 64) * directive: Makefile Contents. (line 28) * directories, printing them: -w Option. (line 6) * directories, updating archive symbol: Archive Symbols. (line 6) * directory part: File Name Functions. (line 17) * directory search (VPATH): Directory Search. (line 6) * directory search (VPATH), and implicit rules: Implicit/Search. (line 6) * directory search (VPATH), and link libraries: Libraries/Search. (line 6) * directory search (VPATH), and shell commands: Commands/Search. (line 6) * directory search algorithm: Search Algorithm. (line 6) * directory search, traditional (GPATH): Search Algorithm. (line 42) * dist (standard target): Goals. (line 106) * distclean (standard target): Goals. (line 84) * dollar sign ($), in function call: Syntax of Functions. (line 6) * dollar sign ($), in rules: Rule Syntax. (line 32) * dollar sign ($), in variable name: Computed Names. (line 6) * dollar sign ($), in variable reference: Reference. (line 6) * DOS, choosing a shell in: Choosing the Shell. 
(line 36) * double-colon rules: Double-Colon. (line 6) * duplicate words, removing: Text Functions. (line 155) * E2BIG: Options/Recursion. (line 57) * echoing of commands: Echoing. (line 6) * editor: Introduction. (line 22) * Emacs (M-x compile): Errors. (line 62) * empty commands: Empty Commands. (line 6) * empty targets: Empty Targets. (line 6) * environment: Environment. (line 6) * environment, and recursion: Variables/Recursion. (line 6) * environment, SHELL in: Choosing the Shell. (line 10) * error, stopping on: Make Control Functions. (line 11) * errors (in commands): Errors. (line 6) * errors with wildcards: Wildcard Pitfall. (line 6) * evaluating makefile syntax: Eval Function. (line 6) * execution, in parallel: Parallel. (line 6) * execution, instead of: Instead of Execution. (line 6) * execution, of commands: Execution. (line 6) * exit status (errors): Errors. (line 6) * exit status of make: Running. (line 18) * expansion, secondary: Secondary Expansion. (line 6) * explicit rule, definition of: Makefile Contents. (line 10) * explicit rule, expansion: Reading Makefiles. (line 62) * explicit rules, secondary expansion of: Secondary Expansion. (line 106) * exporting variables: Variables/Recursion. (line 6) * f77 <1>: Implicit Variables. (line 64) * f77: Catalogue of Rules. (line 49) * FDL, GNU Free Documentation License: GNU Free Documentation License. (line 6) * features of GNU make: Features. (line 6) * features, missing: Missing. (line 6) * file name functions: File Name Functions. (line 6) * file name of makefile: Makefile Names. (line 6) * file name of makefile, how to specify: Makefile Names. (line 30) * file name prefix, adding: File Name Functions. (line 79) * file name suffix: File Name Functions. (line 43) * file name suffix, adding: File Name Functions. (line 68) * file name with wildcards: Wildcards. (line 6) * file name, abspath of: File Name Functions. (line 121) * file name, basename of: File Name Functions. (line 57) * file name, directory part: File Name Functions. (line 17) * file name, nondirectory part: File Name Functions. (line 27) * file name, realpath of: File Name Functions. (line 114) * files, assuming new: Instead of Execution. (line 33) * files, assuming old: Avoiding Compilation. (line 6) * files, avoiding recompilation of: Avoiding Compilation. (line 6) * files, intermediate: Chained Rules. (line 16) * filtering out words: Text Functions. (line 132) * filtering words: Text Functions. (line 114) * finding strings: Text Functions. (line 103) * flags: Options Summary. (line 6) * flags for compilers: Implicit Variables. (line 6) * flavor of variable: Flavor Function. (line 6) * flavors of variables: Flavors. (line 6) * FORCE: Force Targets. (line 6) * force targets: Force Targets. (line 6) * Fortran, rule to compile: Catalogue of Rules. (line 49) * functions: Functions. (line 6) * functions, for controlling make: Make Control Functions. (line 6) * functions, for file names: File Name Functions. (line 6) * functions, for text: Text Functions. (line 6) * functions, syntax of: Syntax of Functions. (line 6) * functions, user defined: Call Function. (line 6) * g++ <1>: Implicit Variables. (line 53) * g++: Catalogue of Rules. (line 39) * gcc: Catalogue of Rules. (line 35) * generating prerequisites automatically <1>: Automatic Prerequisites. (line 6) * generating prerequisites automatically: Include. (line 50) * get <1>: Implicit Variables. (line 67) * get: Catalogue of Rules. (line 173) * globbing (wildcards): Wildcards. (line 6) * goal: How Make Works. 
(line 11) * goal, default <1>: Rules. (line 11) * goal, default: How Make Works. (line 11) * goal, how to specify: Goals. (line 6) * home directory: Wildcards. (line 11) * IEEE Standard 1003.2: Overview. (line 13) * ifdef, expansion: Reading Makefiles. (line 51) * ifeq, expansion: Reading Makefiles. (line 51) * ifndef, expansion: Reading Makefiles. (line 51) * ifneq, expansion: Reading Makefiles. (line 51) * implicit rule: Implicit Rules. (line 6) * implicit rule, and directory search: Implicit/Search. (line 6) * implicit rule, and VPATH: Implicit/Search. (line 6) * implicit rule, definition of: Makefile Contents. (line 16) * implicit rule, expansion: Reading Makefiles. (line 62) * implicit rule, how to use: Using Implicit. (line 6) * implicit rule, introduction to: make Deduces. (line 6) * implicit rule, predefined: Catalogue of Rules. (line 6) * implicit rule, search algorithm: Implicit Rule Search. (line 6) * implicit rules, secondary expansion of: Secondary Expansion. (line 146) * included makefiles, default directories: Include. (line 52) * including (MAKEFILE_LIST variable): MAKEFILE_LIST Variable. (line 6) * including (MAKEFILES variable): MAKEFILES Variable. (line 6) * including other makefiles: Include. (line 6) * incompatibilities: Missing. (line 6) * Info, rule to format: Catalogue of Rules. (line 158) * install (standard target): Goals. (line 92) * intermediate files: Chained Rules. (line 16) * intermediate files, preserving: Chained Rules. (line 46) * intermediate targets, explicit: Special Targets. (line 44) * interrupt: Interrupts. (line 6) * job slots: Parallel. (line 6) * job slots, and recursion: Options/Recursion. (line 25) * jobs, limiting based on load: Parallel. (line 57) * joining lists of words: File Name Functions. (line 90) * killing (interruption): Interrupts. (line 6) * last-resort default rules: Last Resort. (line 6) * ld: Catalogue of Rules. (line 86) * lex <1>: Implicit Variables. (line 71) * lex: Catalogue of Rules. (line 124) * Lex, rule to run: Catalogue of Rules. (line 124) * libraries for linking, directory search: Libraries/Search. (line 6) * library archive, suffix rule for: Archive Suffix Rules. (line 6) * limiting jobs based on load: Parallel. (line 57) * link libraries, and directory search: Libraries/Search. (line 6) * link libraries, patterns matching: Libraries/Search. (line 6) * linking, predefined rule for: Catalogue of Rules. (line 86) * lint <1>: Implicit Variables. (line 78) * lint: Catalogue of Rules. (line 146) * lint, rule to run: Catalogue of Rules. (line 146) * list of all prerequisites: Automatic Variables. (line 61) * list of changed prerequisites: Automatic Variables. (line 51) * load average: Parallel. (line 57) * loops in variable expansion: Flavors. (line 44) * lpr (shell command) <1>: Empty Targets. (line 25) * lpr (shell command): Wildcard Examples. (line 21) * m2c <1>: Implicit Variables. (line 81) * m2c: Catalogue of Rules. (line 74) * macro: Using Variables. (line 10) * make depend: Automatic Prerequisites. (line 37) * makefile: Introduction. (line 7) * makefile name: Makefile Names. (line 6) * makefile name, how to specify: Makefile Names. (line 30) * makefile rule parts: Rule Introduction. (line 6) * makefile syntax, evaluating: Eval Function. (line 6) * makefile, and MAKEFILES variable: MAKEFILES Variable. (line 6) * makefile, conventions for: Makefile Conventions. (line 6) * makefile, how make processes: How Make Works. (line 6) * makefile, how to write: Makefiles. (line 6) * makefile, including: Include. 
(line 6) * makefile, overriding: Overriding Makefiles. (line 6) * makefile, parsing: Reading Makefiles. (line 6) * makefile, remaking of: Remaking Makefiles. (line 6) * makefile, simple: Simple Makefile. (line 6) * makefiles, and MAKEFILE_LIST variable: MAKEFILE_LIST Variable. (line 6) * makefiles, and special variables: Special Variables. (line 6) * makeinfo <1>: Implicit Variables. (line 88) * makeinfo: Catalogue of Rules. (line 158) * match-anything rule: Match-Anything Rules. (line 6) * match-anything rule, used to override: Overriding Makefiles. (line 12) * missing features: Missing. (line 6) * mistakes with wildcards: Wildcard Pitfall. (line 6) * modified variable reference: Substitution Refs. (line 6) * Modula-2, rule to compile: Catalogue of Rules. (line 74) * mostlyclean (standard target): Goals. (line 78) * multiple rules for one target: Multiple Rules. (line 6) * multiple rules for one target (::): Double-Colon. (line 6) * multiple targets: Multiple Targets. (line 6) * multiple targets, in pattern rule: Pattern Intro. (line 49) * name of makefile: Makefile Names. (line 6) * name of makefile, how to specify: Makefile Names. (line 30) * nested variable reference: Computed Names. (line 6) * newline, quoting, in commands: Splitting Lines. (line 6) * newline, quoting, in makefile: Simple Makefile. (line 40) * nondirectory part: File Name Functions. (line 27) * normal prerequisites: Prerequisite Types. (line 6) * OBJ: Variables Simplify. (line 20) * obj: Variables Simplify. (line 20) * OBJECTS: Variables Simplify. (line 20) * objects: Variables Simplify. (line 14) * OBJS: Variables Simplify. (line 20) * objs: Variables Simplify. (line 20) * old-fashioned suffix rules: Suffix Rules. (line 6) * options: Options Summary. (line 6) * options, and recursion: Options/Recursion. (line 6) * options, setting from environment: Options/Recursion. (line 81) * options, setting in makefiles: Options/Recursion. (line 81) * order of pattern rules: Pattern Intro. (line 57) * order-only prerequisites: Prerequisite Types. (line 6) * origin of variable: Origin Function. (line 6) * overriding makefiles: Overriding Makefiles. (line 6) * overriding variables with arguments: Overriding. (line 6) * overriding with override: Override Directive. (line 6) * parallel execution: Parallel. (line 6) * parallel execution, and archive update: Archive Pitfalls. (line 6) * parallel execution, overriding: Special Targets. (line 135) * parts of makefile rule: Rule Introduction. (line 6) * Pascal, rule to compile: Catalogue of Rules. (line 45) * pattern rule: Pattern Intro. (line 6) * pattern rule, expansion: Reading Makefiles. (line 62) * pattern rules, order of: Pattern Intro. (line 57) * pattern rules, static (not implicit): Static Pattern. (line 6) * pattern rules, static, syntax of: Static Usage. (line 6) * pattern-specific variables: Pattern-specific. (line 6) * pc <1>: Implicit Variables. (line 84) * pc: Catalogue of Rules. (line 45) * phony targets: Phony Targets. (line 6) * pitfalls of wildcards: Wildcard Pitfall. (line 6) * portability: Features. (line 6) * POSIX: Overview. (line 13) * POSIX.2: Options/Recursion. (line 60) * post-installation commands: Install Command Categories. (line 6) * pre-installation commands: Install Command Categories. (line 6) * precious targets: Special Targets. (line 29) * predefined rules and variables, printing: Options Summary. (line 155) * prefix, adding: File Name Functions. (line 79) * prerequisite: Rules. (line 6) * prerequisite pattern, implicit: Pattern Intro. 
(line 22) * prerequisite pattern, static (not implicit): Static Usage. (line 30) * prerequisite types: Prerequisite Types. (line 6) * prerequisite, expansion: Reading Makefiles. (line 62) * prerequisites: Rule Syntax. (line 46) * prerequisites, and automatic variables: Automatic Variables. (line 17) * prerequisites, automatic generation <1>: Automatic Prerequisites. (line 6) * prerequisites, automatic generation: Include. (line 50) * prerequisites, introduction to: Rule Introduction. (line 8) * prerequisites, list of all: Automatic Variables. (line 61) * prerequisites, list of changed: Automatic Variables. (line 51) * prerequisites, normal: Prerequisite Types. (line 6) * prerequisites, order-only: Prerequisite Types. (line 6) * prerequisites, varying (static pattern): Static Pattern. (line 6) * preserving intermediate files: Chained Rules. (line 46) * preserving with .PRECIOUS <1>: Chained Rules. (line 56) * preserving with .PRECIOUS: Special Targets. (line 29) * preserving with .SECONDARY: Special Targets. (line 49) * print (standard target): Goals. (line 97) * print target <1>: Empty Targets. (line 25) * print target: Wildcard Examples. (line 21) * printing directories: -w Option. (line 6) * printing messages: Make Control Functions. (line 43) * printing of commands: Echoing. (line 6) * printing user warnings: Make Control Functions. (line 35) * problems and bugs, reporting: Bugs. (line 6) * problems with wildcards: Wildcard Pitfall. (line 6) * processing a makefile: How Make Works. (line 6) * question mode: Instead of Execution. (line 25) * quoting %, in patsubst: Text Functions. (line 26) * quoting %, in static pattern: Static Usage. (line 37) * quoting %, in vpath: Selective Search. (line 38) * quoting newline, in commands: Splitting Lines. (line 6) * quoting newline, in makefile: Simple Makefile. (line 40) * Ratfor, rule to compile: Catalogue of Rules. (line 49) * RCS, rule to extract from: Catalogue of Rules. (line 164) * reading makefiles: Reading Makefiles. (line 6) * README: Makefile Names. (line 9) * realclean (standard target): Goals. (line 85) * realpath: File Name Functions. (line 114) * recompilation: Introduction. (line 22) * recompilation, avoiding: Avoiding Compilation. (line 6) * recording events with empty targets: Empty Targets. (line 6) * recursion: Recursion. (line 6) * recursion, and -C: Options/Recursion. (line 22) * recursion, and -f: Options/Recursion. (line 22) * recursion, and -j: Options/Recursion. (line 25) * recursion, and -o: Options/Recursion. (line 22) * recursion, and -t: MAKE Variable. (line 34) * recursion, and -w: -w Option. (line 20) * recursion, and -W: Options/Recursion. (line 22) * recursion, and command line variable definitions: Options/Recursion. (line 17) * recursion, and environment: Variables/Recursion. (line 6) * recursion, and MAKE variable: MAKE Variable. (line 6) * recursion, and MAKEFILES variable: MAKEFILES Variable. (line 14) * recursion, and options: Options/Recursion. (line 6) * recursion, and printing directories: -w Option. (line 6) * recursion, and variables: Variables/Recursion. (line 6) * recursion, level of: Variables/Recursion. (line 115) * recursive variable expansion <1>: Flavors. (line 6) * recursive variable expansion: Using Variables. (line 6) * recursively expanded variables: Flavors. (line 6) * reference to variables <1>: Advanced. (line 6) * reference to variables: Reference. (line 6) * relinking: How Make Works. (line 46) * remaking makefiles: Remaking Makefiles. (line 6) * removal of target files <1>: Interrupts. 
(line 6) * removal of target files: Errors. (line 64) * removing duplicate words: Text Functions. (line 155) * removing targets on failure: Special Targets. (line 68) * removing, to clean up: Cleanup. (line 6) * reporting bugs: Bugs. (line 6) * rm: Implicit Variables. (line 110) * rm (shell command) <1>: Errors. (line 27) * rm (shell command) <2>: Phony Targets. (line 20) * rm (shell command) <3>: Wildcard Examples. (line 12) * rm (shell command): Simple Makefile. (line 83) * rule commands: Commands. (line 6) * rule prerequisites: Rule Syntax. (line 46) * rule syntax: Rule Syntax. (line 6) * rule targets: Rule Syntax. (line 18) * rule, double-colon (::): Double-Colon. (line 6) * rule, explicit, definition of: Makefile Contents. (line 10) * rule, how to write: Rules. (line 6) * rule, implicit: Implicit Rules. (line 6) * rule, implicit, and directory search: Implicit/Search. (line 6) * rule, implicit, and VPATH: Implicit/Search. (line 6) * rule, implicit, chains of: Chained Rules. (line 6) * rule, implicit, definition of: Makefile Contents. (line 16) * rule, implicit, how to use: Using Implicit. (line 6) * rule, implicit, introduction to: make Deduces. (line 6) * rule, implicit, predefined: Catalogue of Rules. (line 6) * rule, introduction to: Rule Introduction. (line 6) * rule, multiple for one target: Multiple Rules. (line 6) * rule, no commands or prerequisites: Force Targets. (line 6) * rule, pattern: Pattern Intro. (line 6) * rule, static pattern: Static Pattern. (line 6) * rule, static pattern versus implicit: Static versus Implicit. (line 6) * rule, with multiple targets: Multiple Targets. (line 6) * rules, and $: Rule Syntax. (line 32) * s. (SCCS file prefix): Catalogue of Rules. (line 173) * SCCS, rule to extract from: Catalogue of Rules. (line 173) * search algorithm, implicit rule: Implicit Rule Search. (line 6) * search path for prerequisites (VPATH): Directory Search. (line 6) * search path for prerequisites (VPATH), and implicit rules: Implicit/Search. (line 6) * search path for prerequisites (VPATH), and link libraries: Libraries/Search. (line 6) * searching for strings: Text Functions. (line 103) * secondary expansion: Secondary Expansion. (line 6) * secondary expansion and explicit rules: Secondary Expansion. (line 106) * secondary expansion and implicit rules: Secondary Expansion. (line 146) * secondary expansion and static pattern rules: Secondary Expansion. (line 138) * secondary files: Chained Rules. (line 46) * secondary targets: Special Targets. (line 49) * sed (shell command): Automatic Prerequisites. (line 73) * selecting a word: Text Functions. (line 159) * selecting word lists: Text Functions. (line 168) * sequences of commands: Sequences. (line 6) * setting options from environment: Options/Recursion. (line 81) * setting options in makefiles: Options/Recursion. (line 81) * setting variables: Setting. (line 6) * several rules for one target: Multiple Rules. (line 6) * several targets in a rule: Multiple Targets. (line 6) * shar (standard target): Goals. (line 103) * shell command: Simple Makefile. (line 72) * shell command, and directory search: Commands/Search. (line 6) * shell command, execution: Execution. (line 6) * shell command, function for: Shell Function. (line 6) * shell file name pattern (in include): Include. (line 13) * shell variables, setting in commands: Execution. (line 10) * shell wildcards (in include): Include. (line 13) * shell, choosing the: Choosing the Shell. (line 6) * SHELL, exported value: Variables/Recursion. 
(line 23) * SHELL, import from environment: Environment. (line 37) * shell, in DOS and Windows: Choosing the Shell. (line 36) * SHELL, MS-DOS specifics: Choosing the Shell. (line 42) * SHELL, value of: Choosing the Shell. (line 6) * signal: Interrupts. (line 6) * silent operation: Echoing. (line 6) * simple makefile: Simple Makefile. (line 6) * simple variable expansion: Using Variables. (line 6) * simplifying with variables: Variables Simplify. (line 6) * simply expanded variables: Flavors. (line 56) * sorting words: Text Functions. (line 146) * spaces, in variable values: Flavors. (line 103) * spaces, stripping: Text Functions. (line 80) * special targets: Special Targets. (line 6) * special variables: Special Variables. (line 6) * specifying makefile name: Makefile Names. (line 30) * splitting commands: Splitting Lines. (line 6) * standard input: Parallel. (line 30) * standards conformance: Overview. (line 13) * standards for makefiles: Makefile Conventions. (line 6) * static pattern rule: Static Pattern. (line 6) * static pattern rule, syntax of: Static Usage. (line 6) * static pattern rule, versus implicit: Static versus Implicit. (line 6) * static pattern rules, secondary expansion of: Secondary Expansion. (line 138) * stem <1>: Pattern Match. (line 6) * stem: Static Usage. (line 17) * stem, variable for: Automatic Variables. (line 77) * stopping make: Make Control Functions. (line 11) * strings, searching for: Text Functions. (line 103) * stripping whitespace: Text Functions. (line 80) * sub-make: Variables/Recursion. (line 6) * subdirectories, recursion for: Recursion. (line 6) * substitution variable reference: Substitution Refs. (line 6) * suffix rule: Suffix Rules. (line 6) * suffix rule, for archive: Archive Suffix Rules. (line 6) * suffix, adding: File Name Functions. (line 68) * suffix, function to find: File Name Functions. (line 43) * suffix, substituting in variables: Substitution Refs. (line 6) * switches: Options Summary. (line 6) * symbol directories, updating archive: Archive Symbols. (line 6) * syntax of commands: Command Syntax. (line 6) * syntax of rules: Rule Syntax. (line 6) * tab character (in commands): Rule Syntax. (line 26) * tabs in rules: Rule Introduction. (line 21) * TAGS (standard target): Goals. (line 111) * tangle <1>: Implicit Variables. (line 104) * tangle: Catalogue of Rules. (line 151) * tar (standard target): Goals. (line 100) * target: Rules. (line 6) * target pattern, implicit: Pattern Intro. (line 9) * target pattern, static (not implicit): Static Usage. (line 17) * target, deleting on error: Errors. (line 64) * target, deleting on interrupt: Interrupts. (line 6) * target, expansion: Reading Makefiles. (line 62) * target, multiple in pattern rule: Pattern Intro. (line 49) * target, multiple rules for one: Multiple Rules. (line 6) * target, touching: Instead of Execution. (line 19) * target-specific variables: Target-specific. (line 6) * targets: Rule Syntax. (line 18) * targets without a file: Phony Targets. (line 6) * targets, built-in special: Special Targets. (line 6) * targets, empty: Empty Targets. (line 6) * targets, force: Force Targets. (line 6) * targets, introduction to: Rule Introduction. (line 8) * targets, multiple: Multiple Targets. (line 6) * targets, phony: Phony Targets. (line 6) * terminal rule: Match-Anything Rules. (line 6) * test (standard target): Goals. (line 115) * testing compilation: Testing. (line 6) * tex <1>: Implicit Variables. (line 91) * tex: Catalogue of Rules. (line 151) * TeX, rule to run: Catalogue of Rules. 
(line 151) * texi2dvi <1>: Implicit Variables. (line 95) * texi2dvi: Catalogue of Rules. (line 158) * Texinfo, rule to format: Catalogue of Rules. (line 158) * tilde (~): Wildcards. (line 11) * touch (shell command) <1>: Empty Targets. (line 25) * touch (shell command): Wildcard Examples. (line 21) * touching files: Instead of Execution. (line 19) * traditional directory search (GPATH): Search Algorithm. (line 42) * types of prerequisites: Prerequisite Types. (line 6) * undefined variables, warning message: Options Summary. (line 251) * updating archive symbol directories: Archive Symbols. (line 6) * updating makefiles: Remaking Makefiles. (line 6) * user defined functions: Call Function. (line 6) * value: Using Variables. (line 6) * value, how a variable gets it: Values. (line 6) * variable: Using Variables. (line 6) * variable definition: Makefile Contents. (line 22) * variable references in commands: Variables in Commands. (line 6) * variables: Variables Simplify. (line 6) * variables, $ in name: Computed Names. (line 6) * variables, and implicit rule: Automatic Variables. (line 6) * variables, appending to: Appending. (line 6) * variables, automatic: Automatic Variables. (line 6) * variables, command line: Overriding. (line 6) * variables, command line, and recursion: Options/Recursion. (line 17) * variables, computed names: Computed Names. (line 6) * variables, conditional assignment: Flavors. (line 129) * variables, defining verbatim: Defining. (line 6) * variables, environment <1>: Environment. (line 6) * variables, environment: Variables/Recursion. (line 6) * variables, exporting: Variables/Recursion. (line 6) * variables, flavor of: Flavor Function. (line 6) * variables, flavors: Flavors. (line 6) * variables, how they get their values: Values. (line 6) * variables, how to reference: Reference. (line 6) * variables, loops in expansion: Flavors. (line 44) * variables, modified reference: Substitution Refs. (line 6) * variables, nested references: Computed Names. (line 6) * variables, origin of: Origin Function. (line 6) * variables, overriding: Override Directive. (line 6) * variables, overriding with arguments: Overriding. (line 6) * variables, pattern-specific: Pattern-specific. (line 6) * variables, recursively expanded: Flavors. (line 6) * variables, setting: Setting. (line 6) * variables, simply expanded: Flavors. (line 56) * variables, spaces in values: Flavors. (line 103) * variables, substituting suffix in: Substitution Refs. (line 6) * variables, substitution reference: Substitution Refs. (line 6) * variables, target-specific: Target-specific. (line 6) * variables, unexpanded value: Value Function. (line 6) * variables, warning for undefined: Options Summary. (line 251) * varying prerequisites: Static Pattern. (line 6) * verbatim variable definition: Defining. (line 6) * vpath: Directory Search. (line 6) * VPATH, and implicit rules: Implicit/Search. (line 6) * VPATH, and link libraries: Libraries/Search. (line 6) * warnings, printing: Make Control Functions. (line 35) * weave <1>: Implicit Variables. (line 98) * weave: Catalogue of Rules. (line 151) * Web, rule to run: Catalogue of Rules. (line 151) * what if: Instead of Execution. (line 33) * whitespace, in variable values: Flavors. (line 103) * whitespace, stripping: Text Functions. (line 80) * wildcard: Wildcards. (line 6) * wildcard pitfalls: Wildcard Pitfall. (line 6) * wildcard, function: File Name Functions. (line 107) * wildcard, in archive member: Archive Members. (line 36) * wildcard, in include: Include. 
(line 13) * wildcards and MS-DOS/MS-Windows backslashes: Wildcard Pitfall. (line 31) * Windows, choosing a shell in: Choosing the Shell. (line 36) * word, selecting a: Text Functions. (line 159) * words, extracting first: Text Functions. (line 184) * words, extracting last: Text Functions. (line 197) * words, filtering: Text Functions. (line 114) * words, filtering out: Text Functions. (line 132) * words, finding number: Text Functions. (line 180) * words, iterating over: Foreach Function. (line 6) * words, joining lists: File Name Functions. (line 90) * words, removing duplicates: Text Functions. (line 155) * words, selecting lists of: Text Functions. (line 168) * writing rule commands: Commands. (line 6) * writing rules: Rules. (line 6) * yacc <1>: Implicit Variables. (line 75) * yacc <2>: Catalogue of Rules. (line 120) * yacc: Sequences. (line 18) * Yacc, rule to run: Catalogue of Rules. (line 120) * ~ (tilde): Wildcards. (line 11)  File: make.info, Node: Name Index, Prev: Concept Index, Up: Top Index of Functions, Variables, & Directives ******************************************* [index] * Menu: * $%: Automatic Variables. (line 37) * $(%D): Automatic Variables. (line 129) * $(%F): Automatic Variables. (line 130) * $(*D): Automatic Variables. (line 124) * $(*F): Automatic Variables. (line 125) * $(+D): Automatic Variables. (line 147) * $(+F): Automatic Variables. (line 148) * $(: Last Resort. (line 23) * .DEFAULT: Special Targets. (line 20) * .DEFAULT, and empty commands: Empty Commands. (line 16) * .DEFAULT_GOAL (define default goal): Special Variables. (line 10) * .DELETE_ON_ERROR <1>: Errors. (line 64) * .DELETE_ON_ERROR: Special Targets. (line 67) * .EXPORT_ALL_VARIABLES <1>: Variables/Recursion. (line 99) * .EXPORT_ALL_VARIABLES: Special Targets. (line 129) * .FEATURES (list of supported features): Special Variables. (line 65) * .IGNORE <1>: Errors. (line 30) * .IGNORE: Special Targets. (line 74) * .INCLUDE_DIRS (list of include directories): Special Variables. (line 98) * .INTERMEDIATE: Special Targets. (line 43) * .LIBPATTERNS: Libraries/Search. (line 6) * .LOW_RESOLUTION_TIME: Special Targets. (line 86) * .NOTPARALLEL: Special Targets. (line 134) * .PHONY <1>: Special Targets. (line 8) * .PHONY: Phony Targets. (line 22) * .POSIX: Options/Recursion. (line 60) * .PRECIOUS <1>: Interrupts. (line 22) * .PRECIOUS: Special Targets. (line 28) * .SECONDARY: Special Targets. (line 48) * .SECONDEXPANSION <1>: Special Targets. (line 57) * .SECONDEXPANSION: Secondary Expansion. (line 6) * .SILENT <1>: Echoing. (line 24) * .SILENT: Special Targets. (line 116) * .SUFFIXES <1>: Suffix Rules. (line 61) * .SUFFIXES: Special Targets. (line 15) * .VARIABLES (list of variables): Special Variables. (line 56) * /usr/gnu/include: Include. (line 52) * /usr/include: Include. (line 52) * /usr/local/include: Include. (line 52) * < (automatic variable): Automatic Variables. (line 43) * : Flavors. (line 84) * MAKE: MAKE Variable. (line 6) * MAKE_RESTARTS (number of times make has restarted): Special Variables. (line 49) * MAKE_VERSION: Features. (line 197) * MAKECMDGOALS: Goals. (line 30) * makefile: Makefile Names. (line 7) * Makefile: Makefile Names. (line 7) * MAKEFILE_LIST: MAKEFILE_LIST Variable. (line 6) * MAKEFILES <1>: Variables/Recursion. (line 127) * MAKEFILES: MAKEFILES Variable. (line 6) * MAKEFLAGS: Options/Recursion. (line 6) * MAKEINFO: Implicit Variables. (line 87) * MAKELEVEL <1>: Flavors. (line 84) * MAKELEVEL: Variables/Recursion. (line 115) * MAKEOVERRIDES: Options/Recursion. 
(line 49) * MAKESHELL (MS-DOS alternative to SHELL): Choosing the Shell. (line 25) * MFLAGS: Options/Recursion. (line 65) * notdir: File Name Functions. (line 27) * or: Conditional Functions. (line 37) * origin: Origin Function. (line 6) * OUTPUT_OPTION: Catalogue of Rules. (line 202) * override: Override Directive. (line 6) * patsubst <1>: Text Functions. (line 18) * patsubst: Substitution Refs. (line 28) * PC: Implicit Variables. (line 84) * PFLAGS: Implicit Variables. (line 153) * prefix: Directory Variables. (line 25) * realpath: File Name Functions. (line 114) * RFLAGS: Implicit Variables. (line 156) * RM: Implicit Variables. (line 110) * sbindir: Directory Variables. (line 59) * shell: Shell Function. (line 6) * SHELL: Choosing the Shell. (line 6) * SHELL (command execution): Execution. (line 6) * sort: Text Functions. (line 146) * strip: Text Functions. (line 80) * subst <1>: Text Functions. (line 9) * subst: Multiple Targets. (line 28) * suffix: File Name Functions. (line 43) * SUFFIXES: Suffix Rules. (line 81) * TANGLE: Implicit Variables. (line 104) * TEX: Implicit Variables. (line 91) * TEXI2DVI: Implicit Variables. (line 94) * unexport: Variables/Recursion. (line 45) * value: Value Function. (line 6) * vpath: Selective Search. (line 6) * VPATH: General Search. (line 6) * vpath: Directory Search. (line 6) * VPATH: Directory Search. (line 6) * warning: Make Control Functions. (line 35) * WEAVE: Implicit Variables. (line 98) * wildcard <1>: File Name Functions. (line 107) * wildcard: Wildcard Function. (line 6) * word: Text Functions. (line 159) * wordlist: Text Functions. (line 168) * words: Text Functions. (line 180) * YACC: Implicit Variables. (line 74) * YFLAGS: Implicit Variables. (line 150) * | (automatic variable): Automatic Variables. (line 69) make-doc-non-dfsg-3.81.orig/doc/make.texi0000644000175000017500000146474610416557457020470 0ustar srivastasrivasta\input texinfo @c -*- Texinfo -*- @c %**start of header @setfilename make.info @include version.texi @set EDITION 0.70 @set RCSID $Id: make.texi,v 1.45 2006/04/01 06:36:40 psmith Exp $ @settitle GNU @code{make} @setchapternewpage odd @c Combine the variable and function indices: @syncodeindex vr fn @c Combine the program and concept indices: @syncodeindex pg cp @c FSF publishers: format makebook.texi instead of using this file directly. @c ISBN provided by Lisa M. Opus Goldstein , 5 May 2004 @set ISBN 1-882114-83-5 @c %**end of header @copying This file documents the GNU @code{make} utility, which determines automatically which pieces of a large program need to be recompiled, and issues the commands to recompile them. This is Edition @value{EDITION}, last updated @value{UPDATED}, of @cite{The GNU Make Manual}, for GNU @code{make} version @value{VERSION}. Copyright @copyright{} 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2002, 2003, 2004, 2005, 2006 Free Software Foundation, Inc. @quotation Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, with the Front-Cover Texts being ``A GNU Manual,'' and with the Back-Cover Texts as in (a) below. A copy of the license is included in the section entitled ``GNU Free Documentation License.'' (a) The FSF's Back-Cover Text is: ``You have freedom to copy and modify this GNU Manual, like GNU software. 
Copies published by the Free Software Foundation raise funds for GNU development.'' @end quotation @end copying @c finalout @c ISPELL CHECK: done, 10 June 1993 --roland @c ISPELL CHECK: done, 2000-06-25 --Martin Buchholz @dircategory GNU Packages @direntry * Make: (make). Remake files automatically. @end direntry @iftex @shorttitlepage GNU Make @end iftex @titlepage @title GNU Make @subtitle A Program for Directing Recompilation @subtitle GNU @code{make} Version @value{VERSION} @subtitle @value{UPDATED-MONTH} @author Richard M. Stallman, Roland McGrath, Paul D. Smith @page @vskip 0pt plus 1filll @insertcopying @sp 2 Published by the Free Software Foundation @* 51 Franklin St. -- Fifth Floor @* Boston, MA 02110-1301 USA @* ISBN @value{ISBN} @* @sp 2 Cover art by Etienne Suvasa. @end titlepage @summarycontents @contents @ifnottex @node Top, Overview, (dir), (dir) @top GNU @code{make} @insertcopying @end ifnottex @menu * Overview:: Overview of @code{make}. * Introduction:: An introduction to @code{make}. * Makefiles:: Makefiles tell @code{make} what to do. * Rules:: Rules describe when a file must be remade. * Commands:: Commands say how to remake a file. * Using Variables:: You can use variables to avoid repetition. * Conditionals:: Use or ignore parts of the makefile based on the values of variables. * Functions:: Many powerful ways to manipulate text. * Invoking make: Running. How to invoke @code{make} on the command line. * Implicit Rules:: Use implicit rules to treat many files alike, based on their file names. * Archives:: How @code{make} can update library archives. * Features:: Features GNU @code{make} has over other @code{make}s. * Missing:: What GNU @code{make} lacks from other @code{make}s. * Makefile Conventions:: Conventions for writing makefiles for GNU programs. * Quick Reference:: A quick reference for experienced users. * Error Messages:: A list of common errors generated by @code{make}. * Complex Makefile:: A real example of a straightforward, but nontrivial, makefile. * GNU Free Documentation License:: License for copying this manual * Concept Index:: Index of Concepts * Name Index:: Index of Functions, Variables, & Directives @detailmenu --- The Detailed Node Listing --- Overview of @code{make} * Preparing:: Preparing and Running Make * Reading:: On Reading this Text * Bugs:: Problems and Bugs An Introduction to Makefiles * Rule Introduction:: What a rule looks like. * Simple Makefile:: A Simple Makefile * How Make Works:: How @code{make} Processes This Makefile * Variables Simplify:: Variables Make Makefiles Simpler * make Deduces:: Letting @code{make} Deduce the Commands * Combine By Prerequisite:: Another Style of Makefile * Cleanup:: Rules for Cleaning the Directory Writing Makefiles * Makefile Contents:: What makefiles contain. * Makefile Names:: How to name your makefile. * Include:: How one makefile can use another makefile. * MAKEFILES Variable:: The environment can specify extra makefiles. * MAKEFILE_LIST Variable:: Discover which makefiles have been read. * Special Variables:: Other special variables. * Remaking Makefiles:: How makefiles get remade. * Overriding Makefiles:: How to override part of one makefile with another makefile. * Reading Makefiles:: How makefiles are parsed. * Secondary Expansion:: How and when secondary expansion is performed. Writing Rules * Rule Example:: An example explained. * Rule Syntax:: General syntax explained. * Prerequisite Types:: There are two types of prerequisites. * Wildcards:: Using wildcard characters such as `*'. 
* Directory Search:: Searching other directories for source files. * Phony Targets:: Using a target that is not a real file's name. * Force Targets:: You can use a target without commands or prerequisites to mark other targets as phony. * Empty Targets:: When only the date matters and the files are empty. * Special Targets:: Targets with special built-in meanings. * Multiple Targets:: When to make use of several targets in a rule. * Multiple Rules:: How to use several rules with the same target. * Static Pattern:: Static pattern rules apply to multiple targets and can vary the prerequisites according to the target name. * Double-Colon:: How to use a special kind of rule to allow several independent rules for one target. * Automatic Prerequisites:: How to automatically generate rules giving prerequisites from source files themselves. Using Wildcard Characters in File Names * Wildcard Examples:: Several examples * Wildcard Pitfall:: Problems to avoid. * Wildcard Function:: How to cause wildcard expansion where it does not normally take place. Searching Directories for Prerequisites * General Search:: Specifying a search path that applies to every prerequisite. * Selective Search:: Specifying a search path for a specified class of names. * Search Algorithm:: When and how search paths are applied. * Commands/Search:: How to write shell commands that work together with search paths. * Implicit/Search:: How search paths affect implicit rules. * Libraries/Search:: Directory search for link libraries. Static Pattern Rules * Static Usage:: The syntax of static pattern rules. * Static versus Implicit:: When are they better than implicit rules? Writing the Commands in Rules * Command Syntax:: Command syntax features and pitfalls. * Echoing:: How to control when commands are echoed. * Execution:: How commands are executed. * Parallel:: How commands can be executed in parallel. * Errors:: What happens after a command execution error. * Interrupts:: What happens when a command is interrupted. * Recursion:: Invoking @code{make} from makefiles. * Sequences:: Defining canned sequences of commands. * Empty Commands:: Defining useful, do-nothing commands. Command Syntax * Splitting Lines:: Breaking long command lines for readability. * Variables in Commands:: Using @code{make} variables in commands. Command Execution * Choosing the Shell:: How @code{make} chooses the shell used to run commands. Recursive Use of @code{make} * MAKE Variable:: The special effects of using @samp{$(MAKE)}. * Variables/Recursion:: How to communicate variables to a sub-@code{make}. * Options/Recursion:: How to communicate options to a sub-@code{make}. * -w Option:: How the @samp{-w} or @samp{--print-directory} option helps debug use of recursive @code{make} commands. How to Use Variables * Reference:: How to use the value of a variable. * Flavors:: Variables come in two flavors. * Advanced:: Advanced features for referencing a variable. * Values:: All the ways variables get their values. * Setting:: How to set a variable in the makefile. * Appending:: How to append more text to the old value of a variable. * Override Directive:: How to set a variable in the makefile even if the user has set it with a command argument. * Defining:: An alternate way to set a variable to a verbatim string. * Environment:: Variable values can come from the environment. * Target-specific:: Variable values can be defined on a per-target basis. * Pattern-specific:: Target-specific variable values can be applied to a group of targets that match a pattern. 
Advanced Features for Reference to Variables * Substitution Refs:: Referencing a variable with substitutions on the value. * Computed Names:: Computing the name of the variable to refer to. Conditional Parts of Makefiles * Conditional Example:: Example of a conditional * Conditional Syntax:: The syntax of conditionals. * Testing Flags:: Conditionals that test flags. Functions for Transforming Text * Syntax of Functions:: How to write a function call. * Text Functions:: General-purpose text manipulation functions. * File Name Functions:: Functions for manipulating file names. * Conditional Functions:: Functions that implement conditions. * Foreach Function:: Repeat some text with controlled variation. * Call Function:: Expand a user-defined function. * Value Function:: Return the un-expanded value of a variable. * Eval Function:: Evaluate the arguments as makefile syntax. * Origin Function:: Find where a variable got its value. * Flavor Function:: Find out the flavor of a variable. * Shell Function:: Substitute the output of a shell command. * Make Control Functions:: Functions that control how make runs. How to Run @code{make} * Makefile Arguments:: How to specify which makefile to use. * Goals:: How to use goal arguments to specify which parts of the makefile to use. * Instead of Execution:: How to use mode flags to specify what kind of thing to do with the commands in the makefile other than simply execute them. * Avoiding Compilation:: How to avoid recompiling certain files. * Overriding:: How to override a variable to specify an alternate compiler and other things. * Testing:: How to proceed past some errors, to test compilation. * Options Summary:: Summary of Options Using Implicit Rules * Using Implicit:: How to use an existing implicit rule to get the commands for updating a file. * Catalogue of Rules:: A list of built-in implicit rules. * Implicit Variables:: How to change what predefined rules do. * Chained Rules:: How to use a chain of implicit rules. * Pattern Rules:: How to define new implicit rules. * Last Resort:: How to define commands for rules which cannot find any. * Suffix Rules:: The old-fashioned style of implicit rule. * Implicit Rule Search:: The precise algorithm for applying implicit rules. Defining and Redefining Pattern Rules * Pattern Intro:: An introduction to pattern rules. * Pattern Examples:: Examples of pattern rules. * Automatic Variables:: How to use automatic variables in the commands of implicit rules. * Pattern Match:: How patterns match. * Match-Anything Rules:: Precautions you should take prior to defining rules that can match any target file whatever. * Canceling Rules:: How to override or cancel built-in rules. Using @code{make} to Update Archive Files * Archive Members:: Archive members as targets. * Archive Update:: The implicit rule for archive member targets. * Archive Pitfalls:: Dangers to watch out for when using archives. * Archive Suffix Rules:: You can write a special kind of suffix rule for updating archives. Implicit Rule for Archive Member Targets * Archive Symbols:: How to update archive symbol directories. @end detailmenu @end menu @node Overview, Introduction, Top, Top @comment node-name, next, previous, up @chapter Overview of @code{make} The @code{make} utility automatically determines which pieces of a large program need to be recompiled, and issues commands to recompile them. This manual describes GNU @code{make}, which was implemented by Richard Stallman and Roland McGrath. 
Development since Version 3.76 has been handled by Paul D. Smith. GNU @code{make} conforms to section 6.2 of @cite{IEEE Standard 1003.2-1992} (POSIX.2). @cindex POSIX @cindex IEEE Standard 1003.2 @cindex standards conformance Our examples show C programs, since they are most common, but you can use @code{make} with any programming language whose compiler can be run with a shell command. Indeed, @code{make} is not limited to programs. You can use it to describe any task where some files must be updated automatically from others whenever the others change. @menu * Preparing:: Preparing and Running Make * Reading:: On Reading this Text * Bugs:: Problems and Bugs @end menu @node Preparing, Reading, Overview, Overview @ifnottex @heading Preparing and Running Make @end ifnottex To prepare to use @code{make}, you must write a file called the @dfn{makefile} that describes the relationships among files in your program and provides commands for updating each file. In a program, typically, the executable file is updated from object files, which are in turn made by compiling source files.@refill Once a suitable makefile exists, each time you change some source files, this simple shell command: @example make @end example @noindent suffices to perform all necessary recompilations. The @code{make} program uses the makefile data base and the last-modification times of the files to decide which of the files need to be updated. For each of those files, it issues the commands recorded in the data base. You can provide command line arguments to @code{make} to control which files should be recompiled, or how. @xref{Running, ,How to Run @code{make}}. @node Reading, Bugs, Preparing, Overview @section How to Read This Manual If you are new to @code{make}, or are looking for a general introduction, read the first few sections of each chapter, skipping the later sections. In each chapter, the first few sections contain introductory or general information and the later sections contain specialized or technical information. @ifnottex The exception is the second chapter, @ref{Introduction, ,An Introduction to Makefiles}, all of which is introductory. @end ifnottex @iftex The exception is @ref{Introduction, ,An Introduction to Makefiles}, all of which is introductory. @end iftex If you are familiar with other @code{make} programs, see @ref{Features, ,Features of GNU @code{make}}, which lists the enhancements GNU @code{make} has, and @ref{Missing, ,Incompatibilities and Missing Features}, which explains the few things GNU @code{make} lacks that others have. For a quick summary, see @ref{Options Summary}, @ref{Quick Reference}, and @ref{Special Targets}. @node Bugs, , Reading, Overview @section Problems and Bugs @cindex reporting bugs @cindex bugs, reporting @cindex problems and bugs, reporting If you have problems with GNU @code{make} or think you've found a bug, please report it to the developers; we cannot promise to do anything but we might well want to fix it. Before reporting a bug, make sure you've actually found a real bug. Carefully reread the documentation and see if it really says you can do what you're trying to do. If it's not clear whether you should be able to do something or not, report that too; it's a bug in the documentation! Before reporting a bug or trying to fix it yourself, try to isolate it to the smallest possible makefile that reproduces the problem. Then send us the makefile and the exact results @code{make} gave you, including any error or warning messages. 
Please don't paraphrase these messages: it's best to cut and paste them into your report. When generating this small makefile, be sure to not use any non-free or unusual tools in your commands: you can almost always emulate what such a tool would do with simple shell commands. Finally, be sure to explain what you expected to occur; this will help us decide whether the problem was really in the documentation. Once you have a precise problem you can report it in one of two ways. Either send electronic mail to: @example bug-make@@gnu.org @end example @noindent or use our Web-based project management tool, at: @example http://savannah.gnu.org/projects/make/ @end example @noindent In addition to the information above, please be careful to include the version number of @code{make} you are using. You can get this information with the command @samp{make --version}. Be sure also to include the type of machine and operating system you are using. One way to obtain this information is by looking at the final lines of output from the command @samp{make --help}. @node Introduction, Makefiles, Overview, Top @comment node-name, next, previous, up @chapter An Introduction to Makefiles You need a file called a @dfn{makefile} to tell @code{make} what to do. Most often, the makefile tells @code{make} how to compile and link a program. @cindex makefile In this chapter, we will discuss a simple makefile that describes how to compile and link a text editor which consists of eight C source files and three header files. The makefile can also tell @code{make} how to run miscellaneous commands when explicitly asked (for example, to remove certain files as a clean-up operation). To see a more complex example of a makefile, see @ref{Complex Makefile}. When @code{make} recompiles the editor, each changed C source file must be recompiled. If a header file has changed, each C source file that includes the header file must be recompiled to be safe. Each compilation produces an object file corresponding to the source file. Finally, if any source file has been recompiled, all the object files, whether newly made or saved from previous compilations, must be linked together to produce the new executable editor. @cindex recompilation @cindex editor @menu * Rule Introduction:: What a rule looks like. * Simple Makefile:: A Simple Makefile * How Make Works:: How @code{make} Processes This Makefile * Variables Simplify:: Variables Make Makefiles Simpler * make Deduces:: Letting @code{make} Deduce the Commands * Combine By Prerequisite:: Another Style of Makefile * Cleanup:: Rules for Cleaning the Directory @end menu @node Rule Introduction, Simple Makefile, Introduction, Introduction @comment node-name, next, previous, up @section What a Rule Looks Like @cindex rule, introduction to @cindex makefile rule parts @cindex parts of makefile rule A simple makefile consists of ``rules'' with the following shape: @cindex targets, introduction to @cindex prerequisites, introduction to @cindex commands, introduction to @example @group @var{target} @dots{} : @var{prerequisites} @dots{} @var{command} @dots{} @dots{} @end group @end example A @dfn{target} is usually the name of a file that is generated by a program; examples of targets are executable or object files. A target can also be the name of an action to carry out, such as @samp{clean} (@pxref{Phony Targets}). A @dfn{prerequisite} is a file that is used as input to create the target. A target often depends on several files. 
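For instance (a purely illustrative sketch, borrowing the file names that appear in the example of the next section), a rule whose target is the object file @file{main.o} might name the source file @file{main.c} and the header file @file{defs.h} as its prerequisites:

@example
main.o : main.c defs.h
        cc -c main.c
@end example

@noindent
Here the indented @samp{cc} line is the rule's command, described next.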
@cindex tabs in rules A @dfn{command} is an action that @code{make} carries out. A rule may have more than one command, each on its own line. @strong{Please note:} you need to put a tab character at the beginning of every command line! This is an obscurity that catches the unwary. Usually a command is in a rule with prerequisites and serves to create a target file if any of the prerequisites change. However, the rule that specifies commands for the target need not have prerequisites. For example, the rule containing the delete command associated with the target @samp{clean} does not have prerequisites. A @dfn{rule}, then, explains how and when to remake certain files which are the targets of the particular rule. @code{make} carries out the commands on the prerequisites to create or update the target. A rule can also explain how and when to carry out an action. @xref{Rules, , Writing Rules}. A makefile may contain other text besides rules, but a simple makefile need only contain rules. Rules may look somewhat more complicated than shown in this template, but all fit the pattern more or less. @node Simple Makefile, How Make Works, Rule Introduction, Introduction @section A Simple Makefile @cindex simple makefile @cindex makefile, simple Here is a straightforward makefile that describes the way an executable file called @code{edit} depends on eight object files which, in turn, depend on eight C source and three header files. In this example, all the C files include @file{defs.h}, but only those defining editing commands include @file{command.h}, and only low level files that change the editor buffer include @file{buffer.h}. @example @group edit : main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o cc -o edit main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o main.o : main.c defs.h cc -c main.c kbd.o : kbd.c defs.h command.h cc -c kbd.c command.o : command.c defs.h command.h cc -c command.c display.o : display.c defs.h buffer.h cc -c display.c insert.o : insert.c defs.h buffer.h cc -c insert.c search.o : search.c defs.h buffer.h cc -c search.c files.o : files.c defs.h buffer.h command.h cc -c files.c utils.o : utils.c defs.h cc -c utils.c clean : rm edit main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o @end group @end example @noindent We split each long line into two lines using backslash-newline; this is like using one long line, but is easier to read. @cindex continuation lines @cindex @code{\} (backslash), for continuation lines @cindex backslash (@code{\}), for continuation lines @cindex quoting newline, in makefile @cindex newline, quoting, in makefile To use this makefile to create the executable file called @file{edit}, type: @example make @end example To use this makefile to delete the executable file and all the object files from the directory, type: @example make clean @end example In the example makefile, the targets include the executable file @samp{edit}, and the object files @samp{main.o} and @samp{kbd.o}. The prerequisites are files such as @samp{main.c} and @samp{defs.h}. In fact, each @samp{.o} file is both a target and a prerequisite. Commands include @w{@samp{cc -c main.c}} and @w{@samp{cc -c kbd.c}}. When a target is a file, it needs to be recompiled or relinked if any of its prerequisites change. In addition, any prerequisites that are themselves automatically generated should be updated first. 
In this example, @file{edit} depends on each of the eight object files; the object file @file{main.o} depends on the source file @file{main.c} and on the header file @file{defs.h}.

A shell command follows each line that contains a target and prerequisites.  These shell commands say how to update the target file.  A tab character must come at the beginning of every command line to distinguish command lines from other lines in the makefile.  (Bear in mind that @code{make} does not know anything about how the commands work.  It is up to you to supply commands that will update the target file properly.  All @code{make} does is execute the commands in the rule you have specified when the target file needs to be updated.)
@cindex shell command

The target @samp{clean} is not a file, but merely the name of an action.  Since you normally do not want to carry out the actions in this rule, @samp{clean} is not a prerequisite of any other rule.  Consequently, @code{make} never does anything with it unless you tell it specifically.  Note that this rule not only is not a prerequisite, it also does not have any prerequisites, so the only purpose of the rule is to run the specified commands.  Targets that do not refer to files but are just actions are called @dfn{phony targets}.  @xref{Phony Targets}, for information about this kind of target.  @xref{Errors, , Errors in Commands}, to see how to cause @code{make} to ignore errors from @code{rm} or any other command.
@cindex @code{clean} target
@cindex @code{rm} (shell command)

@node How Make Works, Variables Simplify, Simple Makefile, Introduction
@comment node-name, next, previous, up
@section How @code{make} Processes a Makefile
@cindex processing a makefile
@cindex makefile, how @code{make} processes

By default, @code{make} starts with the first target (not counting targets whose names start with @samp{.}).  This is called the @dfn{default goal}.  (@dfn{Goals} are the targets that @code{make} strives ultimately to update.  You can override this behavior using the command line (@pxref{Goals, , Arguments to Specify the Goals}) or with the @code{.DEFAULT_GOAL} special variable (@pxref{Special Variables, , Other Special Variables}).)
@cindex default goal
@cindex goal, default
@cindex goal

In the simple example of the previous section, the default goal is to update the executable program @file{edit}; therefore, we put that rule first.

Thus, when you give the command:

@example
make
@end example

@noindent
@code{make} reads the makefile in the current directory and begins by processing the first rule.  In the example, this rule is for relinking @file{edit}; but before @code{make} can fully process this rule, it must process the rules for the files that @file{edit} depends on, which in this case are the object files.  Each of these files is processed according to its own rule.  These rules say to update each @samp{.o} file by compiling its source file.  The recompilation must be done if the source file, or any of the header files named as prerequisites, is more recent than the object file, or if the object file does not exist.

The other rules are processed because their targets appear as prerequisites of the goal.  If some other rule is not depended on by the goal (or anything it depends on, etc.), that rule is not processed, unless you tell @code{make} to do so (with a command such as @w{@code{make clean}}).

Before recompiling an object file, @code{make} considers updating its prerequisites, the source file and header files.
This makefile does not specify anything to be done for them---the @samp{.c} and @samp{.h} files are not the targets of any rules---so @code{make} does nothing for these files. But @code{make} would update automatically generated C programs, such as those made by Bison or Yacc, by their own rules at this time. After recompiling whichever object files need it, @code{make} decides whether to relink @file{edit}. This must be done if the file @file{edit} does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than @file{edit}, so @file{edit} is relinked. @cindex relinking Thus, if we change the file @file{insert.c} and run @code{make}, @code{make} will compile that file to update @file{insert.o}, and then link @file{edit}. If we change the file @file{command.h} and run @code{make}, @code{make} will recompile the object files @file{kbd.o}, @file{command.o} and @file{files.o} and then link the file @file{edit}. @node Variables Simplify, make Deduces, How Make Works, Introduction @section Variables Make Makefiles Simpler @cindex variables @cindex simplifying with variables In our example, we had to list all the object files twice in the rule for @file{edit} (repeated here): @example @group edit : main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o cc -o edit main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o @end group @end example @cindex @code{objects} Such duplication is error-prone; if a new object file is added to the system, we might add it to one list and forget the other. We can eliminate the risk and simplify the makefile by using a variable. @dfn{Variables} allow a text string to be defined once and substituted in multiple places later (@pxref{Using Variables, ,How to Use Variables}). @cindex @code{OBJECTS} @cindex @code{objs} @cindex @code{OBJS} @cindex @code{obj} @cindex @code{OBJ} It is standard practice for every makefile to have a variable named @code{objects}, @code{OBJECTS}, @code{objs}, @code{OBJS}, @code{obj}, or @code{OBJ} which is a list of all object file names. We would define such a variable @code{objects} with a line like this in the makefile:@refill @example @group objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o @end group @end example @noindent Then, each place we want to put a list of the object file names, we can substitute the variable's value by writing @samp{$(objects)} (@pxref{Using Variables, ,How to Use Variables}). 
Here is how the complete simple makefile looks when you use a variable for the object files: @example @group objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o edit : $(objects) cc -o edit $(objects) main.o : main.c defs.h cc -c main.c kbd.o : kbd.c defs.h command.h cc -c kbd.c command.o : command.c defs.h command.h cc -c command.c display.o : display.c defs.h buffer.h cc -c display.c insert.o : insert.c defs.h buffer.h cc -c insert.c search.o : search.c defs.h buffer.h cc -c search.c files.o : files.c defs.h buffer.h command.h cc -c files.c utils.o : utils.c defs.h cc -c utils.c clean : rm edit $(objects) @end group @end example @node make Deduces, Combine By Prerequisite, Variables Simplify, Introduction @section Letting @code{make} Deduce the Commands @cindex deducing commands (implicit rules) @cindex implicit rule, introduction to @cindex rule, implicit, introduction to It is not necessary to spell out the commands for compiling the individual C source files, because @code{make} can figure them out: it has an @dfn{implicit rule} for updating a @samp{.o} file from a correspondingly named @samp{.c} file using a @samp{cc -c} command. For example, it will use the command @samp{cc -c main.c -o main.o} to compile @file{main.c} into @file{main.o}. We can therefore omit the commands from the rules for the object files. @xref{Implicit Rules, ,Using Implicit Rules}.@refill When a @samp{.c} file is used automatically in this way, it is also automatically added to the list of prerequisites. We can therefore omit the @samp{.c} files from the prerequisites, provided we omit the commands. Here is the entire example, with both of these changes, and a variable @code{objects} as suggested above: @example @group objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o edit : $(objects) cc -o edit $(objects) main.o : defs.h kbd.o : defs.h command.h command.o : defs.h command.h display.o : defs.h buffer.h insert.o : defs.h buffer.h search.o : defs.h buffer.h files.o : defs.h buffer.h command.h utils.o : defs.h .PHONY : clean clean : rm edit $(objects) @end group @end example @noindent This is how we would write the makefile in actual practice. (The complications associated with @samp{clean} are described elsewhere. See @ref{Phony Targets}, and @ref{Errors, ,Errors in Commands}.) Because implicit rules are so convenient, they are important. You will see them used frequently.@refill @node Combine By Prerequisite, Cleanup, make Deduces, Introduction @section Another Style of Makefile @cindex combining rules by prerequisite When the objects of a makefile are created only by implicit rules, an alternative style of makefile is possible. In this style of makefile, you group entries by their prerequisites instead of by their targets. Here is what one looks like: @example @group objects = main.o kbd.o command.o display.o \ insert.o search.o files.o utils.o edit : $(objects) cc -o edit $(objects) $(objects) : defs.h kbd.o command.o files.o : command.h display.o insert.o search.o files.o : buffer.h @end group @end example @noindent Here @file{defs.h} is given as a prerequisite of all the object files; @file{command.h} and @file{buffer.h} are prerequisites of the specific object files listed for them. Whether this is better is a matter of taste: it is more compact, but some people dislike it because they find it clearer to put all the information about each target in one place. 
@node Cleanup, , Combine By Prerequisite, Introduction @section Rules for Cleaning the Directory @cindex cleaning up @cindex removing, to clean up Compiling a program is not the only thing you might want to write rules for. Makefiles commonly tell how to do a few other things besides compiling a program: for example, how to delete all the object files and executables so that the directory is @samp{clean}. @cindex @code{clean} target Here is how we could write a @code{make} rule for cleaning our example editor: @example @group clean: rm edit $(objects) @end group @end example In practice, we might want to write the rule in a somewhat more complicated manner to handle unanticipated situations. We would do this: @example @group .PHONY : clean clean : -rm edit $(objects) @end group @end example @noindent This prevents @code{make} from getting confused by an actual file called @file{clean} and causes it to continue in spite of errors from @code{rm}. (See @ref{Phony Targets}, and @ref{Errors, ,Errors in Commands}.) @noindent A rule such as this should not be placed at the beginning of the makefile, because we do not want it to run by default! Thus, in the example makefile, we want the rule for @code{edit}, which recompiles the editor, to remain the default goal. Since @code{clean} is not a prerequisite of @code{edit}, this rule will not run at all if we give the command @samp{make} with no arguments. In order to make the rule run, we have to type @samp{make clean}. @xref{Running, ,How to Run @code{make}}. @node Makefiles, Rules, Introduction, Top @chapter Writing Makefiles @cindex makefile, how to write The information that tells @code{make} how to recompile a system comes from reading a data base called the @dfn{makefile}. @menu * Makefile Contents:: What makefiles contain. * Makefile Names:: How to name your makefile. * Include:: How one makefile can use another makefile. * MAKEFILES Variable:: The environment can specify extra makefiles. * MAKEFILE_LIST Variable:: Discover which makefiles have been read. * Special Variables:: Other special variables. * Remaking Makefiles:: How makefiles get remade. * Overriding Makefiles:: How to override part of one makefile with another makefile. * Reading Makefiles:: How makefiles are parsed. * Secondary Expansion:: How and when secondary expansion is performed. @end menu @node Makefile Contents, Makefile Names, Makefiles, Makefiles @section What Makefiles Contain Makefiles contain five kinds of things: @dfn{explicit rules}, @dfn{implicit rules}, @dfn{variable definitions}, @dfn{directives}, and @dfn{comments}. Rules, variables, and directives are described at length in later chapters.@refill @itemize @bullet @cindex rule, explicit, definition of @cindex explicit rule, definition of @item An @dfn{explicit rule} says when and how to remake one or more files, called the rule's @dfn{targets}. It lists the other files that the targets depend on, called the @dfn{prerequisites} of the target, and may also give commands to use to create or update the targets. @xref{Rules, ,Writing Rules}. @cindex rule, implicit, definition of @cindex implicit rule, definition of @item An @dfn{implicit rule} says when and how to remake a class of files based on their names. It describes how a target may depend on a file with a name similar to the target and gives commands to create or update such a target. @xref{Implicit Rules, ,Using Implicit Rules}. 
@cindex variable definition @item A @dfn{variable definition} is a line that specifies a text string value for a variable that can be substituted into the text later. The simple makefile example shows a variable definition for @code{objects} as a list of all object files (@pxref{Variables Simplify, , Variables Make Makefiles Simpler}). @cindex directive @item A @dfn{directive} is a command for @code{make} to do something special while reading the makefile. These include: @itemize @bullet @item Reading another makefile (@pxref{Include, ,Including Other Makefiles}). @item Deciding (based on the values of variables) whether to use or ignore a part of the makefile (@pxref{Conditionals, ,Conditional Parts of Makefiles}). @item Defining a variable from a verbatim string containing multiple lines (@pxref{Defining, ,Defining Variables Verbatim}). @end itemize @cindex comments, in makefile @cindex @code{#} (comments), in makefile @item @samp{#} in a line of a makefile starts a @dfn{comment}. It and the rest of the line are ignored, except that a trailing backslash not escaped by another backslash will continue the comment across multiple lines. A line containing just a comment (with perhaps spaces before it) is effectively blank, and is ignored. If you want a literal @code{#}, escape it with a backslash (e.g., @code{\#}). Comments may appear on any line in the makefile, although they are treated specially in certain situations. Within a command script (if the line begins with a TAB character) the entire line is passed to the shell, just as with any other line that begins with a TAB. The shell decides how to interpret the text: whether or not this is a comment is up to the shell. Within a @code{define} directive, comments are not ignored during the definition of the variable, but rather kept intact in the value of the variable. When the variable is expanded they will either be treated as @code{make} comments or as command script text, depending on the context in which the variable is evaluated. @end itemize @node Makefile Names, Include, Makefile Contents, Makefiles @section What Name to Give Your Makefile @cindex makefile name @cindex name of makefile @cindex default makefile name @cindex file name of makefile @c following paragraph rewritten to avoid overfull hbox By default, when @code{make} looks for the makefile, it tries the following names, in order: @file{GNUmakefile}, @file{makefile} and @file{Makefile}.@refill @findex Makefile @findex GNUmakefile @findex makefile @cindex @code{README} Normally you should call your makefile either @file{makefile} or @file{Makefile}. (We recommend @file{Makefile} because it appears prominently near the beginning of a directory listing, right near other important files such as @file{README}.) The first name checked, @file{GNUmakefile}, is not recommended for most makefiles. You should use this name if you have a makefile that is specific to GNU @code{make}, and will not be understood by other versions of @code{make}. Other @code{make} programs look for @file{makefile} and @file{Makefile}, but not @file{GNUmakefile}. If @code{make} finds none of these names, it does not use any makefile. Then you must specify a goal with a command argument, and @code{make} will attempt to figure out how to remake it using only its built-in implicit rules. @xref{Implicit Rules, ,Using Implicit Rules}. 
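
For instance, suppose the current directory contains a single source file @file{hello.c} and no makefile at all (this is only an illustrative sketch; the file name is arbitrary).  The command:

@example
make hello
@end example

@noindent
succeeds anyway, because a built-in implicit rule tells @code{make} how to produce an executable @file{hello} from @file{hello.c}; it runs a compilation command along the lines of @samp{cc hello.c -o hello}.
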
@cindex @code{-f} @cindex @code{--file} @cindex @code{--makefile} If you want to use a nonstandard name for your makefile, you can specify the makefile name with the @samp{-f} or @samp{--file} option. The arguments @w{@samp{-f @var{name}}} or @w{@samp{--file=@var{name}}} tell @code{make} to read the file @var{name} as the makefile. If you use more than one @samp{-f} or @samp{--file} option, you can specify several makefiles. All the makefiles are effectively concatenated in the order specified. The default makefile names @file{GNUmakefile}, @file{makefile} and @file{Makefile} are not checked automatically if you specify @samp{-f} or @samp{--file}.@refill @cindex specifying makefile name @cindex makefile name, how to specify @cindex name of makefile, how to specify @cindex file name of makefile, how to specify @node Include, MAKEFILES Variable, Makefile Names, Makefiles @section Including Other Makefiles @cindex including other makefiles @cindex makefile, including @findex include The @code{include} directive tells @code{make} to suspend reading the current makefile and read one or more other makefiles before continuing. The directive is a line in the makefile that looks like this: @example include @var{filenames}@dots{} @end example @noindent @var{filenames} can contain shell file name patterns. If @var{filenames} is empty, nothing is included and no error is printed. @cindex shell file name pattern (in @code{include}) @cindex shell wildcards (in @code{include}) @cindex wildcard, in @code{include} Extra spaces are allowed and ignored at the beginning of the line, but a tab is not allowed. (If the line begins with a tab, it will be considered a command line.) Whitespace is required between @code{include} and the file names, and between file names; extra whitespace is ignored there and at the end of the directive. A comment starting with @samp{#} is allowed at the end of the line. If the file names contain any variable or function references, they are expanded. @xref{Using Variables, ,How to Use Variables}. For example, if you have three @file{.mk} files, @file{a.mk}, @file{b.mk}, and @file{c.mk}, and @code{$(bar)} expands to @code{bish bash}, then the following expression @example include foo *.mk $(bar) @end example is equivalent to @example include foo a.mk b.mk c.mk bish bash @end example When @code{make} processes an @code{include} directive, it suspends reading of the containing makefile and reads from each listed file in turn. When that is finished, @code{make} resumes reading the makefile in which the directive appears. One occasion for using @code{include} directives is when several programs, handled by individual makefiles in various directories, need to use a common set of variable definitions (@pxref{Setting, ,Setting Variables}) or pattern rules (@pxref{Pattern Rules, ,Defining and Redefining Pattern Rules}). Another such occasion is when you want to generate prerequisites from source files automatically; the prerequisites can be put in a file that is included by the main makefile. This practice is generally cleaner than that of somehow appending the prerequisites to the end of the main makefile as has been traditionally done with other versions of @code{make}. @xref{Automatic Prerequisites}. 
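
As a brief sketch of that second case (the file names here are purely illustrative), the generated prerequisites for each @file{.c} file can be kept in a corresponding @file{.d} file, and the main makefile can pull them all in with a single directive:

@example
@group
sources = foo.c bar.c

include $(sources:.c=.d)
@end group
@end example

@noindent
The substitution reference @code{$(sources:.c=.d)} expands to @samp{foo.d bar.d}, so both generated files are read just as if their contents had appeared in the main makefile.
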
@cindex prerequisites, automatic generation @cindex automatic generation of prerequisites @cindex generating prerequisites automatically @cindex @code{-I} @cindex @code{--include-dir} @cindex included makefiles, default directories @cindex default directories for included makefiles @findex /usr/gnu/include @findex /usr/local/include @findex /usr/include If the specified name does not start with a slash, and the file is not found in the current directory, several other directories are searched. First, any directories you have specified with the @samp{-I} or @samp{--include-dir} option are searched (@pxref{Options Summary, ,Summary of Options}). Then the following directories (if they exist) are searched, in this order: @file{@var{prefix}/include} (normally @file{/usr/local/include} @footnote{GNU Make compiled for MS-DOS and MS-Windows behaves as if @var{prefix} has been defined to be the root of the DJGPP tree hierarchy.}) @file{/usr/gnu/include}, @file{/usr/local/include}, @file{/usr/include}. If an included makefile cannot be found in any of these directories, a warning message is generated, but it is not an immediately fatal error; processing of the makefile containing the @code{include} continues. Once it has finished reading makefiles, @code{make} will try to remake any that are out of date or don't exist. @xref{Remaking Makefiles, ,How Makefiles Are Remade}. Only after it has tried to find a way to remake a makefile and failed, will @code{make} diagnose the missing makefile as a fatal error. If you want @code{make} to simply ignore a makefile which does not exist and cannot be remade, with no error message, use the @w{@code{-include}} directive instead of @code{include}, like this: @example -include @var{filenames}@dots{} @end example This acts like @code{include} in every way except that there is no error (not even a warning) if any of the @var{filenames} do not exist. For compatibility with some other @code{make} implementations, @code{sinclude} is another name for @w{@code{-include}}. @node MAKEFILES Variable, MAKEFILE_LIST Variable, Include, Makefiles @section The Variable @code{MAKEFILES} @cindex makefile, and @code{MAKEFILES} variable @cindex including (@code{MAKEFILES} variable) @vindex MAKEFILES If the environment variable @code{MAKEFILES} is defined, @code{make} considers its value as a list of names (separated by whitespace) of additional makefiles to be read before the others. This works much like the @code{include} directive: various directories are searched for those files (@pxref{Include, ,Including Other Makefiles}). In addition, the default goal is never taken from one of these makefiles and it is not an error if the files listed in @code{MAKEFILES} are not found.@refill @cindex recursion, and @code{MAKEFILES} variable The main use of @code{MAKEFILES} is in communication between recursive invocations of @code{make} (@pxref{Recursion, ,Recursive Use of @code{make}}). It usually is not desirable to set the environment variable before a top-level invocation of @code{make}, because it is usually better not to mess with a makefile from outside. However, if you are running @code{make} without a specific makefile, a makefile in @code{MAKEFILES} can do useful things to help the built-in implicit rules work better, such as defining search paths (@pxref{Directory Search}). Some users are tempted to set @code{MAKEFILES} in the environment automatically on login, and program makefiles to expect this to be done. 
This is a very bad idea, because such makefiles will fail to work if run by anyone else.  It is much better to write explicit @code{include} directives in the makefiles.  @xref{Include, , Including Other Makefiles}.

@node MAKEFILE_LIST Variable, Special Variables, MAKEFILES Variable, Makefiles
@comment node-name, next, previous, up
@section The Variable @code{MAKEFILE_LIST}
@cindex makefiles, and @code{MAKEFILE_LIST} variable
@cindex including (@code{MAKEFILE_LIST} variable)
@vindex MAKEFILE_LIST

As @code{make} reads various makefiles, including any obtained from the @code{MAKEFILES} variable, the command line, the default files, or from @code{include} directives, their names will be automatically appended to the @code{MAKEFILE_LIST} variable.  They are added right before @code{make} begins to parse them.

This means that if the first thing a makefile does is examine the last word in this variable, it will be the name of the current makefile.  Once the current makefile has used @code{include}, however, the last word will be the just-included makefile.

If a makefile named @code{Makefile} has this content:

@example
@group
name1 := $(lastword $(MAKEFILE_LIST))

include inc.mk

name2 := $(lastword $(MAKEFILE_LIST))

all:
        @@echo name1 = $(name1)
        @@echo name2 = $(name2)
@end group
@end example

@noindent
then you would expect to see this output:

@example
@group
name1 = Makefile
name2 = inc.mk
@end group
@end example

@xref{Text Functions}, for more information on the @code{lastword} function used above.  @xref{Flavors, ,The Two Flavors of Variables}, for more information on simply-expanded (@code{:=}) variable definitions.

@node Special Variables, Remaking Makefiles, MAKEFILE_LIST Variable, Makefiles
@comment node-name, next, previous, up
@section Other Special Variables
@cindex makefiles, and special variables
@cindex special variables

GNU @code{make} also supports other special variables.  Unless otherwise documented here, these values lose their special properties if they are set by a makefile or on the command line.

@table @code

@vindex .DEFAULT_GOAL @r{(define default goal)}
@item .DEFAULT_GOAL
Sets the default goal to be used if no targets were specified on the command line (@pxref{Goals, , Arguments to Specify the Goals}).  The @code{.DEFAULT_GOAL} variable allows you to discover the current default goal, restart the default goal selection algorithm by clearing its value, or to explicitly set the default goal.  The following example illustrates these cases:

@example
@group
# Query the default goal.
ifeq ($(.DEFAULT_GOAL),)
  $(warning no default goal is set)
endif

.PHONY: foo
foo: ; @@echo $@@

$(warning default goal is $(.DEFAULT_GOAL))

# Reset the default goal.
.DEFAULT_GOAL :=

.PHONY: bar
bar: ; @@echo $@@

$(warning default goal is $(.DEFAULT_GOAL))

# Set our own.
.DEFAULT_GOAL := foo
@end group
@end example

This makefile prints:

@example
@group
no default goal is set
default goal is foo
default goal is bar
foo
@end group
@end example

Note that assigning more than one target name to @code{.DEFAULT_GOAL} is illegal and will result in an error.

@vindex MAKE_RESTARTS @r{(number of times @code{make} has restarted)}
@item MAKE_RESTARTS
This variable is set only if this instance of @code{make} has restarted (@pxref{Remaking Makefiles, , How Makefiles Are Remade}): it will contain the number of times this instance has restarted.  Note this is not the same as recursion (counted by the @code{MAKELEVEL} variable).  You should not set, modify, or export this variable.

@vindex .VARIABLES @r{(list of variables)} @item .VARIABLES Expands to a list of the @emph{names} of all global variables defined so far. This includes variables which have empty values, as well as built-in variables (@pxref{Implicit Variables, , Variables Used by Implicit Rules}), but does not include any variables which are only defined in a target-specific context. Note that any value you assign to this variable will be ignored; it will always return its special value. @c @vindex .TARGETS @r{(list of targets)} @c @item .TARGETS @c The second special variable is @code{.TARGETS}. When expanded, the @c value consists of a list of all targets defined in all makefiles read @c up until that point. Note it's not enough for a file to be simply @c mentioned in the makefile to be listed in this variable, even if it @c would match an implicit rule and become an ``implicit target''. The @c file must appear as a target, on the left-hand side of a ``:'', to be @c considered a target for the purposes of this variable. @vindex .FEATURES @r{(list of supported features)} @item .FEATURES Expands to a list of special features supported by this version of @code{make}. Possible values include: @table @samp @item archives Supports @code{ar} (archive) files using special filename syntax. @xref{Archives, ,Using @code{make} to Update Archive Files}. @item check-symlink Supports the @code{-L} (@code{--check-symlink-times}) flag. @xref{Options Summary, ,Summary of Options}. @item else-if Supports ``else if'' non-nested conditionals. @xref{Conditional Syntax, ,Syntax of Conditionals}. @item jobserver Supports ``job server'' enhanced parallel builds. @xref{Parallel, ,Parallel Execution}. @item second-expansion Supports secondary expansion of prerequisite lists. @item order-only Supports order-only prerequisites. @xref{Prerequisite Types, ,Types of Prerequisites}. @item target-specific Supports target-specific and pattern-specific variable assignments. @xref{Target-specific, ,Target-specific Variable Values}. @end table @vindex .INCLUDE_DIRS @r{(list of include directories)} @item .INCLUDE_DIRS Expands to a list of directories that @code{make} searches for included makefiles (@pxref{Include, , Including Other Makefiles}). @end table @node Remaking Makefiles, Overriding Makefiles, Special Variables, Makefiles @section How Makefiles Are Remade @cindex updating makefiles @cindex remaking makefiles @cindex makefile, remaking of Sometimes makefiles can be remade from other files, such as RCS or SCCS files. If a makefile can be remade from other files, you probably want @code{make} to get an up-to-date version of the makefile to read in. To this end, after reading in all makefiles, @code{make} will consider each as a goal target and attempt to update it. If a makefile has a rule which says how to update it (found either in that very makefile or in another one) or if an implicit rule applies to it (@pxref{Implicit Rules, ,Using Implicit Rules}), it will be updated if necessary. After all makefiles have been checked, if any have actually been changed, @code{make} starts with a clean slate and reads all the makefiles over again. (It will also attempt to update each of them over again, but normally this will not change them again, since they are already up to date.)@refill If you know that one or more of your makefiles cannot be remade and you want to keep @code{make} from performing an implicit rule search on them, perhaps for efficiency reasons, you can use any normal method of preventing implicit rule lookup to do so. 
For example, you can write an explicit rule with the makefile as the target, and an empty command string (@pxref{Empty Commands, ,Using Empty Commands}). If the makefiles specify a double-colon rule to remake a file with commands but no prerequisites, that file will always be remade (@pxref{Double-Colon}). In the case of makefiles, a makefile that has a double-colon rule with commands but no prerequisites will be remade every time @code{make} is run, and then again after @code{make} starts over and reads the makefiles in again. This would cause an infinite loop: @code{make} would constantly remake the makefile, and never do anything else. So, to avoid this, @code{make} will @strong{not} attempt to remake makefiles which are specified as targets of a double-colon rule with commands but no prerequisites.@refill If you do not specify any makefiles to be read with @samp{-f} or @samp{--file} options, @code{make} will try the default makefile names; @pxref{Makefile Names, ,What Name to Give Your Makefile}. Unlike makefiles explicitly requested with @samp{-f} or @samp{--file} options, @code{make} is not certain that these makefiles should exist. However, if a default makefile does not exist but can be created by running @code{make} rules, you probably want the rules to be run so that the makefile can be used. Therefore, if none of the default makefiles exists, @code{make} will try to make each of them in the same order in which they are searched for (@pxref{Makefile Names, ,What Name to Give Your Makefile}) until it succeeds in making one, or it runs out of names to try. Note that it is not an error if @code{make} cannot find or make any makefile; a makefile is not always necessary.@refill When you use the @samp{-t} or @samp{--touch} option (@pxref{Instead of Execution, ,Instead of Executing the Commands}), you would not want to use an out-of-date makefile to decide which targets to touch. So the @samp{-t} option has no effect on updating makefiles; they are really updated even if @samp{-t} is specified. Likewise, @samp{-q} (or @samp{--question}) and @samp{-n} (or @samp{--just-print}) do not prevent updating of makefiles, because an out-of-date makefile would result in the wrong output for other targets. Thus, @samp{make -f mfile -n foo} will update @file{mfile}, read it in, and then print the commands to update @file{foo} and its prerequisites without running them. The commands printed for @file{foo} will be those specified in the updated contents of @file{mfile}. However, on occasion you might actually wish to prevent updating of even the makefiles. You can do this by specifying the makefiles as goals in the command line as well as specifying them as makefiles. When the makefile name is specified explicitly as a goal, the options @samp{-t} and so on do apply to them. Thus, @samp{make -f mfile -n mfile foo} would read the makefile @file{mfile}, print the commands needed to update it without actually running them, and then print the commands needed to update @file{foo} without running them. The commands for @file{foo} will be those specified by the existing contents of @file{mfile}. @node Overriding Makefiles, Reading Makefiles, Remaking Makefiles, Makefiles @section Overriding Part of Another Makefile @cindex overriding makefiles @cindex makefile, overriding Sometimes it is useful to have a makefile that is mostly just like another makefile. You can often use the @samp{include} directive to include one in the other, and add more targets or variable definitions. 
However, if the two makefiles give different commands for the same target, @code{make} will not let you just do this.  But there is another way.

@cindex match-anything rule, used to override
In the containing makefile (the one that wants to include the other), you can use a match-anything pattern rule to say that to remake any target that cannot be made from the information in the containing makefile, @code{make} should look in another makefile.  @xref{Pattern Rules}, for more information on pattern rules.

For example, if you have a makefile called @file{Makefile} that says how to make the target @samp{foo} (and other targets), you can write a makefile called @file{GNUmakefile} that contains:

@example
foo:
        frobnicate > foo

%: force
        @@$(MAKE) -f Makefile $@@
force: ;
@end example

If you say @samp{make foo}, @code{make} will find @file{GNUmakefile}, read it, and see that to make @file{foo}, it needs to run the command @samp{frobnicate > foo}.  If you say @samp{make bar}, @code{make} will find no way to make @file{bar} in @file{GNUmakefile}, so it will use the commands from the pattern rule: @samp{make -f Makefile bar}.  If @file{Makefile} provides a rule for updating @file{bar}, @code{make} will apply the rule.  And likewise for any other target that @file{GNUmakefile} does not say how to make.

The way this works is that the pattern rule has a pattern of just @samp{%}, so it matches any target whatever.  The rule specifies a prerequisite @file{force}, to guarantee that the commands will be run even if the target file already exists.  We give the @file{force} target empty commands to prevent @code{make} from searching for an implicit rule to build it---otherwise it would apply the same match-anything rule to @file{force} itself and create a prerequisite loop!

@node Reading Makefiles, Secondary Expansion, Overriding Makefiles, Makefiles
@section How @code{make} Reads a Makefile
@cindex reading makefiles
@cindex makefile, parsing

GNU @code{make} does its work in two distinct phases.  During the first phase it reads all the makefiles, included makefiles, etc. and internalizes all the variables and their values, implicit and explicit rules, and constructs a dependency graph of all the targets and their prerequisites.  During the second phase, @code{make} uses these internal structures to determine what targets will need to be rebuilt and to invoke the rules necessary to do so.

It's important to understand this two-phase approach because it has a direct impact on how variable and function expansion happens; this is often a source of some confusion when writing makefiles.  Here we will present a summary of the phases in which expansion happens for different constructs within the makefile.  We say that expansion is @dfn{immediate} if it happens during the first phase: in this case @code{make} will expand any variables or functions in that section of a construct as the makefile is parsed.  We say that expansion is @dfn{deferred} if expansion is not performed immediately.  Expansion of a deferred construct is not performed until either the construct appears later in an immediate context, or until the second phase.

You may not be familiar with some of these constructs yet.  You can refer back to this section as you become familiar with them, in later chapters.
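
Before looking at the individual constructs, here is a minimal sketch (a fragment with arbitrary variable names, shown only for illustration) of why the distinction between immediate and deferred expansion matters in practice:

@example
@group
x := one
later = $(x)
now := $(x)
x := two

$(warning later is $(later))
$(warning now is $(now))
@end group
@end example

@noindent
When @code{make} reads these lines it prints @samp{later is two} but @samp{now is one}: the value of @code{now} was expanded immediately, when its definition was read, whereas @code{later} still holds the unexpanded text @samp{$(x)} and is not expanded until it is actually used, by which time @code{x} has been reset to @samp{two}.
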
@subheading Variable Assignment @cindex +=, expansion @cindex =, expansion @cindex ?=, expansion @cindex +=, expansion @cindex define, expansion Variable definitions are parsed as follows: @example @var{immediate} = @var{deferred} @var{immediate} ?= @var{deferred} @var{immediate} := @var{immediate} @var{immediate} += @var{deferred} or @var{immediate} define @var{immediate} @var{deferred} endef @end example For the append operator, @samp{+=}, the right-hand side is considered immediate if the variable was previously set as a simple variable (@samp{:=}), and deferred otherwise. @subheading Conditional Statements @cindex ifdef, expansion @cindex ifeq, expansion @cindex ifndef, expansion @cindex ifneq, expansion All instances of conditional syntax are parsed immediately, in their entirety; this includes the @code{ifdef}, @code{ifeq}, @code{ifndef}, and @code{ifneq} forms. Of course this means that automatic variables cannot be used in conditional statements, as automatic variables are not set until the command script for that rule is invoked. If you need to use automatic variables in a conditional you @emph{must} use shell conditional syntax, in your command script proper, for these tests, not @code{make} conditionals. @subheading Rule Definition @cindex target, expansion @cindex prerequisite, expansion @cindex implicit rule, expansion @cindex pattern rule, expansion @cindex explicit rule, expansion A rule is always expanded the same way, regardless of the form: @example @var{immediate} : @var{immediate} ; @var{deferred} @var{deferred} @end example That is, the target and prerequisite sections are expanded immediately, and the commands used to construct the target are always deferred. This general rule is true for explicit rules, pattern rules, suffix rules, static pattern rules, and simple prerequisite definitions. @node Secondary Expansion, , Reading Makefiles, Makefiles @section Secondary Expansion @cindex secondary expansion @cindex expansion, secondary @findex .SECONDEXPANSION In the previous section we learned that GNU @code{make} works in two distinct phases: a read-in phase and a target-update phase (@pxref{Reading Makefiles, , How @code{make} Reads a Makefile}). GNU make also has the ability to enable a @emph{second expansion} of the prerequisites (only) for some or all targets defined in the makefile. In order for this second expansion to occur, the special target @code{.SECONDEXPANSION} must be defined before the first prerequisite list that makes use of this feature. If that special target is defined then in between the two phases mentioned above, right at the end of the read-in phase, all the prerequisites of the targets defined after the special target are expanded a @emph{second time}. In most circumstances this secondary expansion will have no effect, since all variable and function references will have been expanded during the initial parsing of the makefiles. In order to take advantage of the secondary expansion phase of the parser, then, it's necessary to @emph{escape} the variable or function reference in the makefile. In this case the first expansion merely un-escapes the reference but doesn't expand it, and expansion is left to the secondary expansion phase. 
For example, consider this makefile:

@example
.SECONDEXPANSION:
ONEVAR = onefile
TWOVAR = twofile
myfile: $(ONEVAR) $$(TWOVAR)
@end example

After the first expansion phase the prerequisites list of the @file{myfile} target will be @code{onefile} and @code{$(TWOVAR)}; the first (unescaped) variable reference to @var{ONEVAR} is expanded, while the second (escaped) variable reference is simply unescaped, without being recognized as a variable reference.  Now during the secondary expansion the first word is expanded again but since it contains no variable or function references it remains the static value @file{onefile}, while the second word is now a normal reference to the variable @var{TWOVAR}, which is expanded to the value @file{twofile}.  The final result is that there are two prerequisites, @file{onefile} and @file{twofile}.

Obviously, this is not a very interesting case since the same result could more easily have been achieved simply by having both variables appear, unescaped, in the prerequisites list.  One difference becomes apparent if the variables are reset; consider this example:

@example
.SECONDEXPANSION:
AVAR = top
onefile: $(AVAR)
twofile: $$(AVAR)
AVAR = bottom
@end example

Here the prerequisite of @file{onefile} will be expanded immediately, and resolve to the value @file{top}, while the prerequisite of @file{twofile} will not be fully expanded until the secondary expansion, and will yield a value of @file{bottom}.

This is marginally more exciting, but the true power of this feature only becomes apparent when you discover that secondary expansions always take place within the scope of the automatic variables for that target.  This means that you can use variables such as @code{$@@}, @code{$*}, etc. during the second expansion and they will have their expected values, just as in the command script.  All you have to do is defer the expansion by escaping the @code{$}.  Also, secondary expansion occurs for both explicit and implicit (pattern) rules.  Knowing this, the possible uses for this feature increase dramatically.  For example:

@example
.SECONDEXPANSION:
main_OBJS := main.o try.o test.o
lib_OBJS := lib.o api.o

main lib: $$($$@@_OBJS)
@end example

Here, after the initial expansion the prerequisites of both the @file{main} and @file{lib} targets will be @code{$($@@_OBJS)}.  During the secondary expansion, the @code{$@@} variable is set to the name of the target and so the expansion for the @file{main} target will yield @code{$(main_OBJS)}, or @code{main.o try.o test.o}, while the secondary expansion for the @file{lib} target will yield @code{$(lib_OBJS)}, or @code{lib.o api.o}.

You can also mix functions here, as long as they are properly escaped:

@example
main_SRCS := main.c try.c test.c
lib_SRCS := lib.c api.c

.SECONDEXPANSION:
main lib: $$(patsubst %.c,%.o,$$($$@@_SRCS))
@end example

This version allows users to specify source files rather than object files, but gives the same resulting prerequisites list as the previous example.

Evaluation of automatic variables during the secondary expansion phase, especially of the target name variable @code{$$@@}, behaves similarly to evaluation within command scripts.  However, there are some subtle differences and ``corner cases'' which come into play for the different types of rule definitions that @code{make} understands.  The subtleties of using the different automatic variables are described below.

@subheading Secondary Expansion of Explicit Rules
@cindex secondary expansion and explicit rules
@cindex explicit rules, secondary expansion of

During the secondary expansion of explicit rules, @code{$$@@} and @code{$$%} evaluate, respectively, to the file name of the target and, when the target is an archive member, the target member name.  The @code{$$<} variable evaluates to the first prerequisite in the first rule for this target.  @code{$$^} and @code{$$+} evaluate to the list of all prerequisites of rules @emph{that have already appeared} for the same target (@code{$$+} with repetitions and @code{$$^} without).  The following example will help illustrate these behaviors:

@example
.SECONDEXPANSION:

foo: foo.1 bar.1 $$< $$^ $$+    # line #1

foo: foo.2 bar.2 $$< $$^ $$+    # line #2

foo: foo.3 bar.3 $$< $$^ $$+    # line #3
@end example

In the first prerequisite list, all three variables (@code{$$<}, @code{$$^}, and @code{$$+}) expand to the empty string.  In the second, they will have values @code{foo.1}, @code{foo.1 bar.1}, and @code{foo.1 bar.1} respectively.  In the third they will have values @code{foo.1}, @code{foo.1 bar.1 foo.2 bar.2}, and @code{foo.1 bar.1 foo.2 bar.2} respectively.

Rules undergo secondary expansion in makefile order, except that the rule with the command script is always evaluated last.

The variables @code{$$?} and @code{$$*} are not available and expand to the empty string.

@subheading Secondary Expansion of Static Pattern Rules
@cindex secondary expansion and static pattern rules
@cindex static pattern rules, secondary expansion of

Rules for secondary expansion of static pattern rules are identical to those for explicit rules, above, with one exception: for static pattern rules the @code{$$*} variable is set to the pattern stem.  As with explicit rules, @code{$$?} is not available and expands to the empty string.

@subheading Secondary Expansion of Implicit Rules
@cindex secondary expansion and implicit rules
@cindex implicit rules, secondary expansion of

As @code{make} searches for an implicit rule, it substitutes the stem and then performs secondary expansion for every rule with a matching target pattern.  The value of the automatic variables is derived in the same fashion as for static pattern rules.  As an example:

@example
.SECONDEXPANSION:

foo: bar

foo foz: fo%: bo%

%oo: $$< $$^ $$+ $$*
@end example

When the implicit rule is tried for target @file{foo}, @code{$$<} expands to @file{bar}, @code{$$^} expands to @file{bar boo}, @code{$$+} also expands to @file{bar boo}, and @code{$$*} expands to @file{f}.

Note that the directory prefix (D), as described in @ref{Implicit Rule Search, ,Implicit Rule Search Algorithm}, is appended (after expansion) to all the patterns in the prerequisites list.  As an example:

@example
.SECONDEXPANSION:

/tmp/foo.o: %.o: $$(addsuffix /%.c,foo bar) foo.h
@end example

The prerequisite list after the secondary expansion and directory prefix reconstruction will be @file{/tmp/foo/foo.c /tmp/bar/foo.c foo.h}.  If you are not interested in this reconstruction, you can use @code{$$*} instead of @code{%} in the prerequisites list.

@node Rules, Commands, Makefiles, Top
@chapter Writing Rules
@cindex writing rules
@cindex rule, how to write
@cindex target
@cindex prerequisite

A @dfn{rule} appears in the makefile and says when and how to remake certain files, called the rule's @dfn{targets} (most often only one per rule).  It lists the other files that are the @dfn{prerequisites} of the target, and @dfn{commands} to use to create or update the target.

@cindex default goal @cindex goal, default The order of rules is not significant, except for determining the @dfn{default goal}: the target for @code{make} to consider, if you do not otherwise specify one. The default goal is the target of the first rule in the first makefile. If the first rule has multiple targets, only the first target is taken as the default. There are two exceptions: a target starting with a period is not a default unless it contains one or more slashes, @samp{/}, as well; and, a target that defines a pattern rule has no effect on the default goal. (@xref{Pattern Rules, ,Defining and Redefining Pattern Rules}.) Therefore, we usually write the makefile so that the first rule is the one for compiling the entire program or all the programs described by the makefile (often with a target called @samp{all}). @xref{Goals, ,Arguments to Specify the Goals}. @menu * Rule Example:: An example explained. * Rule Syntax:: General syntax explained. * Prerequisite Types:: There are two types of prerequisites. * Wildcards:: Using wildcard characters such as `*'. * Directory Search:: Searching other directories for source files. * Phony Targets:: Using a target that is not a real file's name. * Force Targets:: You can use a target without commands or prerequisites to mark other targets as phony. * Empty Targets:: When only the date matters and the files are empty. * Special Targets:: Targets with special built-in meanings. * Multiple Targets:: When to make use of several targets in a rule. * Multiple Rules:: How to use several rules with the same target. * Static Pattern:: Static pattern rules apply to multiple targets and can vary the prerequisites according to the target name. * Double-Colon:: How to use a special kind of rule to allow several independent rules for one target. * Automatic Prerequisites:: How to automatically generate rules giving prerequisites from source files themselves. @end menu @ifnottex @node Rule Example, Rule Syntax, Rules, Rules @section Rule Example Here is an example of a rule: @example foo.o : foo.c defs.h # module for twiddling the frobs cc -c -g foo.c @end example Its target is @file{foo.o} and its prerequisites are @file{foo.c} and @file{defs.h}. It has one command, which is @samp{cc -c -g foo.c}. The command line starts with a tab to identify it as a command. This rule says two things: @itemize @bullet @item How to decide whether @file{foo.o} is out of date: it is out of date if it does not exist, or if either @file{foo.c} or @file{defs.h} is more recent than it. @item How to update the file @file{foo.o}: by running @code{cc} as stated. The command does not explicitly mention @file{defs.h}, but we presume that @file{foo.c} includes it, and that that is why @file{defs.h} was added to the prerequisites. @end itemize @end ifnottex @node Rule Syntax, Prerequisite Types, Rule Example, Rules @section Rule Syntax @cindex rule syntax @cindex syntax of rules In general, a rule looks like this: @example @var{targets} : @var{prerequisites} @var{command} @dots{} @end example @noindent or like this: @example @var{targets} : @var{prerequisites} ; @var{command} @var{command} @dots{} @end example @cindex targets @cindex rule targets The @var{targets} are file names, separated by spaces. Wildcard characters may be used (@pxref{Wildcards, ,Using Wildcard Characters in File Names}) and a name of the form @file{@var{a}(@var{m})} represents member @var{m} in archive file @var{a} (@pxref{Archive Members, ,Archive Members as Targets}). 
Usually there is only one target per rule, but occasionally there is a reason to have more (@pxref{Multiple Targets, , Multiple Targets in a Rule}).@refill

@cindex commands
@cindex tab character (in commands)
The @var{command} lines start with a tab character.  The first command may appear on the line after the prerequisites, with a tab character, or may appear on the same line, with a semicolon.  Either way, the effect is the same.  There are other differences in the syntax of command lines.  @xref{Commands, ,Writing the Commands in Rules}.

@cindex dollar sign (@code{$}), in rules
@cindex @code{$}, in rules
@cindex rules, and @code{$}
Because dollar signs are used to start @code{make} variable references, if you really want a dollar sign in a target or prerequisite you must write two of them, @samp{$$} (@pxref{Using Variables, ,How to Use Variables}).  If you have enabled secondary expansion (@pxref{Secondary Expansion}) and you want a literal dollar sign in the prerequisites list, you must actually write @emph{four} dollar signs (@samp{$$$$}).

You may split a long line by inserting a backslash followed by a newline, but this is not required, as @code{make} places no limit on the length of a line in a makefile.

A rule tells @code{make} two things: when the targets are out of date, and how to update them when necessary.

@cindex prerequisites
@cindex rule prerequisites
The criterion for being out of date is specified in terms of the @var{prerequisites}, which consist of file names separated by spaces.  (Wildcards and archive members (@pxref{Archives}) are allowed here too.)  A target is out of date if it does not exist or if it is older than any of the prerequisites (by comparison of last-modification times).  The idea is that the contents of the target file are computed based on information in the prerequisites, so if any of the prerequisites changes, the contents of the existing target file are no longer necessarily valid.

How to update is specified by @var{commands}.  These are lines to be executed by the shell (normally @samp{sh}), but with some extra features (@pxref{Commands, ,Writing the Commands in Rules}).

@node Prerequisite Types, Wildcards, Rule Syntax, Rules
@comment node-name, next, previous, up
@section Types of Prerequisites
@cindex prerequisite types
@cindex types of prerequisites
@cindex prerequisites, normal
@cindex normal prerequisites
@cindex prerequisites, order-only
@cindex order-only prerequisites

There are actually two different types of prerequisites understood by GNU @code{make}: normal prerequisites such as described in the previous section, and @dfn{order-only} prerequisites.

A normal prerequisite makes two statements: first, it imposes an order of execution of build commands: any commands necessary to build any of a target's prerequisites will be fully executed before any commands necessary to build the target.  Second, it imposes a dependency relationship: if any prerequisite is newer than the target, then the target is considered out-of-date and must be rebuilt.

Normally, this is exactly what you want: if a target's prerequisite is updated, then the target should also be updated.

Occasionally, however, you have a situation where you want to impose a specific ordering on the rules to be invoked @emph{without} forcing the target to be updated if one of those rules is executed.  In that case, you want to define @dfn{order-only} prerequisites.

Order-only prerequisites can be specified by placing a pipe symbol (@code{|}) in the prerequisites list: any prerequisites to the left of the pipe symbol are normal; any prerequisites to the right are order-only: @example @var{targets} : @var{normal-prerequisites} | @var{order-only-prerequisites} @end example The normal prerequisites section may of course be empty. Also, you may still declare multiple lines of prerequisites for the same target: they are appended appropriately. Note that if you declare the same file to be both a normal and an order-only prerequisite, the normal prerequisite takes precedence (since they are a strict superset of the behavior of an order-only prerequisite). @node Wildcards, Directory Search, Prerequisite Types, Rules @section Using Wildcard Characters in File Names @cindex wildcard @cindex file name with wildcards @cindex globbing (wildcards) @cindex @code{*} (wildcard character) @cindex @code{?} (wildcard character) @cindex @code{[@dots{}]} (wildcard characters) A single file name can specify many files using @dfn{wildcard characters}. The wildcard characters in @code{make} are @samp{*}, @samp{?} and @samp{[@dots{}]}, the same as in the Bourne shell. For example, @file{*.c} specifies a list of all the files (in the working directory) whose names end in @samp{.c}.@refill @cindex @code{~} (tilde) @cindex tilde (@code{~}) @cindex home directory The character @samp{~} at the beginning of a file name also has special significance. If alone, or followed by a slash, it represents your home directory. For example @file{~/bin} expands to @file{/home/you/bin}. If the @samp{~} is followed by a word, the string represents the home directory of the user named by that word. For example @file{~john/bin} expands to @file{/home/john/bin}. On systems which don't have a home directory for each user (such as MS-DOS or MS-Windows), this functionality can be simulated by setting the environment variable @var{HOME}.@refill Wildcard expansion is performed by @code{make} automatically in targets and in prerequisites. In commands the shell is responsible for wildcard expansion. In other contexts, wildcard expansion happens only if you request it explicitly with the @code{wildcard} function. The special significance of a wildcard character can be turned off by preceding it with a backslash. Thus, @file{foo\*bar} would refer to a specific file whose name consists of @samp{foo}, an asterisk, and @samp{bar}.@refill @menu * Wildcard Examples:: Several examples * Wildcard Pitfall:: Problems to avoid. * Wildcard Function:: How to cause wildcard expansion where it does not normally take place. @end menu @node Wildcard Examples, Wildcard Pitfall, Wildcards, Wildcards @subsection Wildcard Examples Wildcards can be used in the commands of a rule, where they are expanded by the shell. For example, here is a rule to delete all the object files: @example @group clean: rm -f *.o @end group @end example @cindex @code{rm} (shell command) Wildcards are also useful in the prerequisites of a rule. With the following rule in the makefile, @samp{make print} will print all the @samp{.c} files that have changed since the last time you printed them: @example print: *.c lpr -p $? touch print @end example @cindex @code{print} target @cindex @code{lpr} (shell command) @cindex @code{touch} (shell command) @noindent This rule uses @file{print} as an empty target file; see @ref{Empty Targets, ,Empty Target Files to Record Events}. 
(The automatic variable @samp{$?} is used to print only those files that have changed; see @ref{Automatic Variables}.)@refill Wildcard expansion does not happen when you define a variable. Thus, if you write this: @example objects = *.o @end example @noindent then the value of the variable @code{objects} is the actual string @samp{*.o}. However, if you use the value of @code{objects} in a target, prerequisite or command, wildcard expansion will take place at that time. To set @code{objects} to the expansion, instead use: @example objects := $(wildcard *.o) @end example @noindent @xref{Wildcard Function}. @node Wildcard Pitfall, Wildcard Function, Wildcard Examples, Wildcards @subsection Pitfalls of Using Wildcards @cindex wildcard pitfalls @cindex pitfalls of wildcards @cindex mistakes with wildcards @cindex errors with wildcards @cindex problems with wildcards Now here is an example of a naive way of using wildcard expansion, that does not do what you would intend. Suppose you would like to say that the executable file @file{foo} is made from all the object files in the directory, and you write this: @example objects = *.o foo : $(objects) cc -o foo $(CFLAGS) $(objects) @end example @noindent The value of @code{objects} is the actual string @samp{*.o}. Wildcard expansion happens in the rule for @file{foo}, so that each @emph{existing} @samp{.o} file becomes a prerequisite of @file{foo} and will be recompiled if necessary. But what if you delete all the @samp{.o} files? When a wildcard matches no files, it is left as it is, so then @file{foo} will depend on the oddly-named file @file{*.o}. Since no such file is likely to exist, @code{make} will give you an error saying it cannot figure out how to make @file{*.o}. This is not what you want! Actually it is possible to obtain the desired result with wildcard expansion, but you need more sophisticated techniques, including the @code{wildcard} function and string substitution. @ifnottex @xref{Wildcard Function, ,The Function @code{wildcard}}. @end ifnottex @iftex These are described in the following section. @end iftex @cindex wildcards and MS-DOS/MS-Windows backslashes @cindex backslashes in pathnames and wildcard expansion Microsoft operating systems (MS-DOS and MS-Windows) use backslashes to separate directories in pathnames, like so: @example c:\foo\bar\baz.c @end example This is equivalent to the Unix-style @file{c:/foo/bar/baz.c} (the @file{c:} part is the so-called drive letter). When @code{make} runs on these systems, it supports backslashes as well as the Unix-style forward slashes in pathnames. However, this support does @emph{not} include the wildcard expansion, where backslash is a quote character. Therefore, you @emph{must} use Unix-style slashes in these cases. @node Wildcard Function, , Wildcard Pitfall, Wildcards @subsection The Function @code{wildcard} @findex wildcard Wildcard expansion happens automatically in rules. But wildcard expansion does not normally take place when a variable is set, or inside the arguments of a function. If you want to do wildcard expansion in such places, you need to use the @code{wildcard} function, like this: @example $(wildcard @var{pattern}@dots{}) @end example @noindent This string, used anywhere in a makefile, is replaced by a space-separated list of names of existing files that match one of the given file name patterns. If no existing file name matches a pattern, then that pattern is omitted from the output of the @code{wildcard} function. 
Note that this is different from how unmatched wildcards behave in rules, where they are used verbatim rather than ignored (@pxref{Wildcard Pitfall}). One use of the @code{wildcard} function is to get a list of all the C source files in a directory, like this: @example $(wildcard *.c) @end example We can change the list of C source files into a list of object files by replacing the @samp{.c} suffix with @samp{.o} in the result, like this: @example $(patsubst %.c,%.o,$(wildcard *.c)) @end example @noindent (Here we have used another function, @code{patsubst}. @xref{Text Functions, ,Functions for String Substitution and Analysis}.)@refill Thus, a makefile to compile all C source files in the directory and then link them together could be written as follows: @example objects := $(patsubst %.c,%.o,$(wildcard *.c)) foo : $(objects) cc -o foo $(objects) @end example @noindent (This takes advantage of the implicit rule for compiling C programs, so there is no need to write explicit rules for compiling the files. @xref{Flavors, ,The Two Flavors of Variables}, for an explanation of @samp{:=}, which is a variant of @samp{=}.) @node Directory Search, Phony Targets, Wildcards, Rules @section Searching Directories for Prerequisites @vindex VPATH @findex vpath @cindex vpath @cindex search path for prerequisites (@code{VPATH}) @cindex directory search (@code{VPATH}) For large systems, it is often desirable to put sources in a separate directory from the binaries. The @dfn{directory search} features of @code{make} facilitate this by searching several directories automatically to find a prerequisite. When you redistribute the files among directories, you do not need to change the individual rules, just the search paths. @menu * General Search:: Specifying a search path that applies to every prerequisite. * Selective Search:: Specifying a search path for a specified class of names. * Search Algorithm:: When and how search paths are applied. * Commands/Search:: How to write shell commands that work together with search paths. * Implicit/Search:: How search paths affect implicit rules. * Libraries/Search:: Directory search for link libraries. @end menu @node General Search, Selective Search, Directory Search, Directory Search @subsection @code{VPATH}: Search Path for All Prerequisites @vindex VPATH The value of the @code{make} variable @code{VPATH} specifies a list of directories that @code{make} should search. Most often, the directories are expected to contain prerequisite files that are not in the current directory; however, @code{make} uses @code{VPATH} as a search list for both prerequisites and targets of rules. Thus, if a file that is listed as a target or prerequisite does not exist in the current directory, @code{make} searches the directories listed in @code{VPATH} for a file with that name. If a file is found in one of them, that file may become the prerequisite (see below). Rules may then specify the names of files in the prerequisite list as if they all existed in the current directory. @xref{Commands/Search, ,Writing Shell Commands with Directory Search}. In the @code{VPATH} variable, directory names are separated by colons or blanks. The order in which directories are listed is the order followed by @code{make} in its search. (On MS-DOS and MS-Windows, semi-colons are used as separators of directory names in @code{VPATH}, since the colon can be used in the pathname itself, after the drive letter.) 
For example, @example VPATH = src:../headers @end example @noindent specifies a path containing two directories, @file{src} and @file{../headers}, which @code{make} searches in that order. With this value of @code{VPATH}, the following rule, @example foo.o : foo.c @end example @noindent is interpreted as if it were written like this: @example foo.o : src/foo.c @end example @noindent assuming the file @file{foo.c} does not exist in the current directory but is found in the directory @file{src}. @node Selective Search, Search Algorithm, General Search, Directory Search @subsection The @code{vpath} Directive @findex vpath Similar to the @code{VPATH} variable, but more selective, is the @code{vpath} directive (note lower case), which allows you to specify a search path for a particular class of file names: those that match a particular pattern. Thus you can supply certain search directories for one class of file names and other directories (or none) for other file names. There are three forms of the @code{vpath} directive: @table @code @item vpath @var{pattern} @var{directories} Specify the search path @var{directories} for file names that match @var{pattern}. The search path, @var{directories}, is a list of directories to be searched, separated by colons (semi-colons on MS-DOS and MS-Windows) or blanks, just like the search path used in the @code{VPATH} variable. @item vpath @var{pattern} Clear out the search path associated with @var{pattern}. @c Extra blank line makes sure this gets two lines. @item vpath Clear all search paths previously specified with @code{vpath} directives. @end table A @code{vpath} pattern is a string containing a @samp{%} character. The string must match the file name of a prerequisite that is being searched for, the @samp{%} character matching any sequence of zero or more characters (as in pattern rules; @pxref{Pattern Rules, ,Defining and Redefining Pattern Rules}). For example, @code{%.h} matches files that end in @code{.h}. (If there is no @samp{%}, the pattern must match the prerequisite exactly, which is not useful very often.) @cindex @code{%}, quoting in @code{vpath} @cindex @code{%}, quoting with @code{\} (backslash) @cindex @code{\} (backslash), to quote @code{%} @cindex backslash (@code{\}), to quote @code{%} @cindex quoting @code{%}, in @code{vpath} @samp{%} characters in a @code{vpath} directive's pattern can be quoted with preceding backslashes (@samp{\}). Backslashes that would otherwise quote @samp{%} characters can be quoted with more backslashes. Backslashes that quote @samp{%} characters or other backslashes are removed from the pattern before it is compared to file names. Backslashes that are not in danger of quoting @samp{%} characters go unmolested.@refill When a prerequisite fails to exist in the current directory, if the @var{pattern} in a @code{vpath} directive matches the name of the prerequisite file, then the @var{directories} in that directive are searched just like (and before) the directories in the @code{VPATH} variable. For example, @example vpath %.h ../headers @end example @noindent tells @code{make} to look for any prerequisite whose name ends in @file{.h} in the directory @file{../headers} if the file is not found in the current directory. If several @code{vpath} patterns match the prerequisite file's name, then @code{make} processes each matching @code{vpath} directive one by one, searching all the directories mentioned in each directive. 
@code{make} handles multiple @code{vpath} directives in the order in which they appear in the makefile; multiple directives with the same pattern are independent of each other. @need 750 Thus, @example @group vpath %.c foo vpath % blish vpath %.c bar @end group @end example @noindent will look for a file ending in @samp{.c} in @file{foo}, then @file{blish}, then @file{bar}, while @example @group vpath %.c foo:bar vpath % blish @end group @end example @noindent will look for a file ending in @samp{.c} in @file{foo}, then @file{bar}, then @file{blish}. @node Search Algorithm, Commands/Search, Selective Search, Directory Search @subsection How Directory Searches are Performed @cindex algorithm for directory search @cindex directory search algorithm When a prerequisite is found through directory search, regardless of type (general or selective), the pathname located may not be the one that @code{make} actually provides you in the prerequisite list. Sometimes the path discovered through directory search is thrown away. The algorithm @code{make} uses to decide whether to keep or abandon a path found via directory search is as follows: @enumerate @item If a target file does not exist at the path specified in the makefile, directory search is performed. @item If the directory search is successful, that path is kept and this file is tentatively stored as the target. @item All prerequisites of this target are examined using this same method. @item After processing the prerequisites, the target may or may not need to be rebuilt: @enumerate a @item If the target does @emph{not} need to be rebuilt, the path to the file found during directory search is used for any prerequisite lists which contain this target. In short, if @code{make} doesn't need to rebuild the target then you use the path found via directory search. @item If the target @emph{does} need to be rebuilt (is out-of-date), the pathname found during directory search is @emph{thrown away}, and the target is rebuilt using the file name specified in the makefile. In short, if @code{make} must rebuild, then the target is rebuilt locally, not in the directory found via directory search. @end enumerate @end enumerate This algorithm may seem complex, but in practice it is quite often exactly what you want. @cindex traditional directory search (GPATH) @cindex directory search, traditional (GPATH) Other versions of @code{make} use a simpler algorithm: if the file does not exist, and it is found via directory search, then that pathname is always used whether or not the target needs to be built. Thus, if the target is rebuilt it is created at the pathname discovered during directory search. @vindex GPATH If, in fact, this is the behavior you want for some or all of your directories, you can use the @code{GPATH} variable to indicate this to @code{make}. @code{GPATH} has the same syntax and format as @code{VPATH} (that is, a space- or colon-delimited list of pathnames). If an out-of-date target is found by directory search in a directory that also appears in @code{GPATH}, then that pathname is not thrown away. The target is rebuilt using the expanded path. @node Commands/Search, Implicit/Search, Search Algorithm, Directory Search @subsection Writing Shell Commands with Directory Search @cindex shell command, and directory search @cindex directory search (@code{VPATH}), and shell commands When a prerequisite is found in another directory through directory search, this cannot change the commands of the rule; they will execute as written. 
Therefore, you must write the commands with care so that they will look for the prerequisite in the directory where @code{make} finds it. This is done with the @dfn{automatic variables} such as @samp{$^} (@pxref{Automatic Variables}). For instance, the value of @samp{$^} is a list of all the prerequisites of the rule, including the names of the directories in which they were found, and the value of @samp{$@@} is the target. Thus:@refill @example foo.o : foo.c cc -c $(CFLAGS) $^ -o $@@ @end example @noindent (The variable @code{CFLAGS} exists so you can specify flags for C compilation by implicit rules; we use it here for consistency so it will affect all C compilations uniformly; @pxref{Implicit Variables, ,Variables Used by Implicit Rules}.) Often the prerequisites include header files as well, which you do not want to mention in the commands. The automatic variable @samp{$<} is just the first prerequisite: @example VPATH = src:../headers foo.o : foo.c defs.h hack.h cc -c $(CFLAGS) $< -o $@@ @end example @node Implicit/Search, Libraries/Search, Commands/Search, Directory Search @subsection Directory Search and Implicit Rules @cindex @code{VPATH}, and implicit rules @cindex directory search (@code{VPATH}), and implicit rules @cindex search path for prerequisites (@code{VPATH}), and implicit rules @cindex implicit rule, and directory search @cindex implicit rule, and @code{VPATH} @cindex rule, implicit, and directory search @cindex rule, implicit, and @code{VPATH} The search through the directories specified in @code{VPATH} or with @code{vpath} also happens during consideration of implicit rules (@pxref{Implicit Rules, ,Using Implicit Rules}). For example, when a file @file{foo.o} has no explicit rule, @code{make} considers implicit rules, such as the built-in rule to compile @file{foo.c} if that file exists. If such a file is lacking in the current directory, the appropriate directories are searched for it. If @file{foo.c} exists (or is mentioned in the makefile) in any of the directories, the implicit rule for C compilation is applied. The commands of implicit rules normally use automatic variables as a matter of necessity; consequently they will use the file names found by directory search with no extra effort. @node Libraries/Search, , Implicit/Search, Directory Search @subsection Directory Search for Link Libraries @cindex link libraries, and directory search @cindex libraries for linking, directory search @cindex directory search (@code{VPATH}), and link libraries @cindex @code{VPATH}, and link libraries @cindex search path for prerequisites (@code{VPATH}), and link libraries @cindex @code{-l} (library search) @cindex link libraries, patterns matching @cindex @code{.LIBPATTERNS}, and link libraries @vindex .LIBPATTERNS Directory search applies in a special way to libraries used with the linker. This special feature comes into play when you write a prerequisite whose name is of the form @samp{-l@var{name}}. 
(You can tell something strange is going on here because the prerequisite is normally the name of a file, and the @emph{file name} of a library generally looks like @file{lib@var{name}.a}, not like @samp{-l@var{name}}.)@refill When a prerequisite's name has the form @samp{-l@var{name}}, @code{make} handles it specially by searching for the file @file{lib@var{name}.so} in the current directory, in directories specified by matching @code{vpath} search paths and the @code{VPATH} search path, and then in the directories @file{/lib}, @file{/usr/lib}, and @file{@var{prefix}/lib} (normally @file{/usr/local/lib}, but MS-DOS/MS-Windows versions of @code{make} behave as if @var{prefix} is defined to be the root of the DJGPP installation tree). If that file is not found, then the file @file{lib@var{name}.a} is searched for, in the same directories as above. For example, if there is a @file{/usr/lib/libcurses.a} library on your system (and no @file{/usr/lib/libcurses.so} file), then @example @group foo : foo.c -lcurses cc $^ -o $@@ @end group @end example @noindent would cause the command @samp{cc foo.c /usr/lib/libcurses.a -o foo} to be executed when @file{foo} is older than @file{foo.c} or than @file{/usr/lib/libcurses.a}.@refill Although the default set of files to be searched for is @file{lib@var{name}.so} and @file{lib@var{name}.a}, this is customizable via the @code{.LIBPATTERNS} variable. Each word in the value of this variable is a pattern string. When a prerequisite like @samp{-l@var{name}} is seen, @code{make} will replace the percent in each pattern in the list with @var{name} and perform the above directory searches using that library filename. If no library is found, the next word in the list will be used. The default value for @code{.LIBPATTERNS} is @samp{lib%.so lib%.a}, which provides the default behavior described above. You can turn off link library expansion completely by setting this variable to an empty value. @node Phony Targets, Force Targets, Directory Search, Rules @section Phony Targets @cindex phony targets @cindex targets, phony @cindex targets without a file A phony target is one that is not really the name of a file. It is just a name for some commands to be executed when you make an explicit request. There are two reasons to use a phony target: to avoid a conflict with a file of the same name, and to improve performance. If you write a rule whose commands will not create the target file, the commands will be executed every time the target comes up for remaking. Here is an example: @example @group clean: rm *.o temp @end group @end example @noindent Because the @code{rm} command does not create a file named @file{clean}, probably no such file will ever exist. Therefore, the @code{rm} command will be executed every time you say @samp{make clean}. @cindex @code{rm} (shell command) @findex .PHONY The phony target will cease to work if anything ever does create a file named @file{clean} in this directory. Since it has no prerequisites, the file @file{clean} would inevitably be considered up to date, and its commands would not be executed. To avoid this problem, you can explicitly declare the target to be phony, using the special target @code{.PHONY} (@pxref{Special Targets, ,Special Built-in Target Names}) as follows: @example .PHONY : clean @end example @noindent Once this is done, @samp{make clean} will run the commands regardless of whether there is a file named @file{clean}. 
Since it knows that phony targets do not name actual files that could be remade from other files, @code{make} skips the implicit rule search for phony targets (@pxref{Implicit Rules}). This is why declaring a target phony is good for performance, even if you are not worried about the actual file existing. Thus, you first write the line that states that @code{clean} is a phony target, then you write the rule, like this: @example @group .PHONY: clean clean: rm *.o temp @end group @end example Another example of the usefulness of phony targets is in conjunction with recursive invocations of @code{make} (for more information, see @ref{Recursion, ,Recursive Use of @code{make}}). In this case the makefile will often contain a variable which lists a number of subdirectories to be built. One way to handle this is with one rule whose command is a shell loop over the subdirectories, like this: @example @group SUBDIRS = foo bar baz subdirs: for dir in $(SUBDIRS); do \ $(MAKE) -C $$dir; \ done @end group @end example There are a few problems with this method, however. First, any error detected in a submake is not noted by this rule, so it will continue to build the rest of the directories even when one fails. This can be overcome by adding shell commands to note the error and exit, but then it will do so even if @code{make} is invoked with the @code{-k} option, which is unfortunate. Second, and perhaps more importantly, you cannot take advantage of @code{make}'s ability to build targets in parallel (@pxref{Parallel, ,Parallel Execution}), since there is only one rule. By declaring the subdirectories as phony targets (you must do this as the subdirectory obviously always exists; otherwise it won't be built) you can remove these problems: @example @group SUBDIRS = foo bar baz .PHONY: subdirs $(SUBDIRS) subdirs: $(SUBDIRS) $(SUBDIRS): $(MAKE) -C $@@ foo: baz @end group @end example Here we've also declared that the @file{foo} subdirectory cannot be built until after the @file{baz} subdirectory is complete; this kind of relationship declaration is particularly important when attempting parallel builds. A phony target should not be a prerequisite of a real target file; if it is, its commands are run every time @code{make} goes to update that file. As long as a phony target is never a prerequisite of a real target, the phony target commands will be executed only when the phony target is a specified goal (@pxref{Goals, ,Arguments to Specify the Goals}). Phony targets can have prerequisites. When one directory contains multiple programs, it is most convenient to describe all of the programs in one makefile @file{./Makefile}. Since the target remade by default will be the first one in the makefile, it is common to make this a phony target named @samp{all} and give it, as prerequisites, all the individual programs. For example: @example all : prog1 prog2 prog3 .PHONY : all prog1 : prog1.o utils.o cc -o prog1 prog1.o utils.o prog2 : prog2.o cc -o prog2 prog2.o prog3 : prog3.o sort.o utils.o cc -o prog3 prog3.o sort.o utils.o @end example @noindent Now you can say just @samp{make} to remake all three programs, or specify as arguments the ones to remake (as in @samp{make prog1 prog3}). Phoniness is not inherited: the prerequisites of a phony target are not themselves phony, unless explicitly declared to be so. When one phony target is a prerequisite of another, it serves as a subroutine of the other. 
For example, here @samp{make cleanall} will delete the object files, the difference files, and the file @file{program}: @example .PHONY: cleanall cleanobj cleandiff cleanall : cleanobj cleandiff rm program cleanobj : rm *.o cleandiff : rm *.diff @end example @node Force Targets, Empty Targets, Phony Targets, Rules @section Rules without Commands or Prerequisites @cindex force targets @cindex targets, force @cindex @code{FORCE} @cindex rule, no commands or prerequisites If a rule has no prerequisites or commands, and the target of the rule is a nonexistent file, then @code{make} imagines this target to have been updated whenever its rule is run. This implies that all targets depending on this one will always have their commands run. An example will illustrate this: @example @group clean: FORCE rm $(objects) FORCE: @end group @end example Here the target @samp{FORCE} satisfies the special conditions, so the target @file{clean} that depends on it is forced to run its commands. There is nothing special about the name @samp{FORCE}, but that is one name commonly used this way. As you can see, using @samp{FORCE} this way has the same results as using @samp{.PHONY: clean}. Using @samp{.PHONY} is more explicit and more efficient. However, other versions of @code{make} do not support @samp{.PHONY}; thus @samp{FORCE} appears in many makefiles. @xref{Phony Targets}. @node Empty Targets, Special Targets, Force Targets, Rules @section Empty Target Files to Record Events @cindex empty targets @cindex targets, empty @cindex recording events with empty targets The @dfn{empty target} is a variant of the phony target; it is used to hold commands for an action that you request explicitly from time to time. Unlike a phony target, this target file can really exist; but the file's contents do not matter, and usually are empty. The purpose of the empty target file is to record, with its last-modification time, when the rule's commands were last executed. It does so because one of the commands is a @code{touch} command to update the target file. The empty target file should have some prerequisites (otherwise it doesn't make sense). When you ask to remake the empty target, the commands are executed if any prerequisite is more recent than the target; in other words, if a prerequisite has changed since the last time you remade the target. Here is an example: @example print: foo.c bar.c lpr -p $? touch print @end example @cindex @code{print} target @cindex @code{lpr} (shell command) @cindex @code{touch} (shell command) @noindent With this rule, @samp{make print} will execute the @code{lpr} command if either source file has changed since the last @samp{make print}. The automatic variable @samp{$?} is used to print only those files that have changed (@pxref{Automatic Variables}). @node Special Targets, Multiple Targets, Empty Targets, Rules @section Special Built-in Target Names @cindex special targets @cindex built-in special targets @cindex targets, built-in special Certain names have special meanings if they appear as targets. @table @code @findex .PHONY @item .PHONY The prerequisites of the special target @code{.PHONY} are considered to be phony targets. When it is time to consider such a target, @code{make} will run its commands unconditionally, regardless of whether a file with that name exists or what its last-modification time is. @xref{Phony Targets, ,Phony Targets}. 
@findex .SUFFIXES @item .SUFFIXES The prerequisites of the special target @code{.SUFFIXES} are the list of suffixes to be used in checking for suffix rules. @xref{Suffix Rules, , Old-Fashioned Suffix Rules}. @findex .DEFAULT @item .DEFAULT The commands specified for @code{.DEFAULT} are used for any target for which no rules are found (either explicit rules or implicit rules). @xref{Last Resort}. If @code{.DEFAULT} commands are specified, every file mentioned as a prerequisite, but not as a target in a rule, will have these commands executed on its behalf. @xref{Implicit Rule Search, ,Implicit Rule Search Algorithm}. @findex .PRECIOUS @item .PRECIOUS @cindex precious targets @cindex preserving with @code{.PRECIOUS} The targets which @code{.PRECIOUS} depends on are given the following special treatment: if @code{make} is killed or interrupted during the execution of their commands, the target is not deleted. @xref{Interrupts, ,Interrupting or Killing @code{make}}. Also, if the target is an intermediate file, it will not be deleted after it is no longer needed, as is normally done. @xref{Chained Rules, ,Chains of Implicit Rules}. In this latter respect it overlaps with the @code{.SECONDARY} special target. You can also list the target pattern of an implicit rule (such as @samp{%.o}) as a prerequisite file of the special target @code{.PRECIOUS} to preserve intermediate files created by rules whose target patterns match that file's name. @findex .INTERMEDIATE @item .INTERMEDIATE @cindex intermediate targets, explicit The targets which @code{.INTERMEDIATE} depends on are treated as intermediate files. @xref{Chained Rules, ,Chains of Implicit Rules}. @code{.INTERMEDIATE} with no prerequisites has no effect. @findex .SECONDARY @item .SECONDARY @cindex secondary targets @cindex preserving with @code{.SECONDARY} The targets which @code{.SECONDARY} depends on are treated as intermediate files, except that they are never automatically deleted. @xref{Chained Rules, ,Chains of Implicit Rules}. @code{.SECONDARY} with no prerequisites causes all targets to be treated as secondary (i.e., no target is removed because it is considered intermediate). @findex .SECONDEXPANSION @item .SECONDEXPANSION If @code{.SECONDEXPANSION} is mentioned as a target anywhere in the makefile, then all prerequisite lists defined @emph{after} it appears will be expanded a second time after all makefiles have been read in. @xref{Secondary Expansion, ,Secondary Expansion}. @findex .DELETE_ON_ERROR @item .DELETE_ON_ERROR @cindex removing targets on failure If @code{.DELETE_ON_ERROR} is mentioned as a target anywhere in the makefile, then @code{make} will delete the target of a rule if it has changed and its commands exit with a nonzero exit status, just as it does when it receives a signal. @xref{Errors, ,Errors in Commands}. @findex .IGNORE @item .IGNORE If you specify prerequisites for @code{.IGNORE}, then @code{make} will ignore errors in execution of the commands run for those particular files. The commands for @code{.IGNORE} are not meaningful. If mentioned as a target with no prerequisites, @code{.IGNORE} says to ignore errors in execution of commands for all files. This usage of @samp{.IGNORE} is supported only for historical compatibility.
Since this affects every command in the makefile, it is not very useful; we recommend you use the more selective ways to ignore errors in specific commands. @xref{Errors, ,Errors in Commands}. @findex .LOW_RESOLUTION_TIME @item .LOW_RESOLUTION_TIME If you specify prerequisites for @code{.LOW_RESOLUTION_TIME}, @command{make} assumes that these files are created by commands that generate low resolution time stamps. The commands for @code{.LOW_RESOLUTION_TIME} are not meaningful. The high resolution file time stamps of many modern hosts lessen the chance of @command{make} incorrectly concluding that a file is up to date. Unfortunately, these hosts provide no way to set a high resolution file time stamp, so commands like @samp{cp -p} that explicitly set a file's time stamp must discard its subsecond part. If a file is created by such a command, you should list it as a prerequisite of @code{.LOW_RESOLUTION_TIME} so that @command{make} does not mistakenly conclude that the file is out of date. For example: @example @group .LOW_RESOLUTION_TIME: dst dst: src cp -p src dst @end group @end example Since @samp{cp -p} discards the subsecond part of @file{src}'s time stamp, @file{dst} is typically slightly older than @file{src} even when it is up to date. The @code{.LOW_RESOLUTION_TIME} line causes @command{make} to consider @file{dst} to be up to date if its time stamp is at the start of the same second that @file{src}'s time stamp is in. Due to a limitation of the archive format, archive member time stamps are always low resolution. You need not list archive members as prerequisites of @code{.LOW_RESOLUTION_TIME}, as @command{make} does this automatically. @findex .SILENT @item .SILENT If you specify prerequisites for @code{.SILENT}, then @code{make} will not print the commands to remake those particular files before executing them. The commands for @code{.SILENT} are not meaningful. If mentioned as a target with no prerequisites, @code{.SILENT} says not to print any commands before executing them. This usage of @samp{.SILENT} is supported only for historical compatibility. We recommend you use the more selective ways to silence specific commands. @xref{Echoing, ,Command Echoing}. If you want to silence all commands for a particular run of @code{make}, use the @samp{-s} or @w{@samp{--silent}} option (@pxref{Options Summary}). @findex .EXPORT_ALL_VARIABLES @item .EXPORT_ALL_VARIABLES Simply by being mentioned as a target, this tells @code{make} to export all variables to child processes by default. @xref{Variables/Recursion, ,Communicating Variables to a Sub-@code{make}}. @findex .NOTPARALLEL @item .NOTPARALLEL @cindex parallel execution, overriding If @code{.NOTPARALLEL} is mentioned as a target, then this invocation of @code{make} will be run serially, even if the @samp{-j} option is given. Any recursively invoked @code{make} command will still be run in parallel (unless its makefile contains this target). Any prerequisites on this target are ignored. @end table Any defined implicit rule suffix also counts as a special target if it appears as a target, and so does the concatenation of two suffixes, such as @samp{.c.o}. These targets are suffix rules, an obsolete way of defining implicit rules (but a way still widely used). In principle, any target name could be special in this way if you break it in two and add both pieces to the suffix list. In practice, suffixes normally begin with @samp{.}, so these special target names also begin with @samp{.}. @xref{Suffix Rules, ,Old-Fashioned Suffix Rules}. 
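For instance, a minimal sketch of an old-fashioned suffix rule for compiling C files (redundant with the built-in rules, and shown only to illustrate how these special target names look) might be:

@example
@group
.SUFFIXES: .c .o
.c.o:
        $(CC) -c $(CFLAGS) $< -o $@@
@end group
@end example

@noindent
Here @samp{.c.o} is not a file name but the concatenation of two suffixes, and the rule tells @code{make} how to produce a @file{.o} file from the corresponding @file{.c} file.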
@node Multiple Targets, Multiple Rules, Special Targets, Rules @section Multiple Targets in a Rule @cindex multiple targets @cindex several targets in a rule @cindex targets, multiple @cindex rule, with multiple targets A rule with multiple targets is equivalent to writing many rules, each with one target, and all identical aside from that. The same commands apply to all the targets, but their effects may vary because you can substitute the actual target name into the command using @samp{$@@}. The rule contributes the same prerequisites to all the targets also. This is useful in two cases. @itemize @bullet @item You want just prerequisites, no commands. For example: @example kbd.o command.o files.o: command.h @end example @noindent gives an additional prerequisite to each of the three object files mentioned. @item Similar commands work for all the targets. The commands do not need to be absolutely identical, since the automatic variable @samp{$@@} can be used to substitute the particular target to be remade into the commands (@pxref{Automatic Variables}). For example: @example @group bigoutput littleoutput : text.g generate text.g -$(subst output,,$@@) > $@@ @end group @end example @findex subst @noindent is equivalent to @example bigoutput : text.g generate text.g -big > bigoutput littleoutput : text.g generate text.g -little > littleoutput @end example @noindent Here we assume the hypothetical program @code{generate} makes two types of output, one if given @samp{-big} and one if given @samp{-little}. @xref{Text Functions, ,Functions for String Substitution and Analysis}, for an explanation of the @code{subst} function. @end itemize Suppose you would like to vary the prerequisites according to the target, much as the variable @samp{$@@} allows you to vary the commands. You cannot do this with multiple targets in an ordinary rule, but you can do it with a @dfn{static pattern rule}. @xref{Static Pattern, ,Static Pattern Rules}. @node Multiple Rules, Static Pattern, Multiple Targets, Rules @section Multiple Rules for One Target @cindex multiple rules for one target @cindex several rules for one target @cindex rule, multiple for one target @cindex target, multiple rules for one One file can be the target of several rules. All the prerequisites mentioned in all the rules are merged into one list of prerequisites for the target. If the target is older than any prerequisite from any rule, the commands are executed. There can only be one set of commands to be executed for a file. If more than one rule gives commands for the same file, @code{make} uses the last set given and prints an error message. (As a special case, if the file's name begins with a dot, no error message is printed. This odd behavior is only for compatibility with other implementations of @code{make}... you should avoid using it). Occasionally it is useful to have the same target invoke multiple commands which are defined in different parts of your makefile; you can use @dfn{double-colon rules} (@pxref{Double-Colon}) for this. An extra rule with just prerequisites can be used to give a few extra prerequisites to many files at once. For example, makefiles often have a variable, such as @code{objects}, containing a list of all the compiler output files in the system being made. 
An easy way to say that all of them must be recompiled if @file{config.h} changes is to write the following: @example objects = foo.o bar.o foo.o : defs.h bar.o : defs.h test.h $(objects) : config.h @end example This could be inserted or taken out without changing the rules that really specify how to make the object files, making it a convenient form to use if you wish to add the additional prerequisite intermittently. Another wrinkle is that the additional prerequisites could be specified with a variable that you set with a command argument to @code{make} (@pxref{Overriding, ,Overriding Variables}). For example, @example @group extradeps= $(objects) : $(extradeps) @end group @end example @noindent means that the command @samp{make extradeps=foo.h} will consider @file{foo.h} as a prerequisite of each object file, but plain @samp{make} will not. If none of the explicit rules for a target has commands, then @code{make} searches for an applicable implicit rule to find some commands (@pxref{Implicit Rules, ,Using Implicit Rules}). @node Static Pattern, Double-Colon, Multiple Rules, Rules @section Static Pattern Rules @cindex static pattern rule @cindex rule, static pattern @cindex pattern rules, static (not implicit) @cindex varying prerequisites @cindex prerequisites, varying (static pattern) @dfn{Static pattern rules} are rules which specify multiple targets and construct the prerequisite names for each target based on the target name. They are more general than ordinary rules with multiple targets because the targets do not have to have identical prerequisites. Their prerequisites must be @emph{analogous}, but not necessarily @emph{identical}. @menu * Static Usage:: The syntax of static pattern rules. * Static versus Implicit:: When are they better than implicit rules? @end menu @node Static Usage, Static versus Implicit, Static Pattern, Static Pattern @subsection Syntax of Static Pattern Rules @cindex static pattern rule, syntax of @cindex pattern rules, static, syntax of Here is the syntax of a static pattern rule: @example @var{targets} @dots{}: @var{target-pattern}: @var{prereq-patterns} @dots{} @var{commands} @dots{} @end example @noindent The @var{targets} list specifies the targets that the rule applies to. The targets can contain wildcard characters, just like the targets of ordinary rules (@pxref{Wildcards, ,Using Wildcard Characters in File Names}). @cindex target pattern, static (not implicit) @cindex stem The @var{target-pattern} and @var{prereq-patterns} say how to compute the prerequisites of each target. Each target is matched against the @var{target-pattern} to extract a part of the target name, called the @dfn{stem}. This stem is substituted into each of the @var{prereq-patterns} to make the prerequisite names (one from each @var{prereq-pattern}). Each pattern normally contains the character @samp{%} just once. When the @var{target-pattern} matches a target, the @samp{%} can match any part of the target name; this part is called the @dfn{stem}. The rest of the pattern must match exactly. For example, the target @file{foo.o} matches the pattern @samp{%.o}, with @samp{foo} as the stem. The targets @file{foo.c} and @file{foo.out} do not match that pattern.@refill @cindex prerequisite pattern, static (not implicit) The prerequisite names for each target are made by substituting the stem for the @samp{%} in each prerequisite pattern. For example, if one prerequisite pattern is @file{%.c}, then substitution of the stem @samp{foo} gives the prerequisite name @file{foo.c}.
It is legitimate to write a prerequisite pattern that does not contain @samp{%}; then this prerequisite is the same for all targets. @cindex @code{%}, quoting in static pattern @cindex @code{%}, quoting with @code{\} (backslash) @cindex @code{\} (backslash), to quote @code{%} @cindex backslash (@code{\}), to quote @code{%} @cindex quoting @code{%}, in static pattern @samp{%} characters in pattern rules can be quoted with preceding backslashes (@samp{\}). Backslashes that would otherwise quote @samp{%} characters can be quoted with more backslashes. Backslashes that quote @samp{%} characters or other backslashes are removed from the pattern before it is compared to file names or has a stem substituted into it. Backslashes that are not in danger of quoting @samp{%} characters go unmolested. For example, the pattern @file{the\%weird\\%pattern\\} has @samp{the%weird\} preceding the operative @samp{%} character, and @samp{pattern\\} following it. The final two backslashes are left alone because they cannot affect any @samp{%} character.@refill Here is an example, which compiles each of @file{foo.o} and @file{bar.o} from the corresponding @file{.c} file: @example @group objects = foo.o bar.o all: $(objects) $(objects): %.o: %.c $(CC) -c $(CFLAGS) $< -o $@@ @end group @end example @noindent Here @samp{$<} is the automatic variable that holds the name of the prerequisite and @samp{$@@} is the automatic variable that holds the name of the target; see @ref{Automatic Variables}. Each target specified must match the target pattern; a warning is issued for each target that does not. If you have a list of files, only some of which will match the pattern, you can use the @code{filter} function to remove nonmatching file names (@pxref{Text Functions, ,Functions for String Substitution and Analysis}): @example files = foo.elc bar.o lose.o $(filter %.o,$(files)): %.o: %.c $(CC) -c $(CFLAGS) $< -o $@@ $(filter %.elc,$(files)): %.elc: %.el emacs -f batch-byte-compile $< @end example @noindent In this example the result of @samp{$(filter %.o,$(files))} is @file{bar.o lose.o}, and the first static pattern rule causes each of these object files to be updated by compiling the corresponding C source file. The result of @w{@samp{$(filter %.elc,$(files))}} is @file{foo.elc}, so that file is made from @file{foo.el}.@refill Another example shows how to use @code{$*} in static pattern rules: @vindex $*@r{, and static pattern} @example @group bigoutput littleoutput : %output : text.g generate text.g -$* > $@@ @end group @end example @noindent When the @code{generate} command is run, @code{$*} will expand to the stem, either @samp{big} or @samp{little}. @node Static versus Implicit, , Static Usage, Static Pattern @subsection Static Pattern Rules versus Implicit Rules @cindex rule, static pattern versus implicit @cindex static pattern rule, versus implicit A static pattern rule has much in common with an implicit rule defined as a pattern rule (@pxref{Pattern Rules, ,Defining and Redefining Pattern Rules}). Both have a pattern for the target and patterns for constructing the names of prerequisites. The difference is in how @code{make} decides @emph{when} the rule applies. An implicit rule @emph{can} apply to any target that matches its pattern, but it @emph{does} apply only when the target has no commands otherwise specified, and only when the prerequisites can be found. If more than one implicit rule appears applicable, only one applies; the choice depends on the order of rules. 
By contrast, a static pattern rule applies to the precise list of targets that you specify in the rule. It cannot apply to any other target and it invariably does apply to each of the targets specified. If two conflicting rules apply, and both have commands, that's an error. The static pattern rule can be better than an implicit rule for these reasons: @itemize @bullet @item You may wish to override the usual implicit rule for a few files whose names cannot be categorized syntactically but can be given in an explicit list. @item If you cannot be sure of the precise contents of the directories you are using, you may not be sure which other irrelevant files might lead @code{make} to use the wrong implicit rule. The choice might depend on the order in which the implicit rule search is done. With static pattern rules, there is no uncertainty: each rule applies to precisely the targets specified. @end itemize @node Double-Colon, Automatic Prerequisites, Static Pattern, Rules @section Double-Colon Rules @cindex double-colon rules @cindex rule, double-colon (@code{::}) @cindex multiple rules for one target (@code{::}) @cindex @code{::} rules (double-colon) @dfn{Double-colon} rules are rules written with @samp{::} instead of @samp{:} after the target names. They are handled differently from ordinary rules when the same target appears in more than one rule. When a target appears in multiple rules, all the rules must be the same type: all ordinary, or all double-colon. If they are double-colon, each of them is independent of the others. Each double-colon rule's commands are executed if the target is older than any prerequisites of that rule. If there are no prerequisites for that rule, its commands are always executed (even if the target already exists). This can result in executing none, any, or all of the double-colon rules. Double-colon rules with the same target are in fact completely separate from one another. Each double-colon rule is processed individually, just as rules with different targets are processed. The double-colon rules for a target are executed in the order they appear in the makefile. However, the cases where double-colon rules really make sense are those where the order of executing the commands would not matter. Double-colon rules are somewhat obscure and not often very useful; they provide a mechanism for cases in which the method used to update a target differs depending on which prerequisite files caused the update, and such cases are rare. Each double-colon rule should specify commands; if it does not, an implicit rule will be used if one applies. @xref{Implicit Rules, ,Using Implicit Rules}. @node Automatic Prerequisites, , Double-Colon, Rules @section Generating Prerequisites Automatically @cindex prerequisites, automatic generation @cindex automatic generation of prerequisites @cindex generating prerequisites automatically In the makefile for a program, many of the rules you need to write often say only that some object file depends on some header file. For example, if @file{main.c} uses @file{defs.h} via an @code{#include}, you would write: @example main.o: defs.h @end example @noindent You need this rule so that @code{make} knows that it must remake @file{main.o} whenever @file{defs.h} changes. You can see that for a large program you would have to write dozens of such rules in your makefile. And, you must always be very careful to update the makefile every time you add or remove an @code{#include}. 
@cindex @code{#include} @cindex @code{-M} (to compiler) To avoid this hassle, most modern C compilers can write these rules for you, by looking at the @code{#include} lines in the source files. Usually this is done with the @samp{-M} option to the compiler. For example, the command: @example cc -M main.c @end example @noindent generates the output: @example main.o : main.c defs.h @end example @noindent Thus you no longer have to write all those rules yourself. The compiler will do it for you. Note that such a prerequisite constitutes mentioning @file{main.o} in a makefile, so it can never be considered an intermediate file by implicit rule search. This means that @code{make} won't ever remove the file after using it; @pxref{Chained Rules, ,Chains of Implicit Rules}. @cindex @code{make depend} With old @code{make} programs, it was traditional practice to use this compiler feature to generate prerequisites on demand with a command like @samp{make depend}. That command would create a file @file{depend} containing all the automatically-generated prerequisites; then the makefile could use @code{include} to read them in (@pxref{Include}). In GNU @code{make}, the feature of remaking makefiles makes this practice obsolete---you need never tell @code{make} explicitly to regenerate the prerequisites, because it always regenerates any makefile that is out of date. @xref{Remaking Makefiles}. The practice we recommend for automatic prerequisite generation is to have one makefile corresponding to each source file. For each source file @file{@var{name}.c} there is a makefile @file{@var{name}.d} which lists what files the object file @file{@var{name}.o} depends on. That way only the source files that have changed need to be rescanned to produce the new prerequisites. Here is the pattern rule to generate a file of prerequisites (i.e., a makefile) called @file{@var{name}.d} from a C source file called @file{@var{name}.c}: @smallexample @group %.d: %.c @@set -e; rm -f $@@; \ $(CC) -M $(CPPFLAGS) $< > $@@.$$$$; \ sed 's,\($*\)\.o[ :]*,\1.o $@@ : ,g' < $@@.$$$$ > $@@; \ rm -f $@@.$$$$ @end group @end smallexample @noindent @xref{Pattern Rules}, for information on defining pattern rules. The @samp{-e} flag to the shell causes it to exit immediately if the @code{$(CC)} command (or any other command) fails (exits with a nonzero status). @cindex @code{-e} (shell flag) @cindex @code{-MM} (to GNU compiler) With the GNU C compiler, you may wish to use the @samp{-MM} flag instead of @samp{-M}. This omits prerequisites on system header files. @xref{Preprocessor Options, , Options Controlling the Preprocessor, gcc.info, Using GNU CC}, for details. @cindex @code{sed} (shell command) The purpose of the @code{sed} command is to translate (for example): @example main.o : main.c defs.h @end example @noindent into: @example main.o main.d : main.c defs.h @end example @noindent @cindex @code{.d} This makes each @samp{.d} file depend on all the source and header files that the corresponding @samp{.o} file depends on. @code{make} then knows it must regenerate the prerequisites whenever any of the source or header files changes. Once you've defined the rule to remake the @samp{.d} files, you then use the @code{include} directive to read them all in. @xref{Include}. 
For example: @example @group sources = foo.c bar.c include $(sources:.c=.d) @end group @end example @noindent (This example uses a substitution variable reference to translate the list of source files @samp{foo.c bar.c} into a list of prerequisite makefiles, @samp{foo.d bar.d}. @xref{Substitution Refs}, for full information on substitution references.) Since the @samp{.d} files are makefiles like any others, @code{make} will remake them as necessary with no further work from you. @xref{Remaking Makefiles}. Note that the @samp{.d} files contain target definitions; you should be sure to place the @code{include} directive @emph{after} the first, default goal in your makefiles or run the risk of having a random object file become the default goal. @xref{How Make Works}. @node Commands, Using Variables, Rules, Top @chapter Writing the Commands in Rules @cindex commands, how to write @cindex rule commands @cindex writing rule commands The commands of a rule consist of one or more shell command lines to be executed, one at a time, in the order they appear. Typically, the result of executing these commands is that the target of the rule is brought up to date. Users use many different shell programs, but commands in makefiles are always interpreted by @file{/bin/sh} unless the makefile specifies otherwise. @xref{Execution, ,Command Execution}. @menu * Command Syntax:: Command syntax features and pitfalls. * Echoing:: How to control when commands are echoed. * Execution:: How commands are executed. * Parallel:: How commands can be executed in parallel. * Errors:: What happens after a command execution error. * Interrupts:: What happens when a command is interrupted. * Recursion:: Invoking @code{make} from makefiles. * Sequences:: Defining canned sequences of commands. * Empty Commands:: Defining useful, do-nothing commands. @end menu @node Command Syntax, Echoing, Commands, Commands @section Command Syntax @cindex command syntax @cindex syntax of commands Makefiles have the unusual property that there are really two distinct syntaxes in one file. Most of the makefile uses @code{make} syntax (@pxref{Makefiles, ,Writing Makefiles}). However, commands are meant to be interpreted by the shell and so they are written using shell syntax. The @code{make} program does not try to understand shell syntax: it performs only a very few specific translations on the content of the command before handing it to the shell. Each command line must start with a tab, except that the first command line may be attached to the target-and-prerequisites line with a semicolon in between. @emph{Any} line in the makefile that begins with a tab and appears in a ``rule context'' (that is, after a rule has been started until another rule or variable definition) will be considered a command line for that rule. Blank lines and lines of just comments may appear among the command lines; they are ignored. Some consequences of these rules include: @itemize @bullet @item A blank line that begins with a tab is not blank: it's an empty command (@pxref{Empty Commands}). @cindex comments, in commands @cindex commands, comments in @cindex @code{#} (comments), in commands @item A comment in a command line is not a @code{make} comment; it will be passed to the shell as-is. Whether the shell treats it as a comment or not depends on your shell. 
@item A variable definition in a ``rule context'' which is indented by a tab as the first character on the line, will be considered a command line, not a @code{make} variable definition, and passed to the shell. @item A conditional expression (@code{ifdef}, @code{ifeq}, etc. @pxref{Conditional Syntax, ,Syntax of Conditionals}) in a ``rule context'' which is indented by a tab as the first character on the line, will be considered a command line and be passed to the shell. @end itemize @menu * Splitting Lines:: Breaking long command lines for readability. * Variables in Commands:: Using @code{make} variables in commands. @end menu @node Splitting Lines, Variables in Commands, Command Syntax, Command Syntax @subsection Splitting Command Lines @cindex commands, splitting @cindex splitting commands @cindex commands, backslash (@code{\}) in @cindex commands, quoting newlines in @cindex backslash (@code{\}), in commands @cindex @code{\} (backslash), in commands @cindex quoting newline, in commands @cindex newline, quoting, in commands One of the few ways in which @code{make} does interpret command lines is checking for a backslash just before the newline. As in normal makefile syntax, a single command can be split into multiple lines in the makefile by placing a backslash before each newline. A sequence of lines like this is considered a single command, and one instance of the shell will be invoked to run it. However, in contrast to how they are treated in other places in a makefile, backslash-newline pairs are @emph{not} removed from the command. Both the backslash and the newline characters are preserved and passed to the shell. How the backslash-newline is interpreted depends on your shell. If the first character of the next line after the backslash-newline is a tab, then that tab (and only that tab) is removed. Whitespace is never added to the command. For example, this makefile: @example @group all : @@echo no\ space @@echo no\ space @@echo one \ space @@echo one\ space @end group @end example @noindent consists of four separate shell commands where the output is: @example @group nospace nospace one space one space @end group @end example As a more complex example, this makefile: @example @group all : ; @@echo 'hello \ world' ; echo "hello \ world" @end group @end example @noindent will run one shell with a command script of: @example @group echo 'hello \ world' ; echo "hello \ world" @end group @end example @noindent which, according to shell quoting rules, will yield the following output: @example @group hello \ world hello world @end group @end example @noindent Notice how the backslash/newline pair was removed inside the string quoted with double quotes (@code{"..."}), but not from the string quoted with single quotes (@code{'...'}). This is the way the default shell (@file{/bin/sh}) handles backslash/newline pairs. If you specify a different shell in your makefiles it may treat them differently. Sometimes you want to split a long line inside of single quotes, but you don't want the backslash-newline to appear in the quoted content. This is often the case when passing scripts to languages such as Perl, where extraneous backslashes inside the script can change its meaning or even be a syntax error. One simple way of handling this is to place the quoted string, or even the entire command, into a @code{make} variable then use the variable in the command. In this situation the newline quoting rules for makefiles will be used, and the backslash-newline will be removed. 
If we rewrite our example above using this method: @example @group HELLO = 'hello \ world' all : ; @@echo $(HELLO) @end group @end example @noindent we will get output like this: @example @group hello world @end group @end example If you like, you can also use target-specific variables (@pxref{Target-specific, ,Target-specific Variable Values}) to obtain a tighter correspondence between the variable and the command that uses it. @node Variables in Commands, , Splitting Lines, Command Syntax @subsection Using Variables in Commands @cindex variable references in commands @cindex commands, using variables in The other way in which @code{make} processes commands is by expanding any variable references in them (@pxref{Reference,Basics of Variable References}). This occurs after make has finished reading all the makefiles and the target is determined to be out of date; so, the commands for targets which are not rebuilt are never expanded. Variable and function references in commands have identical syntax and semantics to references elsewhere in the makefile. They also have the same quoting rules: if you want a dollar sign to appear in your command, you must double it (@samp{$$}). For shells like the default shell, that use dollar signs to introduce variables, it's important to keep clear in your mind whether the variable you want to reference is a @code{make} variable (use a single dollar sign) or a shell variable (use two dollar signs). For example: @example @group LIST = one two three all: for i in $(LIST); do \ echo $$i; \ done @end group @end example @noindent results in the following command being passed to the shell: @example @group for i in one two three; do \ echo $i; \ done @end group @end example @noindent which generates the expected result: @example @group one two three @end group @end example @node Echoing, Execution, Command Syntax, Commands @section Command Echoing @cindex echoing of commands @cindex silent operation @cindex @code{@@} (in commands) @cindex commands, echoing @cindex printing of commands Normally @code{make} prints each command line before it is executed. We call this @dfn{echoing} because it gives the appearance that you are typing the commands yourself. When a line starts with @samp{@@}, the echoing of that line is suppressed. The @samp{@@} is discarded before the command is passed to the shell. Typically you would use this for a command whose only effect is to print something, such as an @code{echo} command to indicate progress through the makefile: @example @@echo About to make distribution files @end example @cindex @code{-n} @cindex @code{--just-print} @cindex @code{--dry-run} @cindex @code{--recon} When @code{make} is given the flag @samp{-n} or @samp{--just-print} it only echoes commands, it won't execute them. @xref{Options Summary, ,Summary of Options}. In this case and only this case, even the commands starting with @samp{@@} are printed. This flag is useful for finding out which commands @code{make} thinks are necessary without actually doing them. @cindex @code{-s} @cindex @code{--silent} @cindex @code{--quiet} @findex .SILENT The @samp{-s} or @samp{--silent} flag to @code{make} prevents all echoing, as if all commands started with @samp{@@}. A rule in the makefile for the special target @code{.SILENT} without prerequisites has the same effect (@pxref{Special Targets, ,Special Built-in Target Names}). 
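For instance, a minimal sketch like this (the @code{clean} rule shown is purely illustrative) silences every command in the makefile:

@example
@group
.SILENT:

clean:
        rm -f *.o
@end group
@end example
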
@code{.SILENT} is essentially obsolete since @samp{@@} is more flexible.@refill @node Execution, Parallel, Echoing, Commands @section Command Execution @cindex commands, execution @cindex execution, of commands @cindex shell command, execution @vindex @code{SHELL} @r{(command execution)} When it is time to execute commands to update a target, they are executed by invoking a new subshell for each command line. (In practice, @code{make} may take shortcuts that do not affect the results.) @cindex @code{cd} (shell command) @cindex shell variables, setting in commands @cindex commands setting shell variables @strong{Please note:} this implies that setting shell variables and invoking shell commands such as @code{cd} that set a context local to each process will not affect the following command lines.@footnote{On MS-DOS, the value of current working directory is @strong{global}, so changing it @emph{will} affect the following command lines on those systems.} If you want to use @code{cd} to affect the next statement, put both statements in a single command line. Then @code{make} will invoke one shell to run the entire line, and the shell will execute the statements in sequence. For example: @example foo : bar/lose cd $(@@D) && gobble $(@@F) > ../$@@ @end example @noindent Here we use the shell AND operator (@code{&&}) so that if the @code{cd} command fails, the script will fail without trying to invoke the @code{gobble} command in the wrong directory, which could cause problems (in this case it would certainly cause @file{../foo} to be truncated, at least). @menu * Choosing the Shell:: How @code{make} chooses the shell used to run commands. @end menu @node Choosing the Shell, , Execution, Execution @subsection Choosing the Shell @cindex shell, choosing the @cindex @code{SHELL}, value of @vindex SHELL The program used as the shell is taken from the variable @code{SHELL}. If this variable is not set in your makefile, the program @file{/bin/sh} is used as the shell. @cindex environment, @code{SHELL} in Unlike most variables, the variable @code{SHELL} is never set from the environment. This is because the @code{SHELL} environment variable is used to specify your personal choice of shell program for interactive use. It would be very bad for personal choices like this to affect the functioning of makefiles. @xref{Environment, ,Variables from the Environment}. Furthermore, when you do set @code{SHELL} in your makefile that value is @emph{not} exported in the environment to commands that @code{make} invokes. Instead, the value inherited from the user's environment, if any, is exported. You can override this behavior by explicitly exporting @code{SHELL} (@pxref{Variables/Recursion, ,Communicating Variables to a Sub-@code{make}}), forcing it to be passed in the environment to commands. @vindex @code{MAKESHELL} @r{(MS-DOS alternative to @code{SHELL})} However, on MS-DOS and MS-Windows the value of @code{SHELL} in the environment @strong{is} used, since on those systems most users do not set this variable, and therefore it is most likely set specifically to be used by @code{make}. On MS-DOS, if the setting of @code{SHELL} is not suitable for @code{make}, you can set the variable @code{MAKESHELL} to the shell that @code{make} should use; if set it will be used as the shell instead of the value of @code{SHELL}. 
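As an illustration of the points above (the choice of @file{/bin/bash} is only an assumption about what is installed on your system), a makefile can both select a different shell and explicitly export that choice to sub-@code{make}s:

@example
@group
# Use bash, and pass that choice down to sub-makes as well.
SHELL = /bin/bash
export SHELL

check:
        echo $$BASH_VERSION
@end group
@end example

@noindent
If the commands really run under @code{bash}, the @code{check} rule prints its version string; under a plain @file{/bin/sh} it would print an empty line.
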
@subsubheading Choosing a Shell in DOS and Windows @cindex shell, in DOS and Windows @cindex DOS, choosing a shell in @cindex Windows, choosing a shell in Choosing a shell in MS-DOS and MS-Windows is much more complex than on other systems. @vindex COMSPEC On MS-DOS, if @code{SHELL} is not set, the value of the variable @code{COMSPEC} (which is always set) is used instead. @cindex @code{SHELL}, MS-DOS specifics The processing of lines that set the variable @code{SHELL} in Makefiles is different on MS-DOS. The stock shell, @file{command.com}, is ridiculously limited in its functionality and many users of @code{make} tend to install a replacement shell. Therefore, on MS-DOS, @code{make} examines the value of @code{SHELL}, and changes its behavior based on whether it points to a Unix-style or DOS-style shell. This allows reasonable functionality even if @code{SHELL} points to @file{command.com}. If @code{SHELL} points to a Unix-style shell, @code{make} on MS-DOS additionally checks whether that shell can indeed be found; if not, it ignores the line that sets @code{SHELL}. In MS-DOS, GNU @code{make} searches for the shell in the following places: @enumerate @item In the precise place pointed to by the value of @code{SHELL}. For example, if the makefile specifies @samp{SHELL = /bin/sh}, @code{make} will look in the directory @file{/bin} on the current drive. @item In the current directory. @item In each of the directories in the @code{PATH} variable, in order. @end enumerate In every directory it examines, @code{make} will first look for the specific file (@file{sh} in the example above). If this is not found, it will also look in that directory for that file with one of the known extensions which identify executable files. For example @file{.exe}, @file{.com}, @file{.bat}, @file{.btm}, @file{.sh}, and some others. If any of these attempts is successful, the value of @code{SHELL} will be set to the full pathname of the shell as found. However, if none of these is found, the value of @code{SHELL} will not be changed, and thus the line that sets it will be effectively ignored. This is so @code{make} will only support features specific to a Unix-style shell if such a shell is actually installed on the system where @code{make} runs. Note that this extended search for the shell is limited to the cases where @code{SHELL} is set from the Makefile; if it is set in the environment or command line, you are expected to set it to the full pathname of the shell, exactly as things are on Unix. The effect of the above DOS-specific processing is that a Makefile that contains @samp{SHELL = /bin/sh} (as many Unix makefiles do), will work on MS-DOS unaltered if you have e.g.@: @file{sh.exe} installed in some directory along your @code{PATH}. @node Parallel, Errors, Execution, Commands @section Parallel Execution @cindex commands, execution in parallel @cindex parallel execution @cindex execution, in parallel @cindex job slots @cindex @code{-j} @cindex @code{--jobs} GNU @code{make} knows how to execute several commands at once. Normally, @code{make} will execute only one command at a time, waiting for it to finish before executing the next. However, the @samp{-j} or @samp{--jobs} option tells @code{make} to execute many commands simultaneously.@refill On MS-DOS, the @samp{-j} option has no effect, since that system doesn't support multi-processing. If the @samp{-j} option is followed by an integer, this is the number of commands to execute at once; this is called the number of @dfn{job slots}. 
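For example (the count of four is arbitrary), the command:

@example
make -j 4
@end example

@noindent
tells @code{make} to run up to four commands at the same time.
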
If there is nothing looking like an integer after the @samp{-j} option, there is no limit on the number of job slots. The default number of job slots is one, which means serial execution (one thing at a time). One unpleasant consequence of running several commands simultaneously is that output generated by the commands appears whenever each command sends it, so messages from different commands may be interspersed. Another problem is that two processes cannot both take input from the same device; so to make sure that only one command tries to take input from the terminal at once, @code{make} will invalidate the standard input streams of all but one running command. This means that attempting to read from standard input will usually be a fatal error (a @samp{Broken pipe} signal) for most child processes if there are several. @cindex broken pipe @cindex standard input It is unpredictable which command will have a valid standard input stream (which will come from the terminal, or wherever you redirect the standard input of @code{make}). The first command run will always get it first, and the first command started after that one finishes will get it next, and so on. We will change how this aspect of @code{make} works if we find a better alternative. In the mean time, you should not rely on any command using standard input at all if you are using the parallel execution feature; but if you are not using this feature, then standard input works normally in all commands. Finally, handling recursive @code{make} invocations raises issues. For more information on this, see @ref{Options/Recursion, ,Communicating Options to a Sub-@code{make}}. If a command fails (is killed by a signal or exits with a nonzero status), and errors are not ignored for that command (@pxref{Errors, ,Errors in Commands}), the remaining command lines to remake the same target will not be run. If a command fails and the @samp{-k} or @samp{--keep-going} option was not given (@pxref{Options Summary, ,Summary of Options}), @code{make} aborts execution. If make terminates for any reason (including a signal) with child processes running, it waits for them to finish before actually exiting.@refill @cindex load average @cindex limiting jobs based on load @cindex jobs, limiting based on load @cindex @code{-l} (load average) @cindex @code{--max-load} @cindex @code{--load-average} When the system is heavily loaded, you will probably want to run fewer jobs than when it is lightly loaded. You can use the @samp{-l} option to tell @code{make} to limit the number of jobs to run at once, based on the load average. The @samp{-l} or @samp{--max-load} option is followed by a floating-point number. For example, @example -l 2.5 @end example @noindent will not let @code{make} start more than one job if the load average is above 2.5. The @samp{-l} option with no following number removes the load limit, if one was given with a previous @samp{-l} option.@refill More precisely, when @code{make} goes to start up a job, and it already has at least one job running, it checks the current load average; if it is not lower than the limit given with @samp{-l}, @code{make} waits until the load average goes below that limit, or until all the other jobs finish. By default, there is no load limit. @node Errors, Interrupts, Parallel, Commands @section Errors in Commands @cindex errors (in commands) @cindex commands, errors in @cindex exit status (errors) After each shell command returns, @code{make} looks at its exit status. 
If the command completed successfully, the next command line is executed in a new shell; after the last command line is finished, the rule is finished. If there is an error (the exit status is nonzero), @code{make} gives up on the current rule, and perhaps on all rules. Sometimes the failure of a certain command does not indicate a problem. For example, you may use the @code{mkdir} command to ensure that a directory exists. If the directory already exists, @code{mkdir} will report an error, but you probably want @code{make} to continue regardless. @cindex @code{-} (in commands) To ignore errors in a command line, write a @samp{-} at the beginning of the line's text (after the initial tab). The @samp{-} is discarded before the command is passed to the shell for execution. For example, @example @group clean: -rm -f *.o @end group @end example @cindex @code{rm} (shell command) @noindent This causes @code{rm} to continue even if it is unable to remove a file. @cindex @code{-i} @cindex @code{--ignore-errors} @findex .IGNORE When you run @code{make} with the @samp{-i} or @samp{--ignore-errors} flag, errors are ignored in all commands of all rules. A rule in the makefile for the special target @code{.IGNORE} has the same effect, if there are no prerequisites. These ways of ignoring errors are obsolete because @samp{-} is more flexible. When errors are to be ignored, because of either a @samp{-} or the @samp{-i} flag, @code{make} treats an error return just like success, except that it prints out a message that tells you the status code the command exited with, and says that the error has been ignored. When an error happens that @code{make} has not been told to ignore, it implies that the current target cannot be correctly remade, and neither can any other that depends on it either directly or indirectly. No further commands will be executed for these targets, since their preconditions have not been achieved. @cindex @code{-k} @cindex @code{--keep-going} Normally @code{make} gives up immediately in this circumstance, returning a nonzero status. However, if the @samp{-k} or @samp{--keep-going} flag is specified, @code{make} continues to consider the other prerequisites of the pending targets, remaking them if necessary, before it gives up and returns nonzero status. For example, after an error in compiling one object file, @samp{make -k} will continue compiling other object files even though it already knows that linking them will be impossible. @xref{Options Summary, ,Summary of Options}. The usual behavior assumes that your purpose is to get the specified targets up to date; once @code{make} learns that this is impossible, it might as well report the failure immediately. The @samp{-k} option says that the real purpose is to test as many of the changes made in the program as possible, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs' @code{compile} command passes the @samp{-k} flag by default. @cindex Emacs (@code{M-x compile}) @findex .DELETE_ON_ERROR @cindex deletion of target files @cindex removal of target files @cindex target, deleting on error Usually when a command fails, if it has changed the target file at all, the file is corrupted and cannot be used---or at least it is not completely updated. Yet the file's time stamp says that it is now up to date, so the next time @code{make} runs, it will not try to update that file. 
The situation is just the same as when the command is killed by a signal; @pxref{Interrupts}. So generally the right thing to do is to delete the target file if the command fails after beginning to change the file. @code{make} will do this if @code{.DELETE_ON_ERROR} appears as a target. This is almost always what you want @code{make} to do, but it is not historical practice; so for compatibility, you must explicitly request it. @node Interrupts, Recursion, Errors, Commands @section Interrupting or Killing @code{make} @cindex interrupt @cindex signal @cindex deletion of target files @cindex removal of target files @cindex target, deleting on interrupt @cindex killing (interruption) If @code{make} gets a fatal signal while a command is executing, it may delete the target file that the command was supposed to update. This is done if the target file's last-modification time has changed since @code{make} first checked it. The purpose of deleting the target is to make sure that it is remade from scratch when @code{make} is next run. Why is this? Suppose you type @kbd{Ctrl-c} while a compiler is running, and it has begun to write an object file @file{foo.o}. The @kbd{Ctrl-c} kills the compiler, resulting in an incomplete file whose last-modification time is newer than the source file @file{foo.c}. But @code{make} also receives the @kbd{Ctrl-c} signal and deletes this incomplete file. If @code{make} did not do this, the next invocation of @code{make} would think that @file{foo.o} did not require updating---resulting in a strange error message from the linker when it tries to link an object file half of which is missing. @findex .PRECIOUS You can prevent the deletion of a target file in this way by making the special target @code{.PRECIOUS} depend on it. Before remaking a target, @code{make} checks to see whether it appears on the prerequisites of @code{.PRECIOUS}, and thereby decides whether the target should be deleted if a signal happens. Some reasons why you might do this are that the target is updated in some atomic fashion, or exists only to record a modification-time (its contents do not matter), or must exist at all times to prevent other sorts of trouble. @node Recursion, Sequences, Interrupts, Commands @section Recursive Use of @code{make} @cindex recursion @cindex subdirectories, recursion for Recursive use of @code{make} means using @code{make} as a command in a makefile. This technique is useful when you want separate makefiles for various subsystems that compose a larger system. For example, suppose you have a subdirectory @file{subdir} which has its own makefile, and you would like the containing directory's makefile to run @code{make} on the subdirectory. You can do it by writing this: @example subsystem: cd subdir && $(MAKE) @end example @noindent or, equivalently, this (@pxref{Options Summary, ,Summary of Options}): @example subsystem: $(MAKE) -C subdir @end example @cindex @code{-C} @cindex @code{--directory} You can write recursive @code{make} commands just by copying this example, but there are many things to know about how they work and why, and about how the sub-@code{make} relates to the top-level @code{make}. You may also find it useful to declare targets that invoke recursive @code{make} commands as @samp{.PHONY} (for more discussion on when this is useful, see @ref{Phony Targets}). 
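For example, a sketch combining the rule above with such a declaration looks like this:

@example
@group
.PHONY: subsystem
subsystem:
        $(MAKE) -C subdir
@end group
@end example
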
@vindex @code{CURDIR} For your convenience, when GNU @code{make} starts (after it has processed any @code{-C} options) it sets the variable @code{CURDIR} to the pathname of the current working directory. This value is never touched by @code{make} again: in particular note that if you include files from other directories the value of @code{CURDIR} does not change. The value has the same precedence it would have if it were set in the makefile (by default, an environment variable @code{CURDIR} will not override this value). Note that setting this variable has no impact on the operation of @code{make} (it does not cause @code{make} to change its working directory, for example). @menu * MAKE Variable:: The special effects of using @samp{$(MAKE)}. * Variables/Recursion:: How to communicate variables to a sub-@code{make}. * Options/Recursion:: How to communicate options to a sub-@code{make}. * -w Option:: How the @samp{-w} or @samp{--print-directory} option helps debug use of recursive @code{make} commands. @end menu @node MAKE Variable, Variables/Recursion, Recursion, Recursion @subsection How the @code{MAKE} Variable Works @vindex MAKE @cindex recursion, and @code{MAKE} variable Recursive @code{make} commands should always use the variable @code{MAKE}, not the explicit command name @samp{make}, as shown here: @example @group subsystem: cd subdir && $(MAKE) @end group @end example The value of this variable is the file name with which @code{make} was invoked. If this file name was @file{/bin/make}, then the command executed is @samp{cd subdir && /bin/make}. If you use a special version of @code{make} to run the top-level makefile, the same special version will be executed for recursive invocations. @cindex @code{cd} (shell command) @cindex +, and commands As a special feature, using the variable @code{MAKE} in the commands of a rule alters the effects of the @samp{-t} (@samp{--touch}), @samp{-n} (@samp{--just-print}), or @samp{-q} (@w{@samp{--question}}) option. Using the @code{MAKE} variable has the same effect as using a @samp{+} character at the beginning of the command line. @xref{Instead of Execution, ,Instead of Executing the Commands}. This special feature is only enabled if the @code{MAKE} variable appears directly in the command script: it does not apply if the @code{MAKE} variable is referenced through expansion of another variable. In the latter case you must use the @samp{+} token to get these special effects.@refill Consider the command @samp{make -t} in the above example. (The @samp{-t} option marks targets as up to date without actually running any commands; see @ref{Instead of Execution}.) Following the usual definition of @samp{-t}, a @samp{make -t} command in the example would create a file named @file{subsystem} and do nothing else. What you really want it to do is run @samp{@w{cd subdir &&} @w{make -t}}; but that would require executing the command, and @samp{-t} says not to execute commands.@refill @cindex @code{-t}, and recursion @cindex recursion, and @code{-t} @cindex @code{--touch}, and recursion The special feature makes this do what you want: whenever a command line of a rule contains the variable @code{MAKE}, the flags @samp{-t}, @samp{-n} and @samp{-q} do not apply to that line. Command lines containing @code{MAKE} are executed normally despite the presence of a flag that causes most commands not to be run. 
The usual @code{MAKEFLAGS} mechanism passes the flags to the sub-@code{make} (@pxref{Options/Recursion, ,Communicating Options to a Sub-@code{make}}), so your request to touch the files, or print the commands, is propagated to the subsystem.@refill @node Variables/Recursion, Options/Recursion, MAKE Variable, Recursion @subsection Communicating Variables to a Sub-@code{make} @cindex sub-@code{make} @cindex environment, and recursion @cindex exporting variables @cindex variables, environment @cindex variables, exporting @cindex recursion, and environment @cindex recursion, and variables Variable values of the top-level @code{make} can be passed to the sub-@code{make} through the environment by explicit request. These variables are defined in the sub-@code{make} as defaults, but do not override what is specified in the makefile used by the sub-@code{make} unless you use the @samp{-e} switch (@pxref{Options Summary, ,Summary of Options}).@refill To pass down, or @dfn{export}, a variable, @code{make} adds the variable and its value to the environment for running each command. The sub-@code{make}, in turn, uses the environment to initialize its table of variable values. @xref{Environment, ,Variables from the Environment}. Except by explicit request, @code{make} exports a variable only if it is either defined in the environment initially or set on the command line, and if its name consists only of letters, numbers, and underscores. Some shells cannot cope with environment variable names consisting of characters other than letters, numbers, and underscores. @cindex SHELL, exported value The value of the @code{make} variable @code{SHELL} is not exported. Instead, the value of the @code{SHELL} variable from the invoking environment is passed to the sub-@code{make}. You can force @code{make} to export its value for @code{SHELL} by using the @code{export} directive, described below. @xref{Choosing the Shell}. The special variable @code{MAKEFLAGS} is always exported (unless you unexport it). @code{MAKEFILES} is exported if you set it to anything. @code{make} automatically passes down variable values that were defined on the command line, by putting them in the @code{MAKEFLAGS} variable. @iftex See the next section. @end iftex @ifnottex @xref{Options/Recursion}. @end ifnottex Variables are @emph{not} normally passed down if they were created by default by @code{make} (@pxref{Implicit Variables, ,Variables Used by Implicit Rules}). The sub-@code{make} will define these for itself.@refill @findex export If you want to export specific variables to a sub-@code{make}, use the @code{export} directive, like this: @example export @var{variable} @dots{} @end example @noindent @findex unexport If you want to @emph{prevent} a variable from being exported, use the @code{unexport} directive, like this: @example unexport @var{variable} @dots{} @end example @noindent In both of these forms, the arguments to @code{export} and @code{unexport} are expanded, and so could be variables or functions which expand to a (list of) variable names to be (un)exported.
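For example (the variable names and values here are purely illustrative), this computes the list of names to export from another variable:

@example
@group
CFLAGS = -g -O2
LDFLAGS = -L/usr/local/lib
FLAG_VARS = CFLAGS LDFLAGS

export $(FLAG_VARS)
@end group
@end example
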
As a convenience, you can define a variable and export it at the same time by doing: @example export @var{variable} = value @end example @noindent has the same result as: @example @var{variable} = value export @var{variable} @end example @noindent and @example export @var{variable} := value @end example @noindent has the same result as: @example @var{variable} := value export @var{variable} @end example Likewise, @example export @var{variable} += value @end example @noindent is just like: @example @var{variable} += value export @var{variable} @end example @noindent @xref{Appending, ,Appending More Text to Variables}. You may notice that the @code{export} and @code{unexport} directives work in @code{make} in the same way they work in the shell, @code{sh}. If you want all variables to be exported by default, you can use @code{export} by itself: @example export @end example @noindent This tells @code{make} that variables which are not explicitly mentioned in an @code{export} or @code{unexport} directive should be exported. Any variable given in an @code{unexport} directive will still @emph{not} be exported. If you use @code{export} by itself to export variables by default, variables whose names contain characters other than alphanumerics and underscores will not be exported unless specifically mentioned in an @code{export} directive.@refill @findex .EXPORT_ALL_VARIABLES The behavior elicited by an @code{export} directive by itself was the default in older versions of GNU @code{make}. If your makefiles depend on this behavior and you want to be compatible with old versions of @code{make}, you can write a rule for the special target @code{.EXPORT_ALL_VARIABLES} instead of using the @code{export} directive. This will be ignored by old @code{make}s, while the @code{export} directive will cause a syntax error.@refill @cindex compatibility in exporting Likewise, you can use @code{unexport} by itself to tell @code{make} @emph{not} to export variables by default. Since this is the default behavior, you would only need to do this if @code{export} had been used by itself earlier (in an included makefile, perhaps). You @strong{cannot} use @code{export} and @code{unexport} by themselves to have variables exported for some commands and not for others. The last @code{export} or @code{unexport} directive that appears by itself determines the behavior for the entire run of @code{make}.@refill @vindex MAKELEVEL @cindex recursion, level of As a special feature, the variable @code{MAKELEVEL} is changed when it is passed down from level to level. This variable's value is a string which is the depth of the level as a decimal number. The value is @samp{0} for the top-level @code{make}; @samp{1} for a sub-@code{make}, @samp{2} for a sub-sub-@code{make}, and so on. The incrementation happens when @code{make} sets up the environment for a command.@refill The main use of @code{MAKELEVEL} is to test it in a conditional directive (@pxref{Conditionals, ,Conditional Parts of Makefiles}); this way you can write a makefile that behaves one way if run recursively and another way if run directly by you.@refill @vindex MAKEFILES You can use the variable @code{MAKEFILES} to cause all sub-@code{make} commands to use additional makefiles. The value of @code{MAKEFILES} is a whitespace-separated list of file names. This variable, if defined in the outer-level makefile, is passed down through the environment; then it serves as a list of extra makefiles for the sub-@code{make} to read before the usual or specified ones. 
@xref{MAKEFILES Variable, ,The Variable @code{MAKEFILES}}.@refill @node Options/Recursion, -w Option, Variables/Recursion, Recursion @subsection Communicating Options to a Sub-@code{make} @cindex options, and recursion @cindex recursion, and options @vindex MAKEFLAGS Flags such as @samp{-s} and @samp{-k} are passed automatically to the sub-@code{make} through the variable @code{MAKEFLAGS}. This variable is set up automatically by @code{make} to contain the flag letters that @code{make} received. Thus, if you do @w{@samp{make -ks}} then @code{MAKEFLAGS} gets the value @samp{ks}.@refill As a consequence, every sub-@code{make} gets a value for @code{MAKEFLAGS} in its environment. In response, it takes the flags from that value and processes them as if they had been given as arguments. @xref{Options Summary, ,Summary of Options}. @cindex command line variable definitions, and recursion @cindex variables, command line, and recursion @cindex recursion, and command line variable definitions Likewise variables defined on the command line are passed to the sub-@code{make} through @code{MAKEFLAGS}. Words in the value of @code{MAKEFLAGS} that contain @samp{=}, @code{make} treats as variable definitions just as if they appeared on the command line. @xref{Overriding, ,Overriding Variables}. @cindex @code{-C}, and recursion @cindex @code{-f}, and recursion @cindex @code{-o}, and recursion @cindex @code{-W}, and recursion @cindex @code{--directory}, and recursion @cindex @code{--file}, and recursion @cindex @code{--old-file}, and recursion @cindex @code{--assume-old}, and recursion @cindex @code{--assume-new}, and recursion @cindex @code{--new-file}, and recursion @cindex recursion, and @code{-C} @cindex recursion, and @code{-f} @cindex recursion, and @code{-o} @cindex recursion, and @code{-W} The options @samp{-C}, @samp{-f}, @samp{-o}, and @samp{-W} are not put into @code{MAKEFLAGS}; these options are not passed down.@refill @cindex @code{-j}, and recursion @cindex @code{--jobs}, and recursion @cindex recursion, and @code{-j} @cindex job slots, and recursion The @samp{-j} option is a special case (@pxref{Parallel, ,Parallel Execution}). If you set it to some numeric value @samp{N} and your operating system supports it (most any UNIX system will; others typically won't), the parent @code{make} and all the sub-@code{make}s will communicate to ensure that there are only @samp{N} jobs running at the same time between them all. Note that any job that is marked recursive (@pxref{Instead of Execution, ,Instead of Executing the Commands}) doesn't count against the total jobs (otherwise we could get @samp{N} sub-@code{make}s running and have no slots left over for any real work!) If your operating system doesn't support the above communication, then @samp{-j 1} is always put into @code{MAKEFLAGS} instead of the value you specified. This is because if the @w{@samp{-j}} option were passed down to sub-@code{make}s, you would get many more jobs running in parallel than you asked for. If you give @samp{-j} with no numeric argument, meaning to run as many jobs as possible in parallel, this is passed down, since multiple infinities are no more than one.@refill If you do not want to pass the other flags down, you must change the value of @code{MAKEFLAGS}, like this: @example subsystem: cd subdir && $(MAKE) MAKEFLAGS= @end example @vindex MAKEOVERRIDES The command line variable definitions really appear in the variable @code{MAKEOVERRIDES}, and @code{MAKEFLAGS} contains a reference to this variable. 
If you do want to pass flags down normally, but don't want to pass down the command line variable definitions, you can reset @code{MAKEOVERRIDES} to empty, like this: @example MAKEOVERRIDES = @end example @noindent @cindex Arg list too long @cindex E2BIG This is not usually useful to do. However, some systems have a small fixed limit on the size of the environment, and putting so much information into the value of @code{MAKEFLAGS} can exceed it. If you see the error message @samp{Arg list too long}, this may be the problem. @findex .POSIX @cindex POSIX.2 (For strict compliance with POSIX.2, changing @code{MAKEOVERRIDES} does not affect @code{MAKEFLAGS} if the special target @samp{.POSIX} appears in the makefile. You probably do not care about this.) @vindex MFLAGS A similar variable @code{MFLAGS} exists also, for historical compatibility. It has the same value as @code{MAKEFLAGS} except that it does not contain the command line variable definitions, and it always begins with a hyphen unless it is empty (@code{MAKEFLAGS} begins with a hyphen only when it begins with an option that has no single-letter version, such as @samp{--warn-undefined-variables}). @code{MFLAGS} was traditionally used explicitly in the recursive @code{make} command, like this: @example subsystem: cd subdir && $(MAKE) $(MFLAGS) @end example @noindent but now @code{MAKEFLAGS} makes this usage redundant. If you want your makefiles to be compatible with old @code{make} programs, use this technique; it will work fine with more modern @code{make} versions too. @cindex setting options from environment @cindex options, setting from environment @cindex setting options in makefiles @cindex options, setting in makefiles The @code{MAKEFLAGS} variable can also be useful if you want to have certain options, such as @samp{-k} (@pxref{Options Summary, ,Summary of Options}), set each time you run @code{make}. You simply put a value for @code{MAKEFLAGS} in your environment. You can also set @code{MAKEFLAGS} in a makefile, to specify additional flags that should also be in effect for that makefile. (Note that you cannot use @code{MFLAGS} this way. That variable is set only for compatibility; @code{make} does not interpret a value you set for it in any way.) When @code{make} interprets the value of @code{MAKEFLAGS} (either from the environment or from a makefile), it first prepends a hyphen if the value does not already begin with one. Then it chops the value into words separated by blanks, and parses these words as if they were options given on the command line (except that @samp{-C}, @samp{-f}, @samp{-h}, @samp{-o}, @samp{-W}, and their long-named versions are ignored; and there is no error for an invalid option). If you do put @code{MAKEFLAGS} in your environment, you should be sure not to include any options that will drastically affect the actions of @code{make} and undermine the purpose of makefiles and of @code{make} itself. 
For instance, the @samp{-t}, @samp{-n}, and @samp{-q} options, if put in one of these variables, could have disastrous consequences and would certainly have at least surprising and probably annoying effects.@refill @node -w Option, , Options/Recursion, Recursion @subsection The @samp{--print-directory} Option @cindex directories, printing them @cindex printing directories @cindex recursion, and printing directories If you use several levels of recursive @code{make} invocations, the @samp{-w} or @w{@samp{--print-directory}} option can make the output a lot easier to understand by showing each directory as @code{make} starts processing it and as @code{make} finishes processing it. For example, if @samp{make -w} is run in the directory @file{/u/gnu/make}, @code{make} will print a line of the form:@refill @example make: Entering directory `/u/gnu/make'. @end example @noindent before doing anything else, and a line of the form: @example make: Leaving directory `/u/gnu/make'. @end example @noindent when processing is completed. @cindex @code{-C}, and @code{-w} @cindex @code{--directory}, and @code{--print-directory} @cindex recursion, and @code{-w} @cindex @code{-w}, and @code{-C} @cindex @code{-w}, and recursion @cindex @code{--print-directory}, and @code{--directory} @cindex @code{--print-directory}, and recursion @cindex @code{--no-print-directory} @cindex @code{--print-directory}, disabling @cindex @code{-w}, disabling Normally, you do not need to specify this option because @samp{make} does it for you: @samp{-w} is turned on automatically when you use the @samp{-C} option, and in sub-@code{make}s. @code{make} will not automatically turn on @samp{-w} if you also use @samp{-s}, which says to be silent, or if you use @samp{--no-print-directory} to explicitly disable it. @node Sequences, Empty Commands, Recursion, Commands @section Defining Canned Command Sequences @cindex sequences of commands @cindex commands, sequences of When the same sequence of commands is useful in making various targets, you can define it as a canned sequence with the @code{define} directive, and refer to the canned sequence from the rules for those targets. The canned sequence is actually a variable, so the name must not conflict with other variable names. Here is an example of defining a canned sequence of commands: @example define run-yacc yacc $(firstword $^) mv y.tab.c $@@ endef @end example @cindex @code{yacc} @noindent Here @code{run-yacc} is the name of the variable being defined; @code{endef} marks the end of the definition; the lines in between are the commands. The @code{define} directive does not expand variable references and function calls in the canned sequence; the @samp{$} characters, parentheses, variable names, and so on, all become part of the value of the variable you are defining. @xref{Defining, ,Defining Variables Verbatim}, for a complete explanation of @code{define}. The first command in this example runs Yacc on the first prerequisite of whichever rule uses the canned sequence. The output file from Yacc is always named @file{y.tab.c}. The second command moves the output to the rule's target file name. To use the canned sequence, substitute the variable into the commands of a rule. You can substitute it like any other variable (@pxref{Reference, ,Basics of Variable References}). Because variables defined by @code{define} are recursively expanded variables, all the variable references you wrote inside the @code{define} are expanded now. 
For example: @example foo.c : foo.y $(run-yacc) @end example @noindent @samp{foo.y} will be substituted for the variable @samp{$^} when it occurs in @code{run-yacc}'s value, and @samp{foo.c} for @samp{$@@}.@refill This is a realistic example, but this particular one is not needed in practice because @code{make} has an implicit rule to figure out these commands based on the file names involved (@pxref{Implicit Rules, ,Using Implicit Rules}). @cindex @@, and @code{define} @cindex -, and @code{define} @cindex +, and @code{define} In command execution, each line of a canned sequence is treated just as if the line appeared on its own in the rule, preceded by a tab. In particular, @code{make} invokes a separate subshell for each line. You can use the special prefix characters that affect command lines (@samp{@@}, @samp{-}, and @samp{+}) on each line of a canned sequence. @xref{Commands, ,Writing the Commands in Rules}. For example, using this canned sequence: @example define frobnicate @@echo "frobnicating target $@@" frob-step-1 $< -o $@@-step-1 frob-step-2 $@@-step-1 -o $@@ endef @end example @noindent @code{make} will not echo the first line, the @code{echo} command. But it @emph{will} echo the following two command lines. On the other hand, prefix characters on the command line that refers to a canned sequence apply to every line in the sequence. So the rule: @example frob.out: frob.in @@$(frobnicate) @end example @noindent does not echo @emph{any} commands. (@xref{Echoing, ,Command Echoing}, for a full explanation of @samp{@@}.) @node Empty Commands, , Sequences, Commands @section Using Empty Commands @cindex empty commands @cindex commands, empty It is sometimes useful to define commands which do nothing. This is done simply by giving a command that consists of nothing but whitespace. For example: @example target: ; @end example @noindent defines an empty command string for @file{target}. You could also use a line beginning with a tab character to define an empty command string, but this would be confusing because such a line looks empty. @findex .DEFAULT@r{, and empty commands} You may be wondering why you would want to define a command string that does nothing. The only reason this is useful is to prevent a target from getting implicit commands (from implicit rules or the @code{.DEFAULT} special target; @pxref{Implicit Rules} and @pxref{Last Resort, ,Defining Last-Resort Default Rules}).@refill @c !!! another reason is for canonical stamp files: @ignore @example foo: stamp-foo ; stamp-foo: foo.in create foo frm foo.in touch $@ @end example @end ignore You may be inclined to define empty command strings for targets that are not actual files, but only exist so that their prerequisites can be remade. However, this is not the best way to do that, because the prerequisites may not be remade properly if the target file actually does exist. @xref{Phony Targets, ,Phony Targets}, for a better way to do this. @node Using Variables, Conditionals, Commands, Top @chapter How to Use Variables @cindex variable @cindex value @cindex recursive variable expansion @cindex simple variable expansion A @dfn{variable} is a name defined in a makefile to represent a string of text, called the variable's @dfn{value}. These values are substituted by explicit request into targets, prerequisites, commands, and other parts of the makefile. (In some other versions of @code{make}, variables are called @dfn{macros}.) 
@cindex macro Variables and functions in all parts of a makefile are expanded when read, except for the shell commands in rules, the right-hand sides of variable definitions using @samp{=}, and the bodies of variable definitions using the @code{define} directive.@refill Variables can represent lists of file names, options to pass to compilers, programs to run, directories to look in for source files, directories to write output in, or anything else you can imagine. A variable name may be any sequence of characters not containing @samp{:}, @samp{#}, @samp{=}, or leading or trailing whitespace. However, variable names containing characters other than letters, numbers, and underscores should be avoided, as they may be given special meanings in the future, and with some shells they cannot be passed through the environment to a sub-@code{make} (@pxref{Variables/Recursion, ,Communicating Variables to a Sub-@code{make}}). Variable names are case-sensitive. The names @samp{foo}, @samp{FOO}, and @samp{Foo} all refer to different variables. It is traditional to use upper case letters in variable names, but we recommend using lower case letters for variable names that serve internal purposes in the makefile, and reserving upper case for parameters that control implicit rules or for parameters that the user should override with command options (@pxref{Overriding, ,Overriding Variables}). A few variables have names that are a single punctuation character or just a few characters. These are the @dfn{automatic variables}, and they have particular specialized uses. @xref{Automatic Variables}. @menu * Reference:: How to use the value of a variable. * Flavors:: Variables come in two flavors. * Advanced:: Advanced features for referencing a variable. * Values:: All the ways variables get their values. * Setting:: How to set a variable in the makefile. * Appending:: How to append more text to the old value of a variable. * Override Directive:: How to set a variable in the makefile even if the user has set it with a command argument. * Defining:: An alternate way to set a variable to a verbatim string. * Environment:: Variable values can come from the environment. * Target-specific:: Variable values can be defined on a per-target basis. * Pattern-specific:: Target-specific variable values can be applied to a group of targets that match a pattern. @end menu @node Reference, Flavors, Using Variables, Using Variables @section Basics of Variable References @cindex variables, how to reference @cindex reference to variables @cindex @code{$}, in variable reference @cindex dollar sign (@code{$}), in variable reference To substitute a variable's value, write a dollar sign followed by the name of the variable in parentheses or braces: either @samp{$(foo)} or @samp{$@{foo@}} is a valid reference to the variable @code{foo}. This special significance of @samp{$} is why you must write @samp{$$} to have the effect of a single dollar sign in a file name or command. Variable references can be used in any context: targets, prerequisites, commands, most directives, and new variable values. Here is an example of a common case, where a variable holds the names of all the object files in a program: @example @group objects = program.o foo.o utils.o program : $(objects) cc -o program $(objects) $(objects) : defs.h @end group @end example Variable references work by strict textual substitution. 
Thus, the rule @example @group foo = c prog.o : prog.$(foo) $(foo)$(foo) -$(foo) prog.$(foo) @end group @end example @noindent could be used to compile a C program @file{prog.c}. Since spaces before the variable value are ignored in variable assignments, the value of @code{foo} is precisely @samp{c}. (Don't actually write your makefiles this way!) A dollar sign followed by a character other than a dollar sign, open-parenthesis or open-brace treats that single character as the variable name. Thus, you could reference the variable @code{x} with @samp{$x}. However, this practice is strongly discouraged, except in the case of the automatic variables (@pxref{Automatic Variables}). @node Flavors, Advanced, Reference, Using Variables @section The Two Flavors of Variables @cindex flavors of variables @cindex recursive variable expansion @cindex variables, flavors @cindex recursively expanded variables @cindex variables, recursively expanded There are two ways that a variable in GNU @code{make} can have a value; we call them the two @dfn{flavors} of variables. The two flavors are distinguished in how they are defined and in what they do when expanded. @cindex = The first flavor of variable is a @dfn{recursively expanded} variable. Variables of this sort are defined by lines using @samp{=} (@pxref{Setting, ,Setting Variables}) or by the @code{define} directive (@pxref{Defining, ,Defining Variables Verbatim}). The value you specify is installed verbatim; if it contains references to other variables, these references are expanded whenever this variable is substituted (in the course of expanding some other string). When this happens, it is called @dfn{recursive expansion}.@refill For example, @example foo = $(bar) bar = $(ugh) ugh = Huh? all:;echo $(foo) @end example @noindent will echo @samp{Huh?}: @samp{$(foo)} expands to @samp{$(bar)} which expands to @samp{$(ugh)} which finally expands to @samp{Huh?}.@refill This flavor of variable is the only sort supported by other versions of @code{make}. It has its advantages and its disadvantages. An advantage (most would say) is that: @example CFLAGS = $(include_dirs) -O include_dirs = -Ifoo -Ibar @end example @noindent will do what was intended: when @samp{CFLAGS} is expanded in a command, it will expand to @samp{-Ifoo -Ibar -O}. A major disadvantage is that you cannot append something on the end of a variable, as in @example CFLAGS = $(CFLAGS) -O @end example @noindent because it will cause an infinite loop in the variable expansion. (Actually @code{make} detects the infinite loop and reports an error.) @cindex loops in variable expansion @cindex variables, loops in expansion Another disadvantage is that any functions (@pxref{Functions, ,Functions for Transforming Text}) referenced in the definition will be executed every time the variable is expanded. This makes @code{make} run slower; worse, it causes the @code{wildcard} and @code{shell} functions to give unpredictable results because you cannot easily control when they are called, or even how many times. To avoid all the problems and inconveniences of recursively expanded variables, there is another flavor: simply expanded variables. @cindex simply expanded variables @cindex variables, simply expanded @cindex := @dfn{Simply expanded variables} are defined by lines using @samp{:=} (@pxref{Setting, ,Setting Variables}). The value of a simply expanded variable is scanned once and for all, expanding any references to other variables and functions, when the variable is defined. 
The actual value of the simply expanded variable is the result of expanding the text that you write. It does not contain any references to other variables; it contains their values @emph{as of the time this variable was defined}. Therefore, @example x := foo y := $(x) bar x := later @end example @noindent is equivalent to @example y := foo bar x := later @end example When a simply expanded variable is referenced, its value is substituted verbatim. Here is a somewhat more complicated example, illustrating the use of @samp{:=} in conjunction with the @code{shell} function. (@xref{Shell Function, , The @code{shell} Function}.) This example also shows use of the variable @code{MAKELEVEL}, which is changed when it is passed down from level to level. (@xref{Variables/Recursion, , Communicating Variables to a Sub-@code{make}}, for information about @code{MAKELEVEL}.) @vindex MAKELEVEL @vindex MAKE @example @group ifeq (0,$@{MAKELEVEL@}) whoami := $(shell whoami) host-type := $(shell arch) MAKE := $@{MAKE@} host-type=$@{host-type@} whoami=$@{whoami@} endif @end group @end example @noindent An advantage of this use of @samp{:=} is that a typical `descend into a directory' command then looks like this: @example @group $@{subdirs@}: $@{MAKE@} -C $@@ all @end group @end example Simply expanded variables generally make complicated makefile programming more predictable because they work like variables in most programming languages. They allow you to redefine a variable using its own value (or its value processed in some way by one of the expansion functions) and to use the expansion functions much more efficiently (@pxref{Functions, ,Functions for Transforming Text}). @cindex spaces, in variable values @cindex whitespace, in variable values @cindex variables, spaces in values You can also use them to introduce controlled leading whitespace into variable values. Leading whitespace characters are discarded from your input before substitution of variable references and function calls; this means you can include leading spaces in a variable value by protecting them with variable references, like this: @example nullstring := space := $(nullstring) # end of the line @end example @noindent Here the value of the variable @code{space} is precisely one space. The comment @w{@samp{# end of the line}} is included here just for clarity. Since trailing space characters are @emph{not} stripped from variable values, just a space at the end of the line would have the same effect (but be rather hard to read). If you put whitespace at the end of a variable value, it is a good idea to put a comment like that at the end of the line to make your intent clear. Conversely, if you do @emph{not} want any whitespace characters at the end of your variable value, you must remember not to put a random comment on the end of the line after some whitespace, such as this: @example dir := /foo/bar # directory to put the frobs in @end example @noindent Here the value of the variable @code{dir} is @w{@samp{/foo/bar }} (with four trailing spaces), which was probably not the intention. (Imagine something like @w{@samp{$(dir)/file}} with this definition!) @cindex conditional variable assignment @cindex variables, conditional assignment @cindex ?= There is another assignment operator for variables, @samp{?=}. This is called a conditional variable assignment operator, because it only has an effect if the variable is not yet defined. 
This statement: @example FOO ?= bar @end example @noindent is exactly equivalent to this (@pxref{Origin Function, ,The @code{origin} Function}): @example ifeq ($(origin FOO), undefined) FOO = bar endif @end example Note that a variable set to an empty value is still defined, so @samp{?=} will not set that variable. @node Advanced, Values, Flavors, Using Variables @section Advanced Features for Reference to Variables @cindex reference to variables This section describes some advanced features you can use to reference variables in more flexible ways. @menu * Substitution Refs:: Referencing a variable with substitutions on the value. * Computed Names:: Computing the name of the variable to refer to. @end menu @node Substitution Refs, Computed Names, Advanced, Advanced @subsection Substitution References @cindex modified variable reference @cindex substitution variable reference @cindex variables, modified reference @cindex variables, substitution reference @cindex variables, substituting suffix in @cindex suffix, substituting in variables A @dfn{substitution reference} substitutes the value of a variable with alterations that you specify. It has the form @samp{$(@var{var}:@var{a}=@var{b})} (or @samp{$@{@var{var}:@var{a}=@var{b}@}}) and its meaning is to take the value of the variable @var{var}, replace every @var{a} at the end of a word with @var{b} in that value, and substitute the resulting string. When we say ``at the end of a word'', we mean that @var{a} must appear either followed by whitespace or at the end of the value in order to be replaced; other occurrences of @var{a} in the value are unaltered. For example:@refill @example foo := a.o b.o c.o bar := $(foo:.o=.c) @end example @noindent sets @samp{bar} to @samp{a.c b.c c.c}. @xref{Setting, ,Setting Variables}. A substitution reference is actually an abbreviation for use of the @code{patsubst} expansion function (@pxref{Text Functions, ,Functions for String Substitution and Analysis}). We provide substitution references as well as @code{patsubst} for compatibility with other implementations of @code{make}. @findex patsubst Another type of substitution reference lets you use the full power of the @code{patsubst} function. It has the same form @samp{$(@var{var}:@var{a}=@var{b})} described above, except that now @var{a} must contain a single @samp{%} character. This case is equivalent to @samp{$(patsubst @var{a},@var{b},$(@var{var}))}. @xref{Text Functions, ,Functions for String Substitution and Analysis}, for a description of the @code{patsubst} function.@refill @example @group @exdent For example: foo := a.o b.o c.o bar := $(foo:%.o=%.c) @end group @end example @noindent sets @samp{bar} to @samp{a.c b.c c.c}. @node Computed Names, , Substitution Refs, Advanced @subsection Computed Variable Names @cindex nested variable reference @cindex computed variable name @cindex variables, computed names @cindex variables, nested references @cindex variables, @samp{$} in name @cindex @code{$}, in variable name @cindex dollar sign (@code{$}), in variable name Computed variable names are a complicated concept needed only for sophisticated makefile programming. For most purposes you need not consider them, except to know that making a variable with a dollar sign in its name might have strange results. However, if you are the type that wants to understand everything, or you are actually interested in what they do, read on. Variables may be referenced inside the name of a variable. 
This is called a @dfn{computed variable name} or a @dfn{nested variable reference}. For example, @example x = y y = z a := $($(x)) @end example @noindent defines @code{a} as @samp{z}: the @samp{$(x)} inside @samp{$($(x))} expands to @samp{y}, so @samp{$($(x))} expands to @samp{$(y)} which in turn expands to @samp{z}. Here the name of the variable to reference is not stated explicitly; it is computed by expansion of @samp{$(x)}. The reference @samp{$(x)} here is nested within the outer variable reference. The previous example shows two levels of nesting, but any number of levels is possible. For example, here are three levels: @example x = y y = z z = u a := $($($(x))) @end example @noindent Here the innermost @samp{$(x)} expands to @samp{y}, so @samp{$($(x))} expands to @samp{$(y)} which in turn expands to @samp{z}; now we have @samp{$(z)}, which becomes @samp{u}. References to recursively-expanded variables within a variable name are reexpanded in the usual fashion. For example: @example x = $(y) y = z z = Hello a := $($(x)) @end example @noindent defines @code{a} as @samp{Hello}: @samp{$($(x))} becomes @samp{$($(y))} which becomes @samp{$(z)} which becomes @samp{Hello}. Nested variable references can also contain modified references and function invocations (@pxref{Functions, ,Functions for Transforming Text}), just like any other reference. For example, using the @code{subst} function (@pxref{Text Functions, ,Functions for String Substitution and Analysis}): @example @group x = variable1 variable2 := Hello y = $(subst 1,2,$(x)) z = y a := $($($(z))) @end group @end example @noindent eventually defines @code{a} as @samp{Hello}. It is doubtful that anyone would ever want to write a nested reference as convoluted as this one, but it works: @samp{$($($(z)))} expands to @samp{$($(y))} which becomes @samp{$($(subst 1,2,$(x)))}. This gets the value @samp{variable1} from @code{x} and changes it by substitution to @samp{variable2}, so that the entire string becomes @samp{$(variable2)}, a simple variable reference whose value is @samp{Hello}.@refill A computed variable name need not consist entirely of a single variable reference. It can contain several variable references, as well as some invariant text. For example, @example @group a_dirs := dira dirb 1_dirs := dir1 dir2 @end group @group a_files := filea fileb 1_files := file1 file2 @end group @group ifeq "$(use_a)" "yes" a1 := a else a1 := 1 endif @end group @group ifeq "$(use_dirs)" "yes" df := dirs else df := files endif dirs := $($(a1)_$(df)) @end group @end example @noindent will give @code{dirs} the same value as @code{a_dirs}, @code{1_dirs}, @code{a_files} or @code{1_files} depending on the settings of @code{use_a} and @code{use_dirs}.@refill Computed variable names can also be used in substitution references: @example @group a_objects := a.o b.o c.o 1_objects := 1.o 2.o 3.o sources := $($(a1)_objects:.o=.c) @end group @end example @noindent defines @code{sources} as either @samp{a.c b.c c.c} or @samp{1.c 2.c 3.c}, depending on the value of @code{a1}. The only restriction on this sort of use of nested variable references is that they cannot specify part of the name of a function to be called. This is because the test for a recognized function name is done before the expansion of nested references. 
For example, @example @group ifdef do_sort func := sort else func := strip endif @end group @group bar := a d b g q c @end group @group foo := $($(func) $(bar)) @end group @end example @noindent attempts to give @samp{foo} the value of the variable @samp{sort a d b g q c} or @samp{strip a d b g q c}, rather than giving @samp{a d b g q c} as the argument to either the @code{sort} or the @code{strip} function. This restriction could be removed in the future if that change is shown to be a good idea. You can also use computed variable names in the left-hand side of a variable assignment, or in a @code{define} directive, as in: @example dir = foo $(dir)_sources := $(wildcard $(dir)/*.c) define $(dir)_print lpr $($(dir)_sources) endef @end example @noindent This example defines the variables @samp{dir}, @samp{foo_sources}, and @samp{foo_print}. Note that @dfn{nested variable references} are quite different from @dfn{recursively expanded variables} (@pxref{Flavors, ,The Two Flavors of Variables}), though both are used together in complex ways when doing makefile programming.@refill @node Values, Setting, Advanced, Using Variables @section How Variables Get Their Values @cindex variables, how they get their values @cindex value, how a variable gets it Variables can get values in several different ways: @itemize @bullet @item You can specify an overriding value when you run @code{make}. @xref{Overriding, ,Overriding Variables}. @item You can specify a value in the makefile, either with an assignment (@pxref{Setting, ,Setting Variables}) or with a verbatim definition (@pxref{Defining, ,Defining Variables Verbatim}).@refill @item Variables in the environment become @code{make} variables. @xref{Environment, ,Variables from the Environment}. @item Several @dfn{automatic} variables are given new values for each rule. Each of these has a single conventional use. @xref{Automatic Variables}. @item Several variables have constant initial values. @xref{Implicit Variables, ,Variables Used by Implicit Rules}. @end itemize @node Setting, Appending, Values, Using Variables @section Setting Variables @cindex setting variables @cindex variables, setting @cindex = @cindex := @cindex ?= To set a variable from the makefile, write a line starting with the variable name followed by @samp{=} or @samp{:=}. Whatever follows the @samp{=} or @samp{:=} on the line becomes the value. For example, @example objects = main.o foo.o bar.o utils.o @end example @noindent defines a variable named @code{objects}. Whitespace around the variable name and immediately after the @samp{=} is ignored. Variables defined with @samp{=} are @dfn{recursively expanded} variables. Variables defined with @samp{:=} are @dfn{simply expanded} variables; these definitions can contain variable references which will be expanded before the definition is made. @xref{Flavors, ,The Two Flavors of Variables}. The variable name may contain function and variable references, which are expanded when the line is read to find the actual variable name to use. There is no limit on the length of the value of a variable except the amount of swapping space on the computer. When a variable definition is long, it is a good idea to break it into several lines by inserting backslash-newline at convenient places in the definition. This will not affect the functioning of @code{make}, but it will make the makefile easier to read. Most variable names are considered to have the empty string as a value if you have never set them. 
Several variables have built-in initial values that are not empty, but you can set them in the usual ways (@pxref{Implicit Variables, ,Variables Used by Implicit Rules}). Several special variables are set automatically to a new value for each rule; these are called the @dfn{automatic} variables (@pxref{Automatic Variables}). If you'd like a variable to be set to a value only if it's not already set, then you can use the shorthand operator @samp{?=} instead of @samp{=}. These two settings of the variable @samp{FOO} are identical (@pxref{Origin Function, ,The @code{origin} Function}): @example FOO ?= bar @end example @noindent and @example ifeq ($(origin FOO), undefined) FOO = bar endif @end example @node Appending, Override Directive, Setting, Using Variables @section Appending More Text to Variables @cindex += @cindex appending to variables @cindex variables, appending to Often it is useful to add more text to the value of a variable already defined. You do this with a line containing @samp{+=}, like this: @example objects += another.o @end example @noindent This takes the value of the variable @code{objects}, and adds the text @samp{another.o} to it (preceded by a single space). Thus: @example objects = main.o foo.o bar.o utils.o objects += another.o @end example @noindent sets @code{objects} to @samp{main.o foo.o bar.o utils.o another.o}. Using @samp{+=} is similar to: @example objects = main.o foo.o bar.o utils.o objects := $(objects) another.o @end example @noindent but differs in ways that become important when you use more complex values. When the variable in question has not been defined before, @samp{+=} acts just like normal @samp{=}: it defines a recursively-expanded variable. However, when there @emph{is} a previous definition, exactly what @samp{+=} does depends on what flavor of variable you defined originally. @xref{Flavors, ,The Two Flavors of Variables}, for an explanation of the two flavors of variables. When you add to a variable's value with @samp{+=}, @code{make} acts essentially as if you had included the extra text in the initial definition of the variable. If you defined it first with @samp{:=}, making it a simply-expanded variable, @samp{+=} adds to that simply-expanded definition, and expands the new text before appending it to the old value just as @samp{:=} does (see @ref{Setting, ,Setting Variables}, for a full explanation of @samp{:=}). In fact, @example variable := value variable += more @end example @noindent is exactly equivalent to: @noindent @example variable := value variable := $(variable) more @end example On the other hand, when you use @samp{+=} with a variable that you defined first to be recursively-expanded using plain @samp{=}, @code{make} does something a bit different. Recall that when you define a recursively-expanded variable, @code{make} does not expand the value you set for variable and function references immediately. Instead it stores the text verbatim, and saves these variable and function references to be expanded later, when you refer to the new variable (@pxref{Flavors, ,The Two Flavors of Variables}). When you use @samp{+=} on a recursively-expanded variable, it is this unexpanded text to which @code{make} appends the new text you specify. @example @group variable = value variable += more @end group @end example @noindent is roughly equivalent to: @example @group temp = value variable = $(temp) more @end group @end example @noindent except that of course it never defines a variable called @code{temp}. 
The importance of this comes when the variable's old value contains variable references. Take this common example: @example CFLAGS = $(includes) -O @dots{} CFLAGS += -pg # enable profiling @end example @noindent The first line defines the @code{CFLAGS} variable with a reference to another variable, @code{includes}. (@code{CFLAGS} is used by the rules for C compilation; @pxref{Catalogue of Rules, ,Catalogue of Implicit Rules}.) Using @samp{=} for the definition makes @code{CFLAGS} a recursively-expanded variable, meaning @w{@samp{$(includes) -O}} is @emph{not} expanded when @code{make} processes the definition of @code{CFLAGS}. Thus, @code{includes} need not be defined yet for its value to take effect. It only has to be defined before any reference to @code{CFLAGS}. If we tried to append to the value of @code{CFLAGS} without using @samp{+=}, we might do it like this: @example CFLAGS := $(CFLAGS) -pg # enable profiling @end example @noindent This is pretty close, but not quite what we want. Using @samp{:=} redefines @code{CFLAGS} as a simply-expanded variable; this means @code{make} expands the text @w{@samp{$(CFLAGS) -pg}} before setting the variable. If @code{includes} is not yet defined, we get @w{@samp{ -O -pg}}, and a later definition of @code{includes} will have no effect. Conversely, by using @samp{+=} we set @code{CFLAGS} to the @emph{unexpanded} value @w{@samp{$(includes) -O -pg}}. Thus we preserve the reference to @code{includes}, so if that variable gets defined at any later point, a reference like @samp{$(CFLAGS)} still uses its value. @node Override Directive, Defining, Appending, Using Variables @section The @code{override} Directive @findex override @cindex overriding with @code{override} @cindex variables, overriding If a variable has been set with a command argument (@pxref{Overriding, ,Overriding Variables}), then ordinary assignments in the makefile are ignored. If you want to set the variable in the makefile even though it was set with a command argument, you can use an @code{override} directive, which is a line that looks like this:@refill @example override @var{variable} = @var{value} @end example @noindent or @example override @var{variable} := @var{value} @end example To append more text to a variable defined on the command line, use: @example override @var{variable} += @var{more text} @end example @noindent @xref{Appending, ,Appending More Text to Variables}. The @code{override} directive was not invented for escalation in the war between makefiles and command arguments. It was invented so you can alter and add to values that the user specifies with command arguments. For example, suppose you always want the @samp{-g} switch when you run the C compiler, but you would like to allow the user to specify the other switches with a command argument just as usual. You could use this @code{override} directive: @example override CFLAGS += -g @end example You can also use @code{override} directives with @code{define} directives. This is done as you might expect: @example override define foo bar endef @end example @noindent @iftex See the next section for information about @code{define}. @end iftex @ifnottex @xref{Defining, ,Defining Variables Verbatim}. 
@end ifnottex @node Defining, Environment, Override Directive, Using Variables @section Defining Variables Verbatim @findex define @findex endef @cindex verbatim variable definition @cindex defining variables verbatim @cindex variables, defining verbatim Another way to set the value of a variable is to use the @code{define} directive. This directive has an unusual syntax which allows newline characters to be included in the value, which is convenient for defining both canned sequences of commands (@pxref{Sequences, ,Defining Canned Command Sequences}), and also sections of makefile syntax to use with @code{eval} (@pxref{Eval Function}). The @code{define} directive is followed on the same line by the name of the variable and nothing more. The value to give the variable appears on the following lines. The end of the value is marked by a line containing just the word @code{endef}. Aside from this difference in syntax, @code{define} works just like @samp{=}: it creates a recursively-expanded variable (@pxref{Flavors, ,The Two Flavors of Variables}). The variable name may contain function and variable references, which are expanded when the directive is read to find the actual variable name to use. You may nest @code{define} directives: @code{make} will keep track of nested directives and report an error if they are not all properly closed with @code{endef}. Note that lines beginning with tab characters are considered part of a command script, so any @code{define} or @code{endef} strings appearing on such a line will not be considered @code{make} operators. @example define two-lines echo foo echo $(bar) endef @end example The value in an ordinary assignment cannot contain a newline; but the newlines that separate the lines of the value in a @code{define} become part of the variable's value (except for the final newline which precedes the @code{endef} and is not considered part of the value).@refill @need 800 When used in a command script, the previous example is functionally equivalent to this: @example two-lines = echo foo; echo $(bar) @end example @noindent since two commands separated by semicolon behave much like two separate shell commands. However, note that using two separate lines means @code{make} will invoke the shell twice, running an independent subshell for each line. @xref{Execution, ,Command Execution}. If you want variable definitions made with @code{define} to take precedence over command-line variable definitions, you can use the @code{override} directive together with @code{define}: @example override define two-lines foo $(bar) endef @end example @noindent @xref{Override Directive, ,The @code{override} Directive}. @node Environment, Target-specific, Defining, Using Variables @section Variables from the Environment @cindex variables, environment @cindex environment Variables in @code{make} can come from the environment in which @code{make} is run. Every environment variable that @code{make} sees when it starts up is transformed into a @code{make} variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment. (If the @samp{-e} flag is specified, then values from the environment override assignments in the makefile. @xref{Options Summary, ,Summary of Options}. But this is not recommended practice.) Thus, by setting the variable @code{CFLAGS} in your environment, you can cause all C compilations in most makefiles to use the compiler switches you prefer. 
This is safe for variables with standard or conventional meanings because you know that no makefile will use them for other things. (Note this is not totally reliable; some makefiles set @code{CFLAGS} explicitly and therefore are not affected by the value in the environment.) When @code{make} runs a command script, variables defined in the makefile are placed into the environment of that command. This allows you to pass values to sub-@code{make} invocations (@pxref{Recursion, ,Recursive Use of @code{make}}). By default, only variables that came from the environment or the command line are passed to recursive invocations. You can use the @code{export} directive to pass other variables. @xref{Variables/Recursion, , Communicating Variables to a Sub-@code{make}}, for full details. Other use of variables from the environment is not recommended. It is not wise for makefiles to depend for their functioning on environment variables set up outside their control, since this would cause different users to get different results from the same makefile. This is against the whole purpose of most makefiles. @cindex SHELL, import from environment Such problems would be especially likely with the variable @code{SHELL}, which is normally present in the environment to specify the user's choice of interactive shell. It would be very undesirable for this choice to affect @code{make}; so, @code{make} handles the @code{SHELL} environment variable in a special way; see @ref{Choosing the Shell}.@refill @node Target-specific, Pattern-specific, Environment, Using Variables @section Target-specific Variable Values @cindex target-specific variables @cindex variables, target-specific Variable values in @code{make} are usually global; that is, they are the same regardless of where they are evaluated (unless they're reset, of course). One exception to that is automatic variables (@pxref{Automatic Variables}). The other exception is @dfn{target-specific variable values}. This feature allows you to define different values for the same variable, based on the target that @code{make} is currently building. As with automatic variables, these values are only available within the context of a target's command script (and in other target-specific assignments). Set a target-specific variable value like this: @example @var{target} @dots{} : @var{variable-assignment} @end example @noindent or like this: @example @var{target} @dots{} : override @var{variable-assignment} @end example @noindent or like this: @example @var{target} @dots{} : export @var{variable-assignment} @end example Multiple @var{target} values create a target-specific variable value for each member of the target list individually. The @var{variable-assignment} can be any valid form of assignment; recursive (@samp{=}), static (@samp{:=}), appending (@samp{+=}), or conditional (@samp{?=}). All variables that appear within the @var{variable-assignment} are evaluated within the context of the target: thus, any previously-defined target-specific variable values will be in effect. Note that this variable is actually distinct from any ``global'' value: the two variables do not have to have the same flavor (recursive vs.@: static). Target-specific variables have the same priority as any other makefile variable. Variables provided on the command-line (and in the environment if the @samp{-e} option is in force) will take precedence. Specifying the @code{override} directive will allow the target-specific variable value to be preferred. 
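For instance, here is a minimal sketch of the syntax above (the target names, the @samp{-DDEBUG} switch and the @code{VERBOSE} variable are only illustrative, not taken from any real makefile):

@example
@group
# `prog', `tests' and the settings below are hypothetical.
prog tests : CFLAGS += -DDEBUG
prog tests : export VERBOSE := 1
@end group
@end example

@noindent
Each of @file{prog} and @file{tests} gets its own target-specific value of @code{CFLAGS} and of @code{VERBOSE}; the @code{export} keyword additionally places @code{VERBOSE} into the environment of the commands run for those targets.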
There is one more special feature of target-specific variables: when you define a target-specific variable that variable value is also in effect for all prerequisites of this target, and all their prerequisites, etc.@: (unless those prerequisites override that variable with their own target-specific variable value). So, for example, a statement like this: @example prog : CFLAGS = -g prog : prog.o foo.o bar.o @end example @noindent will set @code{CFLAGS} to @samp{-g} in the command script for @file{prog}, but it will also set @code{CFLAGS} to @samp{-g} in the command scripts that create @file{prog.o}, @file{foo.o}, and @file{bar.o}, and any command scripts which create their prerequisites. Be aware that a given prerequisite will only be built once per invocation of make, at most. If the same file is a prerequisite of multiple targets, and each of those targets has a different value for the same target-specific variable, then the first target to be built will cause that prerequisite to be built and the prerequisite will inherit the target-specific value from the first target. It will ignore the target-specific values from any other targets. @node Pattern-specific, , Target-specific, Using Variables @section Pattern-specific Variable Values @cindex pattern-specific variables @cindex variables, pattern-specific In addition to target-specific variable values (@pxref{Target-specific, ,Target-specific Variable Values}), GNU @code{make} supports pattern-specific variable values. In this form, the variable is defined for any target that matches the pattern specified. If a target matches more than one pattern, all the matching pattern-specific variables are interpreted in the order in which they were defined in the makefile, and collected together into one set. Variables defined in this way are searched after any target-specific variables defined explicitly for that target, and before target-specific variables defined for the parent target. Set a pattern-specific variable value like this: @example @var{pattern} @dots{} : @var{variable-assignment} @end example @noindent or like this: @example @var{pattern} @dots{} : override @var{variable-assignment} @end example @noindent where @var{pattern} is a %-pattern. As with target-specific variable values, multiple @var{pattern} values create a pattern-specific variable value for each pattern individually. The @var{variable-assignment} can be any valid form of assignment. Any command-line variable setting will take precedence, unless @code{override} is specified. For example: @example %.o : CFLAGS = -O @end example @noindent will assign @code{CFLAGS} the value of @samp{-O} for all targets matching the pattern @code{%.o}. @node Conditionals, Functions, Using Variables, Top @chapter Conditional Parts of Makefiles @cindex conditionals A @dfn{conditional} causes part of a makefile to be obeyed or ignored depending on the values of variables. Conditionals can compare the value of one variable to another, or the value of a variable to a constant string. Conditionals control what @code{make} actually ``sees'' in the makefile, so they @emph{cannot} be used to control shell commands at the time of execution.@refill @menu * Conditional Example:: Example of a conditional * Conditional Syntax:: The syntax of conditionals. * Testing Flags:: Conditionals that test flags. 
@end menu @node Conditional Example, Conditional Syntax, Conditionals, Conditionals @section Example of a Conditional The following example of a conditional tells @code{make} to use one set of libraries if the @code{CC} variable is @samp{gcc}, and a different set of libraries otherwise. It works by controlling which of two command lines will be used as the command for a rule. The result is that @samp{CC=gcc} as an argument to @code{make} changes not only which compiler is used but also which libraries are linked. @example libs_for_gcc = -lgnu normal_libs = foo: $(objects) ifeq ($(CC),gcc) $(CC) -o foo $(objects) $(libs_for_gcc) else $(CC) -o foo $(objects) $(normal_libs) endif @end example This conditional uses three directives: one @code{ifeq}, one @code{else} and one @code{endif}. The @code{ifeq} directive begins the conditional, and specifies the condition. It contains two arguments, separated by a comma and surrounded by parentheses. Variable substitution is performed on both arguments and then they are compared. The lines of the makefile following the @code{ifeq} are obeyed if the two arguments match; otherwise they are ignored. The @code{else} directive causes the following lines to be obeyed if the previous conditional failed. In the example above, this means that the second alternative linking command is used whenever the first alternative is not used. It is optional to have an @code{else} in a conditional. The @code{endif} directive ends the conditional. Every conditional must end with an @code{endif}. Unconditional makefile text follows. As this example illustrates, conditionals work at the textual level: the lines of the conditional are treated as part of the makefile, or ignored, according to the condition. This is why the larger syntactic units of the makefile, such as rules, may cross the beginning or the end of the conditional. When the variable @code{CC} has the value @samp{gcc}, the above example has this effect: @example foo: $(objects) $(CC) -o foo $(objects) $(libs_for_gcc) @end example @noindent When the variable @code{CC} has any other value, the effect is this: @example foo: $(objects) $(CC) -o foo $(objects) $(normal_libs) @end example Equivalent results can be obtained in another way by conditionalizing a variable assignment and then using the variable unconditionally: @example libs_for_gcc = -lgnu normal_libs = ifeq ($(CC),gcc) libs=$(libs_for_gcc) else libs=$(normal_libs) endif foo: $(objects) $(CC) -o foo $(objects) $(libs) @end example @node Conditional Syntax, Testing Flags, Conditional Example, Conditionals @section Syntax of Conditionals @findex ifdef @findex ifeq @findex ifndef @findex ifneq @findex else @findex endif The syntax of a simple conditional with no @code{else} is as follows: @example @var{conditional-directive} @var{text-if-true} endif @end example @noindent The @var{text-if-true} may be any lines of text, to be considered as part of the makefile if the condition is true. If the condition is false, no text is used instead. The syntax of a complex conditional is as follows: @example @var{conditional-directive} @var{text-if-true} else @var{text-if-false} endif @end example or: @example @var{conditional-directive} @var{text-if-one-is-true} else @var{conditional-directive} @var{text-if-true} else @var{text-if-false} endif @end example @noindent There can be as many ``@code{else} @var{conditional-directive}'' clauses as necessary. 
Once a given condition is true, @var{text-if-true} is used and no other clause is used; if no condition is true then @var{text-if-false} is used. The @var{text-if-true} and @var{text-if-false} can be any number of lines of text. The syntax of the @var{conditional-directive} is the same whether the conditional is simple or complex, and whether or not it follows an @code{else}. There are four different directives that test different conditions. Here is a table of them: @table @code @item ifeq (@var{arg1}, @var{arg2}) @itemx ifeq '@var{arg1}' '@var{arg2}' @itemx ifeq "@var{arg1}" "@var{arg2}" @itemx ifeq "@var{arg1}" '@var{arg2}' @itemx ifeq '@var{arg1}' "@var{arg2}" Expand all variable references in @var{arg1} and @var{arg2} and compare them. If they are identical, the @var{text-if-true} is effective; otherwise, the @var{text-if-false}, if any, is effective. Often you want to test if a variable has a non-empty value. When the value results from complex expansions of variables and functions, expansions you would consider empty may actually contain whitespace characters and thus are not seen as empty. However, you can use the @code{strip} function (@pxref{Text Functions}) to avoid interpreting whitespace as a non-empty value. For example: @example @group ifeq ($(strip $(foo)),) @var{text-if-empty} endif @end group @end example @noindent will evaluate @var{text-if-empty} even if the expansion of @code{$(foo)} contains whitespace characters. @item ifneq (@var{arg1}, @var{arg2}) @itemx ifneq '@var{arg1}' '@var{arg2}' @itemx ifneq "@var{arg1}" "@var{arg2}" @itemx ifneq "@var{arg1}" '@var{arg2}' @itemx ifneq '@var{arg1}' "@var{arg2}" Expand all variable references in @var{arg1} and @var{arg2} and compare them. If they are different, the @var{text-if-true} is effective; otherwise, the @var{text-if-false}, if any, is effective. @item ifdef @var{variable-name} The @code{ifdef} form takes the @emph{name} of a variable as its argument, not a reference to a variable. If the value of that variable is non-empty, the @var{text-if-true} is effective; otherwise, the @var{text-if-false}, if any, is effective. Variables that have never been defined have an empty value. The text @var{variable-name} is expanded, so it could be a variable or function that expands to the name of a variable. For example: @example bar = true foo = bar ifdef $(foo) frobozz = yes endif @end example The variable reference @code{$(foo)} is expanded, yielding @code{bar}, which is considered to be the name of a variable. The variable @code{bar} is not expanded, but its value is examined to determine if it is non-empty. Note that @code{ifdef} only tests whether a variable has a value. It does not expand the variable to see if that value is nonempty. Consequently, tests using @code{ifdef} return true for all definitions except those like @code{foo =}. To test for an empty value, use @w{@code{ifeq ($(foo),)}}. For example, @example bar = foo = $(bar) ifdef foo frobozz = yes else frobozz = no endif @end example @noindent sets @samp{frobozz} to @samp{yes}, while: @example foo = ifdef foo frobozz = yes else frobozz = no endif @end example @noindent sets @samp{frobozz} to @samp{no}. @item ifndef @var{variable-name} If the variable @var{variable-name} has an empty value, the @var{text-if-true} is effective; otherwise, the @var{text-if-false}, if any, is effective. The rules for expansion and testing of @var{variable-name} are identical to the @code{ifdef} directive. 
@end table Extra spaces are allowed and ignored at the beginning of the conditional directive line, but a tab is not allowed. (If the line begins with a tab, it will be considered a command for a rule.) Aside from this, extra spaces or tabs may be inserted with no effect anywhere except within the directive name or within an argument. A comment starting with @samp{#} may appear at the end of the line. The other two directives that play a part in a conditional are @code{else} and @code{endif}. Each of these directives is written as one word, with no arguments. Extra spaces are allowed and ignored at the beginning of the line, and spaces or tabs at the end. A comment starting with @samp{#} may appear at the end of the line. Conditionals affect which lines of the makefile @code{make} uses. If the condition is true, @code{make} reads the lines of the @var{text-if-true} as part of the makefile; if the condition is false, @code{make} ignores those lines completely. It follows that syntactic units of the makefile, such as rules, may safely be split across the beginning or the end of the conditional.@refill @code{make} evaluates conditionals when it reads a makefile. Consequently, you cannot use automatic variables in the tests of conditionals because they are not defined until commands are run (@pxref{Automatic Variables}). To prevent intolerable confusion, it is not permitted to start a conditional in one makefile and end it in another. However, you may write an @code{include} directive within a conditional, provided you do not attempt to terminate the conditional inside the included file. @node Testing Flags, , Conditional Syntax, Conditionals @section Conditionals that Test Flags You can write a conditional that tests @code{make} command flags such as @samp{-t} by using the variable @code{MAKEFLAGS} together with the @code{findstring} function (@pxref{Text Functions, , Functions for String Substitution and Analysis}). This is useful when @code{touch} is not enough to make a file appear up to date. The @code{findstring} function determines whether one string appears as a substring of another. If you want to test for the @samp{-t} flag, use @samp{t} as the first string and the value of @code{MAKEFLAGS} as the other. For example, here is how to arrange to use @samp{ranlib -t} to finish marking an archive file up to date: @example archive.a: @dots{} ifneq (,$(findstring t,$(MAKEFLAGS))) +touch archive.a +ranlib -t archive.a else ranlib archive.a endif @end example @noindent The @samp{+} prefix marks those command lines as ``recursive'' so that they will be executed despite use of the @samp{-t} flag. @xref{Recursion, ,Recursive Use of @code{make}}. @node Functions, Running, Conditionals, Top @chapter Functions for Transforming Text @cindex functions @dfn{Functions} allow you to do text processing in the makefile to compute the files to operate on or the commands to use. You use a function in a @dfn{function call}, where you give the name of the function and some text (the @dfn{arguments}) for the function to operate on. The result of the function's processing is substituted into the makefile at the point of the call, just as a variable might be substituted. @menu * Syntax of Functions:: How to write a function call. * Text Functions:: General-purpose text manipulation functions. * File Name Functions:: Functions for manipulating file names. * Conditional Functions:: Functions that implement conditions. * Foreach Function:: Repeat some text with controlled variation. 
* Call Function:: Expand a user-defined function. * Value Function:: Return the un-expanded value of a variable. * Eval Function:: Evaluate the arguments as makefile syntax. * Origin Function:: Find where a variable got its value. * Flavor Function:: Find out the flavor of a variable. * Shell Function:: Substitute the output of a shell command. * Make Control Functions:: Functions that control how make runs. @end menu @node Syntax of Functions, Text Functions, Functions, Functions @section Function Call Syntax @cindex @code{$}, in function call @cindex dollar sign (@code{$}), in function call @cindex arguments of functions @cindex functions, syntax of A function call resembles a variable reference. It looks like this: @example $(@var{function} @var{arguments}) @end example @noindent or like this: @example $@{@var{function} @var{arguments}@} @end example Here @var{function} is a function name; one of a short list of names that are part of @code{make}. You can also essentially create your own functions by using the @code{call} builtin function. The @var{arguments} are the arguments of the function. They are separated from the function name by one or more spaces or tabs, and if there is more than one argument, then they are separated by commas. Such whitespace and commas are not part of an argument's value. The delimiters which you use to surround the function call, whether parentheses or braces, can appear in an argument only in matching pairs; the other kind of delimiters may appear singly. If the arguments themselves contain other function calls or variable references, it is wisest to use the same kind of delimiters for all the references; write @w{@samp{$(subst a,b,$(x))}}, not @w{@samp{$(subst a,b,$@{x@})}}. This is because it is clearer, and because only one type of delimiter is matched to find the end of the reference. The text written for each argument is processed by substitution of variables and function calls to produce the argument value, which is the text on which the function acts. The substitution is done in the order in which the arguments appear. Commas and unmatched parentheses or braces cannot appear in the text of an argument as written; leading spaces cannot appear in the text of the first argument as written. These characters can be put into the argument value by variable substitution. First define variables @code{comma} and @code{space} whose values are isolated comma and space characters, then substitute these variables where such characters are wanted, like this: @example @group comma:= , empty:= space:= $(empty) $(empty) foo:= a b c bar:= $(subst $(space),$(comma),$(foo)) # @r{bar is now `a,b,c'.} @end group @end example @noindent Here the @code{subst} function replaces each space with a comma, through the value of @code{foo}, and substitutes the result. @node Text Functions, File Name Functions, Syntax of Functions, Functions @section Functions for String Substitution and Analysis @cindex functions, for text Here are some functions that operate on strings: @table @code @item $(subst @var{from},@var{to},@var{text}) @findex subst Performs a textual replacement on the text @var{text}: each occurrence of @var{from} is replaced by @var{to}. The result is substituted for the function call. For example, @example $(subst ee,EE,feet on the street) @end example substitutes the string @samp{fEEt on the strEEt}. 
@item $(patsubst @var{pattern},@var{replacement},@var{text}) @findex patsubst Finds whitespace-separated words in @var{text} that match @var{pattern} and replaces them with @var{replacement}. Here @var{pattern} may contain a @samp{%} which acts as a wildcard, matching any number of any characters within a word. If @var{replacement} also contains a @samp{%}, the @samp{%} is replaced by the text that matched the @samp{%} in @var{pattern}. Only the first @samp{%} in the @var{pattern} and @var{replacement} is treated this way; any subsequent @samp{%} is unchanged.@refill @cindex @code{%}, quoting in @code{patsubst} @cindex @code{%}, quoting with @code{\} (backslash) @cindex @code{\} (backslash), to quote @code{%} @cindex backslash (@code{\}), to quote @code{%} @cindex quoting @code{%}, in @code{patsubst} @samp{%} characters in @code{patsubst} function invocations can be quoted with preceding backslashes (@samp{\}). Backslashes that would otherwise quote @samp{%} characters can be quoted with more backslashes. Backslashes that quote @samp{%} characters or other backslashes are removed from the pattern before it is compared against file names or has a stem substituted into it. Backslashes that are not in danger of quoting @samp{%} characters go unmolested. For example, the pattern @file{the\%weird\\%pattern\\} has @samp{the%weird\} preceding the operative @samp{%} character, and @samp{pattern\\} following it. The final two backslashes are left alone because they cannot affect any @samp{%} character.@refill Whitespace between words is folded into single space characters; leading and trailing whitespace is discarded. For example, @example $(patsubst %.c,%.o,x.c.c bar.c) @end example @noindent produces the value @samp{x.c.o bar.o}. Substitution references (@pxref{Substitution Refs, ,Substitution References}) are a simpler way to get the effect of the @code{patsubst} function: @example $(@var{var}:@var{pattern}=@var{replacement}) @end example @noindent is equivalent to @example $(patsubst @var{pattern},@var{replacement},$(@var{var})) @end example The second shorthand simplifies one of the most common uses of @code{patsubst}: replacing the suffix at the end of file names. @example $(@var{var}:@var{suffix}=@var{replacement}) @end example @noindent is equivalent to @example $(patsubst %@var{suffix},%@var{replacement},$(@var{var})) @end example @noindent For example, you might have a list of object files: @example objects = foo.o bar.o baz.o @end example @noindent To get the list of corresponding source files, you could simply write: @example $(objects:.o=.c) @end example @noindent instead of using the general form: @example $(patsubst %.o,%.c,$(objects)) @end example @item $(strip @var{string}) @cindex stripping whitespace @cindex whitespace, stripping @cindex spaces, stripping @findex strip Removes leading and trailing whitespace from @var{string} and replaces each internal sequence of one or more whitespace characters with a single space. Thus, @samp{$(strip a b c )} results in @w{@samp{a b c}}. The function @code{strip} can be very useful when used in conjunction with conditionals. When comparing something with the empty string @samp{} using @code{ifeq} or @code{ifneq}, you usually want a string of just whitespace to match the empty string (@pxref{Conditionals}). Thus, the following may fail to have the desired results: @example .PHONY: all ifneq "$(needs_made)" "" all: $(needs_made) else all:;@@echo 'Nothing to make!' 
endif @end example @noindent Replacing the variable reference @w{@samp{$(needs_made)}} with the function call @w{@samp{$(strip $(needs_made))}} in the @code{ifneq} directive would make it more robust.@refill @item $(findstring @var{find},@var{in}) @findex findstring @cindex searching for strings @cindex finding strings @cindex strings, searching for Searches @var{in} for an occurrence of @var{find}. If it occurs, the value is @var{find}; otherwise, the value is empty. You can use this function in a conditional to test for the presence of a specific substring in a given string. Thus, the two examples, @example $(findstring a,a b c) $(findstring a,b c) @end example @noindent produce the values @samp{a} and @samp{} (the empty string), respectively. @xref{Testing Flags}, for a practical application of @code{findstring}.@refill @need 750 @findex filter @cindex filtering words @cindex words, filtering @item $(filter @var{pattern}@dots{},@var{text}) Returns all whitespace-separated words in @var{text} that @emph{do} match any of the @var{pattern} words, removing any words that @emph{do not} match. The patterns are written using @samp{%}, just like the patterns used in the @code{patsubst} function above.@refill The @code{filter} function can be used to separate out different types of strings (such as file names) in a variable. For example: @example sources := foo.c bar.c baz.s ugh.h foo: $(sources) cc $(filter %.c %.s,$(sources)) -o foo @end example @noindent says that @file{foo} depends on @file{foo.c}, @file{bar.c}, @file{baz.s} and @file{ugh.h} but only @file{foo.c}, @file{bar.c} and @file{baz.s} should be specified in the command to the compiler.@refill @item $(filter-out @var{pattern}@dots{},@var{text}) @findex filter-out @cindex filtering out words @cindex words, filtering out Returns all whitespace-separated words in @var{text} that @emph{do not} match any of the @var{pattern} words, removing the words that @emph{do} match one or more. This is the exact opposite of the @code{filter} function.@refill For example, given: @example @group objects=main1.o foo.o main2.o bar.o mains=main1.o main2.o @end group @end example @noindent the following generates a list which contains all the object files not in @samp{mains}: @example $(filter-out $(mains),$(objects)) @end example @need 1500 @findex sort @cindex sorting words @item $(sort @var{list}) Sorts the words of @var{list} in lexical order, removing duplicate words. The output is a list of words separated by single spaces. Thus, @example $(sort foo bar lose) @end example @noindent returns the value @samp{bar foo lose}. @cindex removing duplicate words @cindex duplicate words, removing @cindex words, removing duplicates Incidentally, since @code{sort} removes duplicate words, you can use it for this purpose even if you don't care about the sort order. @item $(word @var{n},@var{text}) @findex word @cindex word, selecting a @cindex selecting a word Returns the @var{n}th word of @var{text}. The legitimate values of @var{n} start from 1. If @var{n} is bigger than the number of words in @var{text}, the value is empty. For example, @example $(word 2, foo bar baz) @end example @noindent returns @samp{bar}. @item $(wordlist @var{s},@var{e},@var{text}) @findex wordlist @cindex words, selecting lists of @cindex selecting word lists Returns the list of words in @var{text} starting with word @var{s} and ending with word @var{e} (inclusive). The legitimate values of @var{s} start from 1; @var{e} may start from 0. 
If @var{s} is bigger than the number of words in @var{text}, the value is empty. If @var{e} is bigger than the number of words in @var{text}, words up to the end of @var{text} are returned. If @var{s} is greater than @var{e}, nothing is returned. For example, @example $(wordlist 2, 3, foo bar baz) @end example @noindent returns @samp{bar baz}. @c Following item phrased to prevent overfull hbox. --RJC 17 Jul 92 @item $(words @var{text}) @findex words @cindex words, finding number Returns the number of words in @var{text}. Thus, the last word of @var{text} is @w{@code{$(word $(words @var{text}),@var{text})}}.@refill @item $(firstword @var{names}@dots{}) @findex firstword @cindex words, extracting first The argument @var{names} is regarded as a series of names, separated by whitespace. The value is the first name in the series. The rest of the names are ignored. For example, @example $(firstword foo bar) @end example @noindent produces the result @samp{foo}. Although @code{$(firstword @var{text})} is the same as @code{$(word 1,@var{text})}, the @code{firstword} function is retained for its simplicity.@refill @item $(lastword @var{names}@dots{}) @findex lastword @cindex words, extracting last The argument @var{names} is regarded as a series of names, separated by whitespace. The value is the last name in the series. For example, @example $(lastword foo bar) @end example @noindent produces the result @samp{bar}. Although @code{$(lastword @var{text})} is the same as @code{$(word $(words @var{text}),@var{text})}, the @code{lastword} function was added for its simplicity and better performance.@refill @end table Here is a realistic example of the use of @code{subst} and @code{patsubst}. Suppose that a makefile uses the @code{VPATH} variable to specify a list of directories that @code{make} should search for prerequisite files (@pxref{General Search, , @code{VPATH} Search Path for All Prerequisites}). This example shows how to tell the C compiler to search for header files in the same list of directories.@refill The value of @code{VPATH} is a list of directories separated by colons, such as @samp{src:../headers}. First, the @code{subst} function is used to change the colons to spaces: @example $(subst :, ,$(VPATH)) @end example @noindent This produces @samp{src ../headers}. Then @code{patsubst} is used to turn each directory name into a @samp{-I} flag. These can be added to the value of the variable @code{CFLAGS}, which is passed automatically to the C compiler, like this: @example override CFLAGS += $(patsubst %,-I%,$(subst :, ,$(VPATH))) @end example @noindent The effect is to append the text @samp{-Isrc -I../headers} to the previously given value of @code{CFLAGS}. The @code{override} directive is used so that the new value is assigned even if the previous value of @code{CFLAGS} was specified with a command argument (@pxref{Override Directive, , The @code{override} Directive}). @node File Name Functions, Conditional Functions, Text Functions, Functions @section Functions for File Names @cindex functions, for file names @cindex file name functions Several of the built-in expansion functions relate specifically to taking apart file names or lists of file names. Each of the following functions performs a specific transformation on a file name. The argument of the function is regarded as a series of file names, separated by whitespace. (Leading and trailing whitespace is ignored.) 
Each file name in the series is transformed in the same way and the results are concatenated with single spaces between them. @table @code @item $(dir @var{names}@dots{}) @findex dir @cindex directory part @cindex file name, directory part Extracts the directory-part of each file name in @var{names}. The directory-part of the file name is everything up through (and including) the last slash in it. If the file name contains no slash, the directory part is the string @samp{./}. For example, @example $(dir src/foo.c hacks) @end example @noindent produces the result @samp{src/ ./}. @item $(notdir @var{names}@dots{}) @findex notdir @cindex file name, nondirectory part @cindex nondirectory part Extracts all but the directory-part of each file name in @var{names}. If the file name contains no slash, it is left unchanged. Otherwise, everything through the last slash is removed from it. A file name that ends with a slash becomes an empty string. This is unfortunate, because it means that the result does not always have the same number of whitespace-separated file names as the argument had; but we do not see any other valid alternative. For example, @example $(notdir src/foo.c hacks) @end example @noindent produces the result @samp{foo.c hacks}. @item $(suffix @var{names}@dots{}) @findex suffix @cindex suffix, function to find @cindex file name suffix Extracts the suffix of each file name in @var{names}. If the file name contains a period, the suffix is everything starting with the last period. Otherwise, the suffix is the empty string. This frequently means that the result will be empty when @var{names} is not, and if @var{names} contains multiple file names, the result may contain fewer file names. For example, @example $(suffix src/foo.c src-1.0/bar.c hacks) @end example @noindent produces the result @samp{.c .c}. @item $(basename @var{names}@dots{}) @findex basename @cindex basename @cindex file name, basename of Extracts all but the suffix of each file name in @var{names}. If the file name contains a period, the basename is everything starting up to (and not including) the last period. Periods in the directory part are ignored. If there is no period, the basename is the entire file name. For example, @example $(basename src/foo.c src-1.0/bar hacks) @end example @noindent produces the result @samp{src/foo src-1.0/bar hacks}. @c plural convention with dots (be consistent) @item $(addsuffix @var{suffix},@var{names}@dots{}) @findex addsuffix @cindex suffix, adding @cindex file name suffix, adding The argument @var{names} is regarded as a series of names, separated by whitespace; @var{suffix} is used as a unit. The value of @var{suffix} is appended to the end of each individual name and the resulting larger names are concatenated with single spaces between them. For example, @example $(addsuffix .c,foo bar) @end example @noindent produces the result @samp{foo.c bar.c}. @item $(addprefix @var{prefix},@var{names}@dots{}) @findex addprefix @cindex prefix, adding @cindex file name prefix, adding The argument @var{names} is regarded as a series of names, separated by whitespace; @var{prefix} is used as a unit. The value of @var{prefix} is prepended to the front of each individual name and the resulting larger names are concatenated with single spaces between them. For example, @example $(addprefix src/,foo bar) @end example @noindent produces the result @samp{src/foo src/bar}. 
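As a small illustration of combining these functions (a hypothetical sketch; the @file{build/} prefix and the names @samp{foo bar} are only examples):

@example
# `build/' is just an example prefix.
objs := $(addprefix build/,$(addsuffix .o,foo bar))
@end example

@noindent
Here @code{objs} gets the value @samp{build/foo.o build/bar.o}: @code{addsuffix} first produces @samp{foo.o bar.o}, and @code{addprefix} then prepends @samp{build/} to each resulting word.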
@item $(join @var{list1},@var{list2}) @findex join @cindex joining lists of words @cindex words, joining lists Concatenates the two arguments word by word: the two first words (one from each argument) concatenated form the first word of the result, the two second words form the second word of the result, and so on. So the @var{n}th word of the result comes from the @var{n}th word of each argument. If one argument has more words than the other, the extra words are copied unchanged into the result. For example, @samp{$(join a b,.c .o)} produces @samp{a.c b.o}. Whitespace between the words in the lists is not preserved; it is replaced with a single space. This function can merge the results of the @code{dir} and @code{notdir} functions, to produce the original list of files which was given to those two functions.@refill @item $(wildcard @var{pattern}) @findex wildcard @cindex wildcard, function The argument @var{pattern} is a file name pattern, typically containing wildcard characters (as in shell file name patterns). The result of @code{wildcard} is a space-separated list of the names of existing files that match the pattern. @xref{Wildcards, ,Using Wildcard Characters in File Names}. @item $(realpath @var{names}@dots{}) @findex realpath @cindex realpath @cindex file name, realpath of For each file name in @var{names} return the canonical absolute name. A canonical name does not contain any @code{.} or @code{..} components, nor any repeated path separators (@code{/}) or symlinks. In case of a failure the empty string is returned. Consult the @code{realpath(3)} documentation for a list of possible failure causes. @item $(abspath @var{names}@dots{}) @findex abspath @cindex abspath @cindex file name, abspath of For each file name in @var{names} return an absolute name that does not contain any @code{.} or @code{..} components, nor any repeated path separators (@code{/}). Note that, in contrast to the @code{realpath} function, @code{abspath} does not resolve symlinks and does not require the file names to refer to an existing file or directory. Use the @code{wildcard} function to test for existence. @end table @node Conditional Functions, Foreach Function, File Name Functions, Functions @section Functions for Conditionals @findex if @cindex conditional expansion There are three functions that provide conditional expansion. A key aspect of these functions is that not all of the arguments are expanded initially. Only those arguments which need to be expanded will be expanded. @table @code @item $(if @var{condition},@var{then-part}[,@var{else-part}]) @findex if The @code{if} function provides support for conditional expansion in a functional context (as opposed to the GNU @code{make} makefile conditionals such as @code{ifeq} (@pxref{Conditional Syntax, ,Syntax of Conditionals})). The first argument, @var{condition}, first has all preceding and trailing whitespace stripped, then is expanded. If it expands to any non-empty string, then the condition is considered to be true. If it expands to an empty string, the condition is considered to be false. If the condition is true then the second argument, @var{then-part}, is evaluated and this is used as the result of the evaluation of the entire @code{if} function. If the condition is false then the third argument, @var{else-part}, is evaluated and this is the result of the @code{if} function. If there is no third argument, the @code{if} function evaluates to nothing (the empty string). 
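For example (a minimal sketch; @code{BUILD_DIR} and the fallback directory are hypothetical):

@example
# BUILD_DIR and /tmp/build are illustrative names only.
OUTPUT_DIR := $(if $(BUILD_DIR),$(BUILD_DIR),/tmp/build)
@end example

@noindent
Here @code{OUTPUT_DIR} becomes the value of @code{BUILD_DIR} if that variable expands to a non-empty string, and @samp{/tmp/build} otherwise.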
Note that only one of the @var{then-part} or the @var{else-part} will be evaluated, never both. Thus, either can contain side-effects (such as @code{shell} function calls, etc.) @item $(or @var{condition1}[,@var{condition2}[,@var{condition3}@dots{}]]) @findex or The @code{or} function provides a ``short-circuiting'' OR operation. Each argument is expanded, in order. If an argument expands to a non-empty string the processing stops and the result of the expansion is that string. If, after all arguments are expanded, all of them are false (empty), then the result of the expansion is the empty string. @item $(and @var{condition1}[,@var{condition2}[,@var{condition3}@dots{}]]) @findex and The @code{and} function provides a ``short-circuiting'' AND operation. Each argument is expanded, in order. If an argument expands to an empty string the processing stops and the result of the expansion is the empty string. If all arguments expand to a non-empty string then the result of the expansion is the expansion of the last argument. @end table @node Foreach Function, Call Function, Conditional Functions, Functions @section The @code{foreach} Function @findex foreach @cindex words, iterating over The @code{foreach} function is very different from other functions. It causes one piece of text to be used repeatedly, each time with a different substitution performed on it. It resembles the @code{for} command in the shell @code{sh} and the @code{foreach} command in the C-shell @code{csh}. The syntax of the @code{foreach} function is: @example $(foreach @var{var},@var{list},@var{text}) @end example @noindent The first two arguments, @var{var} and @var{list}, are expanded before anything else is done; note that the last argument, @var{text}, is @strong{not} expanded at the same time. Then for each word of the expanded value of @var{list}, the variable named by the expanded value of @var{var} is set to that word, and @var{text} is expanded. Presumably @var{text} contains references to that variable, so its expansion will be different each time. The result is that @var{text} is expanded as many times as there are whitespace-separated words in @var{list}. The multiple expansions of @var{text} are concatenated, with spaces between them, to make the result of @code{foreach}. This simple example sets the variable @samp{files} to the list of all files in the directories in the list @samp{dirs}: @example dirs := a b c d files := $(foreach dir,$(dirs),$(wildcard $(dir)/*)) @end example Here @var{text} is @samp{$(wildcard $(dir)/*)}. The first repetition finds the value @samp{a} for @code{dir}, so it produces the same result as @samp{$(wildcard a/*)}; the second repetition produces the result of @samp{$(wildcard b/*)}; and the third, that of @samp{$(wildcard c/*)}. This example has the same result (except for setting @samp{dirs}) as the following example: @example files := $(wildcard a/* b/* c/* d/*) @end example When @var{text} is complicated, you can improve readability by giving it a name, with an additional variable: @example find_files = $(wildcard $(dir)/*) dirs := a b c d files := $(foreach dir,$(dirs),$(find_files)) @end example @noindent Here we use the variable @code{find_files} this way. We use plain @samp{=} to define a recursively-expanding variable, so that its value contains an actual function call to be reexpanded under the control of @code{foreach}; a simply-expanded variable would not do, since @code{wildcard} would be called only once at the time of defining @code{find_files}. 
The @code{foreach} function has no permanent effect on the variable
@var{var}; its value and flavor after the @code{foreach} function call are
the same as they were beforehand.  The other values which are taken from
@var{list} are in effect only temporarily, during the execution of
@code{foreach}.  The variable @var{var} is a simply-expanded variable
during the execution of @code{foreach}.  If @var{var} was undefined
before the @code{foreach} function call, it is undefined after the call.
@xref{Flavors, ,The Two Flavors of Variables}.@refill

You must take care when using complex variable expressions that result in
variable names, because many strange things are valid variable names but
are probably not what you intended.  For example,

@smallexample
files := $(foreach Esta escrito en espanol!,b c ch,$(find_files))
@end smallexample

@noindent
might be useful if the value of @code{find_files} references the variable
whose name is @samp{Esta escrito en espanol!} (a rather long name, isn't
it?), but it is more likely to be a mistake.

@node Call Function, Value Function, Foreach Function, Functions
@section The @code{call} Function
@findex call
@cindex functions, user defined
@cindex user defined functions

The @code{call} function is unique in that it can be used to create new
parameterized functions.  You can write a complex expression as the value
of a variable, then use @code{call} to expand it with different values.

The syntax of the @code{call} function is:

@example
$(call @var{variable},@var{param},@var{param},@dots{})
@end example

When @code{make} expands this function, it assigns each @var{param} to
temporary variables @code{$(1)}, @code{$(2)}, etc.  The variable
@code{$(0)} will contain @var{variable}.  There is no maximum number of
parameter arguments.  There is no minimum, either, but it doesn't make
sense to use @code{call} with no parameters.

Then @var{variable} is expanded as a @code{make} variable in the context
of these temporary assignments.  Thus, any reference to @code{$(1)} in
the value of @var{variable} will resolve to the first @var{param} in the
invocation of @code{call}.

Note that @var{variable} is the @emph{name} of a variable, not a
@emph{reference} to that variable.  Therefore you would not normally use
a @samp{$} or parentheses when writing it.  (You can, however, use a
variable reference in the name if you want the name not to be a
constant.)

If @var{variable} is the name of a builtin function, the builtin function
is always invoked (even if a @code{make} variable by that name also
exists).

The @code{call} function expands the @var{param} arguments before
assigning them to temporary variables.  This means that @var{variable}
values containing references to builtin functions that have special
expansion rules, like @code{foreach} or @code{if}, may not work as you
expect.

Some examples may make this clearer.

This macro simply reverses its arguments:

@smallexample
reverse = $(2) $(1)

foo = $(call reverse,a,b)
@end smallexample

@noindent
Here @code{foo} will contain @samp{b a}.

This one is slightly more interesting: it defines a macro to search for
the first instance of a program in @code{PATH}:

@smallexample
pathsearch = $(firstword $(wildcard $(addsuffix /$(1),$(subst :, ,$(PATH)))))

LS := $(call pathsearch,ls)
@end smallexample

@noindent
Now the variable @code{LS} contains @code{/bin/ls} or similar.

The @code{call} function can be nested.  Each recursive invocation gets
its own local values for @code{$(1)}, etc.@: that mask the values of the
higher-level @code{call}.
For example, here is an implementation of a @dfn{map} function:

@smallexample
map = $(foreach a,$(2),$(call $(1),$(a)))
@end smallexample

Now you can @code{map} a function that normally takes only one argument,
such as @code{origin}, to multiple values in one step:

@smallexample
o = $(call map,origin,o map MAKE)
@end smallexample

and end up with @code{o} containing something like @samp{file file
default}.

A final caution: be careful when adding whitespace to the arguments to
@code{call}.  As with other functions, any whitespace contained in the
second and subsequent arguments is kept; this can cause strange effects.
It's generally safest to remove all extraneous whitespace when providing
parameters to @code{call}.

@node Value Function, Eval Function, Call Function, Functions
@comment node-name, next, previous, up
@section The @code{value} Function
@findex value
@cindex variables, unexpanded value

The @code{value} function provides a way for you to use the value of a
variable @emph{without} having it expanded.  Please note that this does
not undo expansions which have already occurred; for example, if you
create a simply expanded variable, its value is expanded during the
definition; in that case the @code{value} function will return the same
result as using the variable directly.

The syntax of the @code{value} function is:

@example
$(value @var{variable})
@end example

Note that @var{variable} is the @emph{name} of a variable, not a
@emph{reference} to that variable.  Therefore you would not normally use
a @samp{$} or parentheses when writing it.  (You can, however, use a
variable reference in the name if you want the name not to be a
constant.)

The result of this function is a string containing the value of
@var{variable}, without any expansion occurring.  For example, in this
makefile:

@example
@group
FOO = $PATH

all:
        @@echo $(FOO)
        @@echo $(value FOO)
@end group
@end example

@noindent
The first output line would be @code{ATH}, since the ``$P'' would be
expanded as a @code{make} variable, while the second output line would be
the current value of your @code{$PATH} environment variable, since the
@code{value} function avoided the expansion.

The @code{value} function is most often used in conjunction with the
@code{eval} function (@pxref{Eval Function}).

@node Eval Function, Origin Function, Value Function, Functions
@comment node-name, next, previous, up
@section The @code{eval} Function
@findex eval
@cindex evaluating makefile syntax
@cindex makefile syntax, evaluating

The @code{eval} function is very special: it allows you to define new
makefile constructs that are not constant, but are instead the result of
evaluating other variables and functions.  The argument to the
@code{eval} function is expanded, then the results of that expansion are
parsed as makefile syntax.  The expanded results can define new
@code{make} variables, targets, implicit or explicit rules, etc.

The result of the @code{eval} function is always the empty string; thus,
it can be placed virtually anywhere in a makefile without causing syntax
errors.

It's important to realize that the @code{eval} argument is expanded
@emph{twice}: first by the @code{eval} function, then the results of that
expansion are expanded again when they are parsed as makefile syntax.
This means you may need to provide extra levels of escaping for ``$''
characters when using @code{eval}.  The @code{value} function
(@pxref{Value Function}) can sometimes be useful in these situations, to
circumvent unwanted expansions.
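As a minimal sketch of this double expansion (the variable names here are
purely illustrative), note how a doubled @samp{$$} becomes a single
@samp{$} during the first expansion:

@example
define set-default
$(1) ?= $$(CC)
endef

$(eval $(call set-default,HOSTCC))
@end example

@noindent
The @code{call} turns @samp{$$(CC)} into @samp{$(CC)}, so the text parsed
by @code{eval} is @samp{HOSTCC ?= $(CC)}; unless @code{HOSTCC} is already
set, this defines it as a recursive variable whose value is @samp{$(CC)}.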
Here is an example of how @code{eval} can be used; this example combines a number of concepts and other functions. Although it might seem overly complex to use @code{eval} in this example, rather than just writing out the rules, consider two things: first, the template definition (in @code{PROGRAM_template}) could need to be much more complex than it is here; and second, you might put the complex, ``generic'' part of this example into another makefile, then include it in all the individual makefiles. Now your individual makefiles are quite straightforward. @example @group PROGRAMS = server client server_OBJS = server.o server_priv.o server_access.o server_LIBS = priv protocol client_OBJS = client.o client_api.o client_mem.o client_LIBS = protocol # Everything after this is generic .PHONY: all all: $(PROGRAMS) define PROGRAM_template $(1): $$($(1)_OBJS) $$($(1)_LIBS:%=-l%) ALL_OBJS += $$($(1)_OBJS) endef $(foreach prog,$(PROGRAMS),$(eval $(call PROGRAM_template,$(prog)))) $(PROGRAMS): $(LINK.o) $^ $(LDLIBS) -o $@@ clean: rm -f $(ALL_OBJS) $(PROGRAMS) @end group @end example @node Origin Function, Flavor Function, Eval Function, Functions @section The @code{origin} Function @findex origin @cindex variables, origin of @cindex origin of variable The @code{origin} function is unlike most other functions in that it does not operate on the values of variables; it tells you something @emph{about} a variable. Specifically, it tells you where it came from. The syntax of the @code{origin} function is: @example $(origin @var{variable}) @end example Note that @var{variable} is the @emph{name} of a variable to inquire about; not a @emph{reference} to that variable. Therefore you would not normally use a @samp{$} or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.) The result of this function is a string telling you how the variable @var{variable} was defined: @table @samp @item undefined if @var{variable} was never defined. @item default if @var{variable} has a default definition, as is usual with @code{CC} and so on. @xref{Implicit Variables, ,Variables Used by Implicit Rules}. Note that if you have redefined a default variable, the @code{origin} function will return the origin of the later definition. @item environment if @var{variable} was defined as an environment variable and the @samp{-e} option is @emph{not} turned on (@pxref{Options Summary, ,Summary of Options}). @item environment override if @var{variable} was defined as an environment variable and the @w{@samp{-e}} option @emph{is} turned on (@pxref{Options Summary, ,Summary of Options}).@refill @item file if @var{variable} was defined in a makefile. @item command line if @var{variable} was defined on the command line. @item override if @var{variable} was defined with an @code{override} directive in a makefile (@pxref{Override Directive, ,The @code{override} Directive}). @item automatic if @var{variable} is an automatic variable defined for the execution of the commands for each rule (@pxref{Automatic Variables}). @end table This information is primarily useful (other than for your curiosity) to determine if you want to believe the value of a variable. For example, suppose you have a makefile @file{foo} that includes another makefile @file{bar}. You want a variable @code{bletch} to be defined in @file{bar} if you run the command @w{@samp{make -f bar}}, even if the environment contains a definition of @code{bletch}. 
However, if @file{foo} defined @code{bletch} before including @file{bar}, you do not want to override that definition. This could be done by using an @code{override} directive in @file{foo}, giving that definition precedence over the later definition in @file{bar}; unfortunately, the @code{override} directive would also override any command line definitions. So, @file{bar} could include:@refill @example @group ifdef bletch ifeq "$(origin bletch)" "environment" bletch = barf, gag, etc. endif endif @end group @end example @noindent If @code{bletch} has been defined from the environment, this will redefine it. If you want to override a previous definition of @code{bletch} if it came from the environment, even under @samp{-e}, you could instead write: @example @group ifneq "$(findstring environment,$(origin bletch))" "" bletch = barf, gag, etc. endif @end group @end example Here the redefinition takes place if @samp{$(origin bletch)} returns either @samp{environment} or @samp{environment override}. @xref{Text Functions, , Functions for String Substitution and Analysis}. @node Flavor Function, Shell Function, Origin Function, Functions @section The @code{flavor} Function @findex flavor @cindex variables, flavor of @cindex flavor of variable The @code{flavor} function is unlike most other functions (and like @code{origin} function) in that it does not operate on the values of variables; it tells you something @emph{about} a variable. Specifically, it tells you the flavor of a variable (@pxref{Flavors, ,The Two Flavors of Variables}). The syntax of the @code{flavor} function is: @example $(flavor @var{variable}) @end example Note that @var{variable} is the @emph{name} of a variable to inquire about; not a @emph{reference} to that variable. Therefore you would not normally use a @samp{$} or parentheses when writing it. (You can, however, use a variable reference in the name if you want the name not to be a constant.) The result of this function is a string that identifies the flavor of the variable @var{variable}: @table @samp @item undefined if @var{variable} was never defined. @item recursive if @var{variable} is a recursively expanded variable. @item simple if @var{variable} is a simply expanded variable. @end table @node Shell Function, Make Control Functions, Flavor Function, Functions @section The @code{shell} Function @findex shell @cindex commands, expansion @cindex backquotes @cindex shell command, function for The @code{shell} function is unlike any other function other than the @code{wildcard} function (@pxref{Wildcard Function, ,The Function @code{wildcard}}) in that it communicates with the world outside of @code{make}. The @code{shell} function performs the same function that backquotes (@samp{`}) perform in most shells: it does @dfn{command expansion}. This means that it takes as an argument a shell command and evaluates to the output of the command. The only processing @code{make} does on the result is to convert each newline (or carriage-return / newline pair) to a single space. If there is a trailing (carriage-return and) newline it will simply be removed.@refill The commands run by calls to the @code{shell} function are run when the function calls are expanded (@pxref{Reading Makefiles, , How @code{make} Reads a Makefile}). 
Because this function involves spawning a new shell, you should carefully consider the performance implications of using the @code{shell} function within recursively expanded variables vs.@: simply expanded variables (@pxref{Flavors, ,The Two Flavors of Variables}). Here are some examples of the use of the @code{shell} function: @example contents := $(shell cat foo) @end example @noindent sets @code{contents} to the contents of the file @file{foo}, with a space (rather than a newline) separating each line. @example files := $(shell echo *.c) @end example @noindent sets @code{files} to the expansion of @samp{*.c}. Unless @code{make} is using a very strange shell, this has the same result as @w{@samp{$(wildcard *.c)}} (as long as at least one @samp{.c} file exists).@refill @node Make Control Functions, , Shell Function, Functions @section Functions That Control Make @cindex functions, for controlling make @cindex controlling make These functions control the way make runs. Generally, they are used to provide information to the user of the makefile or to cause make to stop if some sort of environmental error is detected. @table @code @item $(error @var{text}@dots{}) @findex error @cindex error, stopping on @cindex stopping make Generates a fatal error where the message is @var{text}. Note that the error is generated whenever this function is evaluated. So, if you put it inside a command script or on the right side of a recursive variable assignment, it won't be evaluated until later. The @var{text} will be expanded before the error is generated. For example, @example ifdef ERROR1 $(error error is $(ERROR1)) endif @end example @noindent will generate a fatal error during the read of the makefile if the @code{make} variable @code{ERROR1} is defined. Or, @example ERR = $(error found an error!) .PHONY: err err: ; $(ERR) @end example @noindent will generate a fatal error while @code{make} is running, if the @code{err} target is invoked. @item $(warning @var{text}@dots{}) @findex warning @cindex warnings, printing @cindex printing user warnings This function works similarly to the @code{error} function, above, except that @code{make} doesn't exit. Instead, @var{text} is expanded and the resulting message is displayed, but processing of the makefile continues. The result of the expansion of this function is the empty string. @item $(info @var{text}@dots{}) @findex info @cindex printing messages This function does nothing more than print its (expanded) argument(s) to standard output. No makefile name or line number is added. The result of the expansion of this function is the empty string. @end table @node Running, Implicit Rules, Functions, Top @chapter How to Run @code{make} A makefile that says how to recompile a program can be used in more than one way. The simplest use is to recompile every file that is out of date. Usually, makefiles are written so that if you run @code{make} with no arguments, it does just that. But you might want to update only some of the files; you might want to use a different compiler or different compiler options; you might want just to find out which files are out of date without changing them. By giving arguments when you run @code{make}, you can do any of these things and many others. @cindex exit status of make The exit status of @code{make} is always one of three values: @table @code @item 0 The exit status is zero if @code{make} is successful. @item 2 The exit status is two if @code{make} encounters any errors. 
It will print messages describing the particular errors. @item 1 The exit status is one if you use the @samp{-q} flag and @code{make} determines that some target is not already up to date. @xref{Instead of Execution, ,Instead of Executing the Commands}. @end table @menu * Makefile Arguments:: How to specify which makefile to use. * Goals:: How to use goal arguments to specify which parts of the makefile to use. * Instead of Execution:: How to use mode flags to specify what kind of thing to do with the commands in the makefile other than simply execute them. * Avoiding Compilation:: How to avoid recompiling certain files. * Overriding:: How to override a variable to specify an alternate compiler and other things. * Testing:: How to proceed past some errors, to test compilation. * Options Summary:: Summary of Options @end menu @node Makefile Arguments, Goals, Running, Running @section Arguments to Specify the Makefile @cindex @code{--file} @cindex @code{--makefile} @cindex @code{-f} The way to specify the name of the makefile is with the @samp{-f} or @samp{--file} option (@samp{--makefile} also works). For example, @samp{-f altmake} says to use the file @file{altmake} as the makefile. If you use the @samp{-f} flag several times and follow each @samp{-f} with an argument, all the specified files are used jointly as makefiles. If you do not use the @samp{-f} or @samp{--file} flag, the default is to try @file{GNUmakefile}, @file{makefile}, and @file{Makefile}, in that order, and use the first of these three which exists or can be made (@pxref{Makefiles, ,Writing Makefiles}).@refill @node Goals, Instead of Execution, Makefile Arguments, Running @section Arguments to Specify the Goals @cindex goal, how to specify The @dfn{goals} are the targets that @code{make} should strive ultimately to update. Other targets are updated as well if they appear as prerequisites of goals, or prerequisites of prerequisites of goals, etc. By default, the goal is the first target in the makefile (not counting targets that start with a period). Therefore, makefiles are usually written so that the first target is for compiling the entire program or programs they describe. If the first rule in the makefile has several targets, only the first target in the rule becomes the default goal, not the whole list. You can manage the selection of the default goal from within your makefile using the @code{.DEFAULT_GOAL} variable (@pxref{Special Variables, , Other Special Variables}). You can also specify a different goal or goals with command-line arguments to @code{make}. Use the name of the goal as an argument. If you specify several goals, @code{make} processes each of them in turn, in the order you name them. Any target in the makefile may be specified as a goal (unless it starts with @samp{-} or contains an @samp{=}, in which case it will be parsed as a switch or variable definition, respectively). Even targets not in the makefile may be specified, if @code{make} can find implicit rules that say how to make them. @vindex MAKECMDGOALS @code{Make} will set the special variable @code{MAKECMDGOALS} to the list of goals you specified on the command line. If no goals were given on the command line, this variable is empty. Note that this variable should be used only in special circumstances. 
An example of appropriate use is to avoid including @file{.d} files during @code{clean} rules (@pxref{Automatic Prerequisites}), so @code{make} won't create them only to immediately remove them again:@refill @example @group sources = foo.c bar.c ifneq ($(MAKECMDGOALS),clean) include $(sources:.c=.d) endif @end group @end example One use of specifying a goal is if you want to compile only a part of the program, or only one of several programs. Specify as a goal each file that you wish to remake. For example, consider a directory containing several programs, with a makefile that starts like this: @example .PHONY: all all: size nm ld ar as @end example If you are working on the program @code{size}, you might want to say @w{@samp{make size}} so that only the files of that program are recompiled. Another use of specifying a goal is to make files that are not normally made. For example, there may be a file of debugging output, or a version of the program that is compiled specially for testing, which has a rule in the makefile but is not a prerequisite of the default goal. Another use of specifying a goal is to run the commands associated with a phony target (@pxref{Phony Targets}) or empty target (@pxref{Empty Targets, ,Empty Target Files to Record Events}). Many makefiles contain a phony target named @file{clean} which deletes everything except source files. Naturally, this is done only if you request it explicitly with @w{@samp{make clean}}. Following is a list of typical phony and empty target names. @xref{Standard Targets}, for a detailed list of all the standard target names which GNU software packages use. @table @file @item all @cindex @code{all} @r{(standard target)} Make all the top-level targets the makefile knows about. @item clean @cindex @code{clean} @r{(standard target)} Delete all files that are normally created by running @code{make}. @item mostlyclean @cindex @code{mostlyclean} @r{(standard target)} Like @samp{clean}, but may refrain from deleting a few files that people normally don't want to recompile. For example, the @samp{mostlyclean} target for GCC does not delete @file{libgcc.a}, because recompiling it is rarely necessary and takes a lot of time. @item distclean @cindex @code{distclean} @r{(standard target)} @itemx realclean @cindex @code{realclean} @r{(standard target)} @itemx clobber @cindex @code{clobber} @r{(standard target)} Any of these targets might be defined to delete @emph{more} files than @samp{clean} does. For example, this would delete configuration files or links that you would normally create as preparation for compilation, even if the makefile itself cannot create these files. @item install @cindex @code{install} @r{(standard target)} Copy the executable file into a directory that users typically search for commands; copy any auxiliary files that the executable uses into the directories where it will look for them. @item print @cindex @code{print} @r{(standard target)} Print listings of the source files that have changed. @item tar @cindex @code{tar} @r{(standard target)} Create a tar file of the source files. @item shar @cindex @code{shar} @r{(standard target)} Create a shell archive (shar file) of the source files. @item dist @cindex @code{dist} @r{(standard target)} Create a distribution file of the source files. This might be a tar file, or a shar file, or a compressed version of one of the above, or even more than one of the above. @item TAGS @cindex @code{TAGS} @r{(standard target)} Update a tags table for this program. 
@item check @cindex @code{check} @r{(standard target)} @itemx test @cindex @code{test} @r{(standard target)} Perform self tests on the program this makefile builds. @end table @node Instead of Execution, Avoiding Compilation, Goals, Running @section Instead of Executing the Commands @cindex execution, instead of @cindex commands, instead of executing The makefile tells @code{make} how to tell whether a target is up to date, and how to update each target. But updating the targets is not always what you want. Certain options specify other activities for @code{make}. @comment Extra blank lines make it print better. @table @samp @item -n @itemx --just-print @itemx --dry-run @itemx --recon @cindex @code{--just-print} @cindex @code{--dry-run} @cindex @code{--recon} @cindex @code{-n} ``No-op''. The activity is to print what commands would be used to make the targets up to date, but not actually execute them. @item -t @itemx --touch @cindex @code{--touch} @cindex touching files @cindex target, touching @cindex @code{-t} ``Touch''. The activity is to mark the targets as up to date without actually changing them. In other words, @code{make} pretends to compile the targets but does not really change their contents. @item -q @itemx --question @cindex @code{--question} @cindex @code{-q} @cindex question mode ``Question''. The activity is to find out silently whether the targets are up to date already; but execute no commands in either case. In other words, neither compilation nor output will occur. @item -W @var{file} @itemx --what-if=@var{file} @itemx --assume-new=@var{file} @itemx --new-file=@var{file} @cindex @code{--what-if} @cindex @code{-W} @cindex @code{--assume-new} @cindex @code{--new-file} @cindex what if @cindex files, assuming new ``What if''. Each @samp{-W} flag is followed by a file name. The given files' modification times are recorded by @code{make} as being the present time, although the actual modification times remain the same. You can use the @samp{-W} flag in conjunction with the @samp{-n} flag to see what would happen if you were to modify specific files.@refill @end table With the @samp{-n} flag, @code{make} prints the commands that it would normally execute but does not execute them. With the @samp{-t} flag, @code{make} ignores the commands in the rules and uses (in effect) the command @code{touch} for each target that needs to be remade. The @code{touch} command is also printed, unless @samp{-s} or @code{.SILENT} is used. For speed, @code{make} does not actually invoke the program @code{touch}. It does the work directly. With the @samp{-q} flag, @code{make} prints nothing and executes no commands, but the exit status code it returns is zero if and only if the targets to be considered are already up to date. If the exit status is one, then some updating needs to be done. If @code{make} encounters an error, the exit status is two, so you can distinguish an error from a target that is not up to date. It is an error to use more than one of these three flags in the same invocation of @code{make}. @cindex +, and command execution The @samp{-n}, @samp{-t}, and @samp{-q} options do not affect command lines that begin with @samp{+} characters or contain the strings @samp{$(MAKE)} or @samp{$@{MAKE@}}. Note that only the line containing the @samp{+} character or the strings @samp{$(MAKE)} or @samp{$@{MAKE@}} is run regardless of these options. 
Other lines in the same rule are not run unless they too begin with @samp{+} or contain @samp{$(MAKE)} or @samp{$@{MAKE@}} (@xref{MAKE Variable, ,How the @code{MAKE} Variable Works}.) The @samp{-W} flag provides two features: @itemize @bullet @item If you also use the @samp{-n} or @samp{-q} flag, you can see what @code{make} would do if you were to modify some files. @item Without the @samp{-n} or @samp{-q} flag, when @code{make} is actually executing commands, the @samp{-W} flag can direct @code{make} to act as if some files had been modified, without actually modifying the files.@refill @end itemize Note that the options @samp{-p} and @samp{-v} allow you to obtain other information about @code{make} or about the makefiles in use (@pxref{Options Summary, ,Summary of Options}).@refill @node Avoiding Compilation, Overriding, Instead of Execution, Running @section Avoiding Recompilation of Some Files @cindex @code{-o} @cindex @code{--old-file} @cindex @code{--assume-old} @cindex files, assuming old @cindex files, avoiding recompilation of @cindex recompilation, avoiding Sometimes you may have changed a source file but you do not want to recompile all the files that depend on it. For example, suppose you add a macro or a declaration to a header file that many other files depend on. Being conservative, @code{make} assumes that any change in the header file requires recompilation of all dependent files, but you know that they do not need to be recompiled and you would rather not waste the time waiting for them to compile. If you anticipate the problem before changing the header file, you can use the @samp{-t} flag. This flag tells @code{make} not to run the commands in the rules, but rather to mark the target up to date by changing its last-modification date. You would follow this procedure: @enumerate @item Use the command @samp{make} to recompile the source files that really need recompilation, ensuring that the object files are up-to-date before you begin. @item Make the changes in the header files. @item Use the command @samp{make -t} to mark all the object files as up to date. The next time you run @code{make}, the changes in the header files will not cause any recompilation. @end enumerate If you have already changed the header file at a time when some files do need recompilation, it is too late to do this. Instead, you can use the @w{@samp{-o @var{file}}} flag, which marks a specified file as ``old'' (@pxref{Options Summary, ,Summary of Options}). This means that the file itself will not be remade, and nothing else will be remade on its account. Follow this procedure: @enumerate @item Recompile the source files that need compilation for reasons independent of the particular header file, with @samp{make -o @var{headerfile}}. If several header files are involved, use a separate @samp{-o} option for each header file. @item Touch all the object files with @samp{make -t}. @end enumerate @node Overriding, Testing, Avoiding Compilation, Running @section Overriding Variables @cindex overriding variables with arguments @cindex variables, overriding with arguments @cindex command line variables @cindex variables, command line An argument that contains @samp{=} specifies the value of a variable: @samp{@var{v}=@var{x}} sets the value of the variable @var{v} to @var{x}. If you specify a value in this way, all ordinary assignments of the same variable in the makefile are ignored; we say they have been @dfn{overridden} by the command line argument. 
The most common way to use this facility is to pass extra flags to compilers. For example, in a properly written makefile, the variable @code{CFLAGS} is included in each command that runs the C compiler, so a file @file{foo.c} would be compiled something like this: @example cc -c $(CFLAGS) foo.c @end example Thus, whatever value you set for @code{CFLAGS} affects each compilation that occurs. The makefile probably specifies the usual value for @code{CFLAGS}, like this: @example CFLAGS=-g @end example Each time you run @code{make}, you can override this value if you wish. For example, if you say @samp{make CFLAGS='-g -O'}, each C compilation will be done with @samp{cc -c -g -O}. (This also illustrates how you can use quoting in the shell to enclose spaces and other special characters in the value of a variable when you override it.) The variable @code{CFLAGS} is only one of many standard variables that exist just so that you can change them this way. @xref{Implicit Variables, , Variables Used by Implicit Rules}, for a complete list. You can also program the makefile to look at additional variables of your own, giving the user the ability to control other aspects of how the makefile works by changing the variables. When you override a variable with a command argument, you can define either a recursively-expanded variable or a simply-expanded variable. The examples shown above make a recursively-expanded variable; to make a simply-expanded variable, write @samp{:=} instead of @samp{=}. But, unless you want to include a variable reference or function call in the @emph{value} that you specify, it makes no difference which kind of variable you create. There is one way that the makefile can change a variable that you have overridden. This is to use the @code{override} directive, which is a line that looks like this: @samp{override @var{variable} = @var{value}} (@pxref{Override Directive, ,The @code{override} Directive}). @node Testing, Options Summary, Overriding, Running @section Testing the Compilation of a Program @cindex testing compilation @cindex compilation, testing Normally, when an error happens in executing a shell command, @code{make} gives up immediately, returning a nonzero status. No further commands are executed for any target. The error implies that the goal cannot be correctly remade, and @code{make} reports this as soon as it knows. When you are compiling a program that you have just changed, this is not what you want. Instead, you would rather that @code{make} try compiling every file that can be tried, to show you as many compilation errors as possible. @cindex @code{-k} @cindex @code{--keep-going} On these occasions, you should use the @samp{-k} or @samp{--keep-going} flag. This tells @code{make} to continue to consider the other prerequisites of the pending targets, remaking them if necessary, before it gives up and returns nonzero status. For example, after an error in compiling one object file, @samp{make -k} will continue compiling other object files even though it already knows that linking them will be impossible. In addition to continuing after failed shell commands, @samp{make -k} will continue as much as possible after discovering that it does not know how to make a target or prerequisite file. 
This will always cause an error message, but without @samp{-k}, it is a fatal error (@pxref{Options Summary, ,Summary of Options}).@refill The usual behavior of @code{make} assumes that your purpose is to get the goals up to date; once @code{make} learns that this is impossible, it might as well report the failure immediately. The @samp{-k} flag says that the real purpose is to test as much as possible of the changes made in the program, perhaps to find several independent problems so that you can correct them all before the next attempt to compile. This is why Emacs' @kbd{M-x compile} command passes the @samp{-k} flag by default. @node Options Summary, , Testing, Running @section Summary of Options @cindex options @cindex flags @cindex switches Here is a table of all the options @code{make} understands: @table @samp @item -b @cindex @code{-b} @itemx -m @cindex @code{-m} These options are ignored for compatibility with other versions of @code{make}. @item -B @cindex @code{-B} @itemx --always-make @cindex @code{--always-make} Consider all targets out-of-date. GNU @code{make} proceeds to consider targets and their prerequisites using the normal algorithms; however, all targets so considered are always remade regardless of the status of their prerequisites. To avoid infinite recursion, if @code{MAKE_RESTARTS} (@pxref{Special Variables, , Other Special Variables}) is set to a number greater than 0 this option is disabled when considering whether to remake makefiles (@pxref{Remaking Makefiles, , How Makefiles Are Remade}). @item -C @var{dir} @cindex @code{-C} @itemx --directory=@var{dir} @cindex @code{--directory} Change to directory @var{dir} before reading the makefiles. If multiple @samp{-C} options are specified, each is interpreted relative to the previous one: @samp{-C / -C etc} is equivalent to @samp{-C /etc}. This is typically used with recursive invocations of @code{make} (@pxref{Recursion, ,Recursive Use of @code{make}}). @item -d @cindex @code{-d} @c Extra blank line here makes the table look better. Print debugging information in addition to normal processing. The debugging information says which files are being considered for remaking, which file-times are being compared and with what results, which files actually need to be remade, which implicit rules are considered and which are applied---everything interesting about how @code{make} decides what to do. The @code{-d} option is equivalent to @samp{--debug=a} (see below). @item --debug[=@var{options}] @cindex @code{--debug} @c Extra blank line here makes the table look better. Print debugging information in addition to normal processing. Various levels and types of output can be chosen. With no arguments, print the ``basic'' level of debugging. Possible arguments are below; only the first character is considered, and values must be comma- or space-separated. @table @code @item a (@i{all}) All types of debugging output are enabled. This is equivalent to using @samp{-d}. @item b (@i{basic}) Basic debugging prints each target that was found to be out-of-date, and whether the build was successful or not. @item v (@i{verbose}) A level above @samp{basic}; includes messages about which makefiles were parsed, prerequisites that did not need to be rebuilt, etc. This option also enables @samp{basic} messages. @item i (@i{implicit}) Prints messages describing the implicit rule searches for each target. This option also enables @samp{basic} messages. @item j (@i{jobs}) Prints messages giving details on the invocation of specific subcommands. 
@item m (@i{makefile}) By default, the above messages are not enabled while trying to remake the makefiles. This option enables messages while rebuilding makefiles, too. Note that the @samp{all} option does enable this option. This option also enables @samp{basic} messages. @end table @item -e @cindex @code{-e} @itemx --environment-overrides @cindex @code{--environment-overrides} Give variables taken from the environment precedence over variables from makefiles. @xref{Environment, ,Variables from the Environment}. @item -f @var{file} @cindex @code{-f} @itemx --file=@var{file} @cindex @code{--file} @itemx --makefile=@var{file} @cindex @code{--makefile} Read the file named @var{file} as a makefile. @xref{Makefiles, ,Writing Makefiles}. @item -h @cindex @code{-h} @itemx --help @cindex @code{--help} @c Extra blank line here makes the table look better. Remind you of the options that @code{make} understands and then exit. @item -i @cindex @code{-i} @itemx --ignore-errors @cindex @code{--ignore-errors} Ignore all errors in commands executed to remake files. @xref{Errors, ,Errors in Commands}. @item -I @var{dir} @cindex @code{-I} @itemx --include-dir=@var{dir} @cindex @code{--include-dir} Specifies a directory @var{dir} to search for included makefiles. @xref{Include, ,Including Other Makefiles}. If several @samp{-I} options are used to specify several directories, the directories are searched in the order specified. @item -j [@var{jobs}] @cindex @code{-j} @itemx --jobs[=@var{jobs}] @cindex @code{--jobs} Specifies the number of jobs (commands) to run simultaneously. With no argument, @code{make} runs as many jobs simultaneously as possible. If there is more than one @samp{-j} option, the last one is effective. @xref{Parallel, ,Parallel Execution}, for more information on how commands are run. Note that this option is ignored on MS-DOS. @item -k @cindex @code{-k} @itemx --keep-going @cindex @code{--keep-going} Continue as much as possible after an error. While the target that failed, and those that depend on it, cannot be remade, the other prerequisites of these targets can be processed all the same. @xref{Testing, ,Testing the Compilation of a Program}. @item -l [@var{load}] @cindex @code{-l} @itemx --load-average[=@var{load}] @cindex @code{--load-average} @itemx --max-load[=@var{load}] @cindex @code{--max-load} Specifies that no new jobs (commands) should be started if there are other jobs running and the load average is at least @var{load} (a floating-point number). With no argument, removes a previous load limit. @xref{Parallel, ,Parallel Execution}. @item -L @cindex @code{-L} @itemx --check-symlink-times @cindex @code{--check-symlink-times} On systems that support symbolic links, this option causes @code{make} to consider the timestamps on any symbolic links in addition to the timestamp on the file referenced by those links. When this option is provided, the most recent timestamp among the file and the symbolic links is taken as the modification time for this target file. @item -n @cindex @code{-n} @itemx --just-print @cindex @code{--just-print} @itemx --dry-run @cindex @code{--dry-run} @itemx --recon @cindex @code{--recon} @c Extra blank line here makes the table look better. Print the commands that would be executed, but do not execute them. @xref{Instead of Execution, ,Instead of Executing the Commands}. 
@item -o @var{file} @cindex @code{-o} @itemx --old-file=@var{file} @cindex @code{--old-file} @itemx --assume-old=@var{file} @cindex @code{--assume-old} Do not remake the file @var{file} even if it is older than its prerequisites, and do not remake anything on account of changes in @var{file}. Essentially the file is treated as very old and its rules are ignored. @xref{Avoiding Compilation, ,Avoiding Recompilation of Some Files}.@refill @item -p @cindex @code{-p} @itemx --print-data-base @cindex @code{--print-data-base} @cindex data base of @code{make} rules @cindex predefined rules and variables, printing Print the data base (rules and variable values) that results from reading the makefiles; then execute as usual or as otherwise specified. This also prints the version information given by the @samp{-v} switch (see below). To print the data base without trying to remake any files, use @w{@samp{make -qp}}. To print the data base of predefined rules and variables, use @w{@samp{make -p -f /dev/null}}. The data base output contains filename and linenumber information for command and variable definitions, so it can be a useful debugging tool in complex environments. @item -q @cindex @code{-q} @itemx --question @cindex @code{--question} ``Question mode''. Do not run any commands, or print anything; just return an exit status that is zero if the specified targets are already up to date, one if any remaking is required, or two if an error is encountered. @xref{Instead of Execution, ,Instead of Executing the Commands}.@refill @item -r @cindex @code{-r} @itemx --no-builtin-rules @cindex @code{--no-builtin-rules} Eliminate use of the built-in implicit rules (@pxref{Implicit Rules, ,Using Implicit Rules}). You can still define your own by writing pattern rules (@pxref{Pattern Rules, ,Defining and Redefining Pattern Rules}). The @samp{-r} option also clears out the default list of suffixes for suffix rules (@pxref{Suffix Rules, ,Old-Fashioned Suffix Rules}). But you can still define your own suffixes with a rule for @code{.SUFFIXES}, and then define your own suffix rules. Note that only @emph{rules} are affected by the @code{-r} option; default variables remain in effect (@pxref{Implicit Variables, ,Variables Used by Implicit Rules}); see the @samp{-R} option below. @item -R @cindex @code{-R} @itemx --no-builtin-variables @cindex @code{--no-builtin-variables} Eliminate use of the built-in rule-specific variables (@pxref{Implicit Variables, ,Variables Used by Implicit Rules}). You can still define your own, of course. The @samp{-R} option also automatically enables the @samp{-r} option (see above), since it doesn't make sense to have implicit rules without any definitions for the variables that they use. @item -s @cindex @code{-s} @itemx --silent @cindex @code{--silent} @itemx --quiet @cindex @code{--quiet} @c Extra blank line here makes the table look better. Silent operation; do not print the commands as they are executed. @xref{Echoing, ,Command Echoing}. @item -S @cindex @code{-S} @itemx --no-keep-going @cindex @code{--no-keep-going} @itemx --stop @cindex @code{--stop} @c Extra blank line here makes the table look better. Cancel the effect of the @samp{-k} option. 
This is never necessary except in a recursive @code{make} where @samp{-k} might be inherited from the top-level @code{make} via @code{MAKEFLAGS} (@pxref{Recursion, ,Recursive Use of @code{make}}) or if you set @samp{-k} in @code{MAKEFLAGS} in your environment.@refill @item -t @cindex @code{-t} @itemx --touch @cindex @code{--touch} @c Extra blank line here makes the table look better. Touch files (mark them up to date without really changing them) instead of running their commands. This is used to pretend that the commands were done, in order to fool future invocations of @code{make}. @xref{Instead of Execution, ,Instead of Executing the Commands}. @item -v @cindex @code{-v} @itemx --version @cindex @code{--version} Print the version of the @code{make} program plus a copyright, a list of authors, and a notice that there is no warranty; then exit. @item -w @cindex @code{-w} @itemx --print-directory @cindex @code{--print-directory} Print a message containing the working directory both before and after executing the makefile. This may be useful for tracking down errors from complicated nests of recursive @code{make} commands. @xref{Recursion, ,Recursive Use of @code{make}}. (In practice, you rarely need to specify this option since @samp{make} does it for you; see @ref{-w Option, ,The @samp{--print-directory} Option}.) @itemx --no-print-directory @cindex @code{--no-print-directory} Disable printing of the working directory under @code{-w}. This option is useful when @code{-w} is turned on automatically, but you do not want to see the extra messages. @xref{-w Option, ,The @samp{--print-directory} Option}. @item -W @var{file} @cindex @code{-W} @itemx --what-if=@var{file} @cindex @code{--what-if} @itemx --new-file=@var{file} @cindex @code{--new-file} @itemx --assume-new=@var{file} @cindex @code{--assume-new} Pretend that the target @var{file} has just been modified. When used with the @samp{-n} flag, this shows you what would happen if you were to modify that file. Without @samp{-n}, it is almost the same as running a @code{touch} command on the given file before running @code{make}, except that the modification time is changed only in the imagination of @code{make}. @xref{Instead of Execution, ,Instead of Executing the Commands}. @item --warn-undefined-variables @cindex @code{--warn-undefined-variables} @cindex variables, warning for undefined @cindex undefined variables, warning message Issue a warning message whenever @code{make} sees a reference to an undefined variable. This can be helpful when you are trying to debug makefiles which use variables in complex ways. @end table @node Implicit Rules, Archives, Running, Top @chapter Using Implicit Rules @cindex implicit rule @cindex rule, implicit Certain standard ways of remaking target files are used very often. For example, one customary way to make an object file is from a C source file using the C compiler, @code{cc}. @dfn{Implicit rules} tell @code{make} how to use customary techniques so that you do not have to specify them in detail when you want to use them. For example, there is an implicit rule for C compilation. File names determine which implicit rules are run. For example, C compilation typically takes a @file{.c} file and makes a @file{.o} file. So @code{make} applies the implicit rule for C compilation when it sees this combination of file name endings.@refill A chain of implicit rules can apply in sequence; for example, @code{make} will remake a @file{.o} file from a @file{.y} file by way of a @file{.c} file. 
@iftex @xref{Chained Rules, ,Chains of Implicit Rules}. @end iftex The built-in implicit rules use several variables in their commands so that, by changing the values of the variables, you can change the way the implicit rule works. For example, the variable @code{CFLAGS} controls the flags given to the C compiler by the implicit rule for C compilation. @iftex @xref{Implicit Variables, ,Variables Used by Implicit Rules}. @end iftex You can define your own implicit rules by writing @dfn{pattern rules}. @iftex @xref{Pattern Rules, ,Defining and Redefining Pattern Rules}. @end iftex @dfn{Suffix rules} are a more limited way to define implicit rules. Pattern rules are more general and clearer, but suffix rules are retained for compatibility. @iftex @xref{Suffix Rules, ,Old-Fashioned Suffix Rules}. @end iftex @menu * Using Implicit:: How to use an existing implicit rule to get the commands for updating a file. * Catalogue of Rules:: A list of built-in implicit rules. * Implicit Variables:: How to change what predefined rules do. * Chained Rules:: How to use a chain of implicit rules. * Pattern Rules:: How to define new implicit rules. * Last Resort:: How to define commands for rules which cannot find any. * Suffix Rules:: The old-fashioned style of implicit rule. * Implicit Rule Search:: The precise algorithm for applying implicit rules. @end menu @node Using Implicit, Catalogue of Rules, Implicit Rules, Implicit Rules @section Using Implicit Rules @cindex implicit rule, how to use @cindex rule, implicit, how to use To allow @code{make} to find a customary method for updating a target file, all you have to do is refrain from specifying commands yourself. Either write a rule with no command lines, or don't write a rule at all. Then @code{make} will figure out which implicit rule to use based on which kind of source file exists or can be made. For example, suppose the makefile looks like this: @example foo : foo.o bar.o cc -o foo foo.o bar.o $(CFLAGS) $(LDFLAGS) @end example @noindent Because you mention @file{foo.o} but do not give a rule for it, @code{make} will automatically look for an implicit rule that tells how to update it. This happens whether or not the file @file{foo.o} currently exists. If an implicit rule is found, it can supply both commands and one or more prerequisites (the source files). You would want to write a rule for @file{foo.o} with no command lines if you need to specify additional prerequisites, such as header files, that the implicit rule cannot supply. Each implicit rule has a target pattern and prerequisite patterns. There may be many implicit rules with the same target pattern. For example, numerous rules make @samp{.o} files: one, from a @samp{.c} file with the C compiler; another, from a @samp{.p} file with the Pascal compiler; and so on. The rule that actually applies is the one whose prerequisites exist or can be made. So, if you have a file @file{foo.c}, @code{make} will run the C compiler; otherwise, if you have a file @file{foo.p}, @code{make} will run the Pascal compiler; and so on. Of course, when you write the makefile, you know which implicit rule you want @code{make} to use, and you know it will choose that one because you know which possible prerequisite files are supposed to exist. @xref{Catalogue of Rules, ,Catalogue of Implicit Rules}, for a catalogue of all the predefined implicit rules. Above, we said an implicit rule applies if the required prerequisites ``exist or can be made''. 
A file ``can be made'' if it is mentioned explicitly in the makefile as a target or a prerequisite, or if an implicit rule can be recursively found for how to make it. When an implicit prerequisite is the result of another implicit rule, we say that @dfn{chaining} is occurring. @xref{Chained Rules, ,Chains of Implicit Rules}. In general, @code{make} searches for an implicit rule for each target, and for each double-colon rule, that has no commands. A file that is mentioned only as a prerequisite is considered a target whose rule specifies nothing, so implicit rule search happens for it. @xref{Implicit Rule Search, ,Implicit Rule Search Algorithm}, for the details of how the search is done. Note that explicit prerequisites do not influence implicit rule search. For example, consider this explicit rule: @example foo.o: foo.p @end example @noindent The prerequisite on @file{foo.p} does not necessarily mean that @code{make} will remake @file{foo.o} according to the implicit rule to make an object file, a @file{.o} file, from a Pascal source file, a @file{.p} file. For example, if @file{foo.c} also exists, the implicit rule to make an object file from a C source file is used instead, because it appears before the Pascal rule in the list of predefined implicit rules (@pxref{Catalogue of Rules, , Catalogue of Implicit Rules}). If you do not want an implicit rule to be used for a target that has no commands, you can give that target empty commands by writing a semicolon (@pxref{Empty Commands, ,Defining Empty Commands}). @node Catalogue of Rules, Implicit Variables, Using Implicit, Implicit Rules @section Catalogue of Implicit Rules @cindex implicit rule, predefined @cindex rule, implicit, predefined Here is a catalogue of predefined implicit rules which are always available unless the makefile explicitly overrides or cancels them. @xref{Canceling Rules, ,Canceling Implicit Rules}, for information on canceling or overriding an implicit rule. The @samp{-r} or @samp{--no-builtin-rules} option cancels all predefined rules. This manual only documents the default rules available on POSIX-based operating systems. Other operating systems, such as VMS, Windows, OS/2, etc. may have different sets of default rules. To see the full list of default rules and variables available in your version of GNU @code{make}, run @samp{make -p} in a directory with no makefile. Not all of these rules will always be defined, even when the @samp{-r} option is not given. Many of the predefined implicit rules are implemented in @code{make} as suffix rules, so which ones will be defined depends on the @dfn{suffix list} (the list of prerequisites of the special target @code{.SUFFIXES}). The default suffix list is: @code{.out}, @code{.a}, @code{.ln}, @code{.o}, @code{.c}, @code{.cc}, @code{.C}, @code{.cpp}, @code{.p}, @code{.f}, @code{.F}, @code{.r}, @code{.y}, @code{.l}, @code{.s}, @code{.S}, @code{.mod}, @code{.sym}, @code{.def}, @code{.h}, @code{.info}, @code{.dvi}, @code{.tex}, @code{.texinfo}, @code{.texi}, @code{.txinfo}, @code{.w}, @code{.ch} @code{.web}, @code{.sh}, @code{.elc}, @code{.el}. All of the implicit rules described below whose prerequisites have one of these suffixes are actually suffix rules. If you modify the suffix list, the only predefined suffix rules in effect will be those named by one or two of the suffixes that are on the list you specify; rules whose suffixes fail to be on the list are disabled. @xref{Suffix Rules, ,Old-Fashioned Suffix Rules}, for full details on suffix rules. 
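For instance, here is a brief sketch of trimming the suffix list so that
only the C-related suffix rules remain in effect (the particular suffixes
kept are illustrative):

@example
# Empty the default suffix list, then name only the suffixes wanted.
.SUFFIXES:
.SUFFIXES: .o .c
@end example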
@table @asis @item Compiling C programs @cindex C, rule to compile @pindex cc @pindex gcc @pindex .o @pindex .c @file{@var{n}.o} is made automatically from @file{@var{n}.c} with a command of the form @samp{$(CC) -c $(CPPFLAGS) $(CFLAGS)}.@refill @item Compiling C++ programs @cindex C++, rule to compile @pindex g++ @pindex .cc @pindex .cpp @pindex .C @file{@var{n}.o} is made automatically from @file{@var{n}.cc}, @file{@var{n}.cpp}, or @file{@var{n}.C} with a command of the form @samp{$(CXX) -c $(CPPFLAGS) $(CXXFLAGS)}. We encourage you to use the suffix @samp{.cc} for C++ source files instead of @samp{.C}.@refill @item Compiling Pascal programs @cindex Pascal, rule to compile @pindex pc @pindex .p @file{@var{n}.o} is made automatically from @file{@var{n}.p} with the command @samp{$(PC) -c $(PFLAGS)}.@refill @item Compiling Fortran and Ratfor programs @cindex Fortran, rule to compile @cindex Ratfor, rule to compile @pindex f77 @pindex .f @pindex .r @pindex .F @file{@var{n}.o} is made automatically from @file{@var{n}.r}, @file{@var{n}.F} or @file{@var{n}.f} by running the Fortran compiler. The precise command used is as follows:@refill @table @samp @item .f @samp{$(FC) -c $(FFLAGS)}. @item .F @samp{$(FC) -c $(FFLAGS) $(CPPFLAGS)}. @item .r @samp{$(FC) -c $(FFLAGS) $(RFLAGS)}. @end table @item Preprocessing Fortran and Ratfor programs @file{@var{n}.f} is made automatically from @file{@var{n}.r} or @file{@var{n}.F}. This rule runs just the preprocessor to convert a Ratfor or preprocessable Fortran program into a strict Fortran program. The precise command used is as follows:@refill @table @samp @item .F @samp{$(FC) -F $(CPPFLAGS) $(FFLAGS)}. @item .r @samp{$(FC) -F $(FFLAGS) $(RFLAGS)}. @end table @item Compiling Modula-2 programs @cindex Modula-2, rule to compile @pindex m2c @pindex .sym @pindex .def @pindex .mod @file{@var{n}.sym} is made from @file{@var{n}.def} with a command of the form @samp{$(M2C) $(M2FLAGS) $(DEFFLAGS)}. @file{@var{n}.o} is made from @file{@var{n}.mod}; the form is: @w{@samp{$(M2C) $(M2FLAGS) $(MODFLAGS)}}.@refill @need 1200 @item Assembling and preprocessing assembler programs @cindex assembly, rule to compile @pindex as @pindex .s @file{@var{n}.o} is made automatically from @file{@var{n}.s} by running the assembler, @code{as}. The precise command is @samp{$(AS) $(ASFLAGS)}.@refill @pindex .S @file{@var{n}.s} is made automatically from @file{@var{n}.S} by running the C preprocessor, @code{cpp}. The precise command is @w{@samp{$(CPP) $(CPPFLAGS)}}. @item Linking a single object file @cindex linking, predefined rule for @pindex ld @pindex .o @file{@var{n}} is made automatically from @file{@var{n}.o} by running the linker (usually called @code{ld}) via the C compiler. The precise command used is @w{@samp{$(CC) $(LDFLAGS) @var{n}.o $(LOADLIBES) $(LDLIBS)}}. This rule does the right thing for a simple program with only one source file. It will also do the right thing if there are multiple object files (presumably coming from various other source files), one of which has a name matching that of the executable file. Thus, @example x: y.o z.o @end example @noindent when @file{x.c}, @file{y.c} and @file{z.c} all exist will execute: @example @group cc -c x.c -o x.o cc -c y.c -o y.o cc -c z.c -o z.o cc x.o y.o z.o -o x rm -f x.o rm -f y.o rm -f z.o @end group @end example @noindent In more complicated cases, such as when there is no object file whose name derives from the executable file name, you must write an explicit command for linking. 
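For instance, here is a minimal sketch of such an explicit link rule (the program name @file{prog} and its object files are hypothetical; compiling the individual objects is still left to the implicit rules):

@example
@group
prog: main.o util.o
        $(CC) $(LDFLAGS) -o prog main.o util.o $(LOADLIBES) $(LDLIBS)
@end group
@end example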
Each kind of file automatically made into @samp{.o} object files will be automatically linked by using the compiler (@samp{$(CC)}, @samp{$(FC)} or @samp{$(PC)}; the C compiler @samp{$(CC)} is used to assemble @samp{.s} files) without the @samp{-c} option. This could be done by using the @samp{.o} object files as intermediates, but it is faster to do the compiling and linking in one step, so that's how it's done.@refill @item Yacc for C programs @pindex yacc @cindex Yacc, rule to run @pindex .y @file{@var{n}.c} is made automatically from @file{@var{n}.y} by running Yacc with the command @samp{$(YACC) $(YFLAGS)}. @item Lex for C programs @pindex lex @cindex Lex, rule to run @pindex .l @file{@var{n}.c} is made automatically from @file{@var{n}.l} by running Lex. The actual command is @samp{$(LEX) $(LFLAGS)}. @item Lex for Ratfor programs @file{@var{n}.r} is made automatically from @file{@var{n}.l} by running Lex. The actual command is @samp{$(LEX) $(LFLAGS)}. The convention of using the same suffix @samp{.l} for all Lex files regardless of whether they produce C code or Ratfor code makes it impossible for @code{make} to determine automatically which of the two languages you are using in any particular case. If @code{make} is called upon to remake an object file from a @samp{.l} file, it must guess which compiler to use. It will guess the C compiler, because that is more common. If you are using Ratfor, make sure @code{make} knows this by mentioning @file{@var{n}.r} in the makefile. Or, if you are using Ratfor exclusively, with no C files, remove @samp{.c} from the list of implicit rule suffixes with:@refill @example @group .SUFFIXES: .SUFFIXES: .o .r .f .l @dots{} @end group @end example @item Making Lint Libraries from C, Yacc, or Lex programs @pindex lint @cindex @code{lint}, rule to run @pindex .ln @file{@var{n}.ln} is made from @file{@var{n}.c} by running @code{lint}. The precise command is @w{@samp{$(LINT) $(LINTFLAGS) $(CPPFLAGS) -i}}. The same command is used on the C code produced from @file{@var{n}.y} or @file{@var{n}.l}.@refill @item @TeX{} and Web @cindex @TeX{}, rule to run @cindex Web, rule to run @pindex tex @pindex cweave @pindex weave @pindex tangle @pindex ctangle @pindex .dvi @pindex .tex @pindex .web @pindex .w @pindex .ch @file{@var{n}.dvi} is made from @file{@var{n}.tex} with the command @samp{$(TEX)}. @file{@var{n}.tex} is made from @file{@var{n}.web} with @samp{$(WEAVE)}, or from @file{@var{n}.w} (and from @file{@var{n}.ch} if it exists or can be made) with @samp{$(CWEAVE)}. @file{@var{n}.p} is made from @file{@var{n}.web} with @samp{$(TANGLE)} and @file{@var{n}.c} is made from @file{@var{n}.w} (and from @file{@var{n}.ch} if it exists or can be made) with @samp{$(CTANGLE)}.@refill @item Texinfo and Info @cindex Texinfo, rule to format @cindex Info, rule to format @pindex texi2dvi @pindex makeinfo @pindex .texinfo @pindex .info @pindex .texi @pindex .txinfo @file{@var{n}.dvi} is made from @file{@var{n}.texinfo}, @file{@var{n}.texi}, or @file{@var{n}.txinfo}, with the command @w{@samp{$(TEXI2DVI) $(TEXI2DVI_FLAGS)}}. @file{@var{n}.info} is made from @file{@var{n}.texinfo}, @file{@var{n}.texi}, or @file{@var{n}.txinfo}, with the command @w{@samp{$(MAKEINFO) $(MAKEINFO_FLAGS)}}. @item RCS @cindex RCS, rule to extract from @pindex co @pindex ,v @r{(RCS file extension)} Any file @file{@var{n}} is extracted if necessary from an RCS file named either @file{@var{n},v} or @file{RCS/@var{n},v}. The precise command used is @w{@samp{$(CO) $(COFLAGS)}}. 
@file{@var{n}} will not be extracted from RCS if it already exists, even if the RCS file is newer. The rules for RCS are terminal (@pxref{Match-Anything Rules, ,Match-Anything Pattern Rules}), so RCS files cannot be generated from another source; they must actually exist.@refill

@item SCCS
@cindex SCCS, rule to extract from
@pindex get
@pindex s. @r{(SCCS file prefix)}
Any file @file{@var{n}} is extracted if necessary from an SCCS file named either @file{s.@var{n}} or @file{SCCS/s.@var{n}}. The precise command used is @w{@samp{$(GET) $(GFLAGS)}}. The rules for SCCS are terminal (@pxref{Match-Anything Rules, ,Match-Anything Pattern Rules}), so SCCS files cannot be generated from another source; they must actually exist.@refill

@pindex .sh
For the benefit of SCCS, a file @file{@var{n}} is copied from @file{@var{n}.sh} and made executable (by everyone). This is for shell scripts that are checked into SCCS. Since RCS preserves the execution permission of a file, you do not need to use this feature with RCS.@refill

We recommend that you avoid the use of SCCS. RCS is widely held to be superior, and is also free. By choosing free software in place of comparable (or inferior) proprietary software, you support the free software movement.
@end table

Usually, you want to change only the variables listed in the table above, which are documented in the following section. However, the commands in built-in implicit rules actually use variables such as @code{COMPILE.c}, @code{LINK.p}, and @code{PREPROCESS.S}, whose values contain the commands listed above.

@code{make} follows the convention that the rule to compile a @file{.@var{x}} source file uses the variable @code{COMPILE.@var{x}}. Similarly, the rule to produce an executable from a @file{.@var{x}} file uses @code{LINK.@var{x}}; and the rule to preprocess a @file{.@var{x}} file uses @code{PREPROCESS.@var{x}}.

@vindex OUTPUT_OPTION
Every rule that produces an object file uses the variable @code{OUTPUT_OPTION}. @code{make} defines this variable either to contain @samp{-o $@@}, or to be empty, depending on a compile-time option. You need the @samp{-o} option to ensure that the output goes into the right file when the source file is in a different directory, as when using @code{VPATH} (@pxref{Directory Search}). However, compilers on some systems do not accept a @samp{-o} switch for object files. If you use such a system, and use @code{VPATH}, some compilations will put their output in the wrong place. A possible workaround for this problem is to give @code{OUTPUT_OPTION} the value @w{@samp{; mv $*.o $@@}}.

@node Implicit Variables, Chained Rules, Catalogue of Rules, Implicit Rules
@section Variables Used by Implicit Rules
@cindex flags for compilers

The commands in built-in implicit rules make liberal use of certain predefined variables. You can alter the values of these variables in the makefile, with arguments to @code{make}, or in the environment to alter how the implicit rules work without redefining the rules themselves. You can cancel all variables used by implicit rules with the @samp{-R} or @samp{--no-builtin-variables} option.

For example, the command used to compile a C source file actually says @samp{$(CC) -c $(CFLAGS) $(CPPFLAGS)}. The default values of the variables used are @samp{cc} and nothing, resulting in the command @samp{cc -c}. By redefining @samp{CC} to @samp{ncc}, you could cause @samp{ncc} to be used for all C compilations performed by the implicit rule.
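As a minimal sketch (using the hypothetical compiler name @samp{ncc} from the example above), such an override can be written in the makefile:

@example
CC = ncc        # every implicit C compilation now runs ncc -c ...
@end example

@noindent
or given for a single run on the command line, as in @samp{make CC=ncc}.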
By redefining @samp{CFLAGS} to be @samp{-g}, you could pass the @samp{-g} option to each compilation. @emph{All} implicit rules that do C compilation use @samp{$(CC)} to get the program name for the compiler and @emph{all} include @samp{$(CFLAGS)} among the arguments given to the compiler.@refill

The variables used in implicit rules fall into two classes: those that are names of programs (like @code{CC}) and those that contain arguments for the programs (like @code{CFLAGS}). (The ``name of a program'' may also contain some command arguments, but it must start with an actual executable program name.) If a variable value contains more than one argument, separate them with spaces.

The following tables describe some of the more commonly used predefined variables. This list is not exhaustive, and the default values shown here may not be what @code{make} selects for your environment. To see the complete list of predefined variables for your instance of GNU @code{make} you can run @samp{make -p} in a directory with no makefiles.

Here is a table of some of the more common variables used as names of programs in built-in rules:

@table @code
@item AR
@vindex AR
Archive-maintaining program; default @samp{ar}.
@pindex ar
@item AS
@vindex AS
Program for compiling assembly files; default @samp{as}.
@pindex as
@item CC
@vindex CC
Program for compiling C programs; default @samp{cc}.
@pindex cc
@item CXX
@vindex CXX
Program for compiling C++ programs; default @samp{g++}.
@pindex g++
@item CO
@vindex CO
Program for extracting a file from RCS; default @samp{co}.
@pindex co
@item CPP
@vindex CPP
Program for running the C preprocessor, with results to standard output; default @samp{$(CC) -E}.
@item FC
@vindex FC
Program for compiling or preprocessing Fortran and Ratfor programs; default @samp{f77}.
@pindex f77
@item GET
@vindex GET
Program for extracting a file from SCCS; default @samp{get}.
@pindex get
@item LEX
@vindex LEX
Program to use to turn Lex grammars into source code; default @samp{lex}.
@pindex lex
@item YACC
@vindex YACC
Program to use to turn Yacc grammars into source code; default @samp{yacc}.
@pindex yacc
@item LINT
@vindex LINT
Program to use to run lint on source code; default @samp{lint}.
@pindex lint
@item M2C
@vindex M2C
Program to use to compile Modula-2 source code; default @samp{m2c}.
@pindex m2c
@item PC
@vindex PC
Program for compiling Pascal programs; default @samp{pc}.
@pindex pc
@item MAKEINFO
@vindex MAKEINFO
Program to convert a Texinfo source file into an Info file; default @samp{makeinfo}.
@pindex makeinfo
@item TEX
@vindex TEX
Program to make @TeX{} @sc{dvi} files from @TeX{} source; default @samp{tex}.
@pindex tex
@item TEXI2DVI
@vindex TEXI2DVI
Program to make @TeX{} @sc{dvi} files from Texinfo source; default @samp{texi2dvi}.
@pindex texi2dvi
@item WEAVE
@vindex WEAVE
Program to translate Web into @TeX{}; default @samp{weave}.
@pindex weave
@item CWEAVE
@vindex CWEAVE
Program to translate C Web into @TeX{}; default @samp{cweave}.
@pindex cweave
@item TANGLE
@vindex TANGLE
Program to translate Web into Pascal; default @samp{tangle}.
@pindex tangle
@item CTANGLE
@vindex CTANGLE
Program to translate C Web into C; default @samp{ctangle}.
@pindex ctangle
@item RM
@vindex RM
Command to remove a file; default @samp{rm -f}.
@pindex rm
@end table

Here is a table of variables whose values are additional arguments for the programs above.
The default value for all of these variables is the empty string, unless otherwise noted.

@table @code
@item ARFLAGS
@vindex ARFLAGS
Flags to give the archive-maintaining program; default @samp{rv}.
@item ASFLAGS
@vindex ASFLAGS
Extra flags to give to the assembler (when explicitly invoked on a @samp{.s} or @samp{.S} file).
@item CFLAGS
@vindex CFLAGS
Extra flags to give to the C compiler.
@item CXXFLAGS
@vindex CXXFLAGS
Extra flags to give to the C++ compiler.
@item COFLAGS
@vindex COFLAGS
Extra flags to give to the RCS @code{co} program.
@item CPPFLAGS
@vindex CPPFLAGS
Extra flags to give to the C preprocessor and programs that use it (the C and Fortran compilers).
@item FFLAGS
@vindex FFLAGS
Extra flags to give to the Fortran compiler.
@item GFLAGS
@vindex GFLAGS
Extra flags to give to the SCCS @code{get} program.
@item LDFLAGS
@vindex LDFLAGS
Extra flags to give to compilers when they are supposed to invoke the linker, @samp{ld}.
@item LFLAGS
@vindex LFLAGS
Extra flags to give to Lex.
@item YFLAGS
@vindex YFLAGS
Extra flags to give to Yacc.
@item PFLAGS
@vindex PFLAGS
Extra flags to give to the Pascal compiler.
@item RFLAGS
@vindex RFLAGS
Extra flags to give to the Fortran compiler for Ratfor programs.
@item LINTFLAGS
@vindex LINTFLAGS
Extra flags to give to lint.
@end table

@node Chained Rules, Pattern Rules, Implicit Variables, Implicit Rules
@section Chains of Implicit Rules
@cindex chains of rules
@cindex rule, implicit, chains of

Sometimes a file can be made by a sequence of implicit rules. For example, a file @file{@var{n}.o} could be made from @file{@var{n}.y} by running first Yacc and then @code{cc}. Such a sequence is called a @dfn{chain}.

If the file @file{@var{n}.c} exists, or is mentioned in the makefile, no special searching is required: @code{make} finds that the object file can be made by C compilation from @file{@var{n}.c}; later on, when considering how to make @file{@var{n}.c}, the rule for running Yacc is used. Ultimately both @file{@var{n}.c} and @file{@var{n}.o} are updated.@refill

@cindex intermediate files
@cindex files, intermediate
However, even if @file{@var{n}.c} does not exist and is not mentioned, @code{make} knows how to envision it as the missing link between @file{@var{n}.o} and @file{@var{n}.y}! In this case, @file{@var{n}.c} is called an @dfn{intermediate file}. Once @code{make} has decided to use the intermediate file, it is entered in the data base as if it had been mentioned in the makefile, along with the implicit rule that says how to create it.@refill

Intermediate files are remade using their rules just like all other files. But intermediate files are treated differently in two ways.

The first difference is what happens if the intermediate file does not exist. If an ordinary file @var{b} does not exist, and @code{make} considers a target that depends on @var{b}, it invariably creates @var{b} and then updates the target from @var{b}. But if @var{b} is an intermediate file, then @code{make} can leave well enough alone. It won't bother updating @var{b}, or the ultimate target, unless some prerequisite of @var{b} is newer than that target or there is some other reason to update that target.

The second difference is that if @code{make} @emph{does} create @var{b} in order to update something else, it deletes @var{b} later on after it is no longer needed. Therefore, an intermediate file which did not exist before @code{make} also does not exist after @code{make}.
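For a concrete sketch (the file names are hypothetical), suppose the makefile mentions only the program and its object file:

@example
@group
prog: parse.o
        $(CC) -o prog parse.o
@end group
@end example

@noindent
If @file{parse.y} exists but @file{parse.c} does not, @code{make} chains the built-in Yacc and C compilation rules, treating @file{parse.c} as an intermediate file, and deletes @file{parse.c} later, once it is no longer needed.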
@code{make} reports the deletion to you by printing a @samp{rm -f} command showing which file it is deleting. Ordinarily, a file cannot be intermediate if it is mentioned in the makefile as a target or prerequisite. However, you can explicitly mark a file as intermediate by listing it as a prerequisite of the special target @code{.INTERMEDIATE}. This takes effect even if the file is mentioned explicitly in some other way. @cindex intermediate files, preserving @cindex preserving intermediate files @cindex secondary files You can prevent automatic deletion of an intermediate file by marking it as a @dfn{secondary} file. To do this, list it as a prerequisite of the special target @code{.SECONDARY}. When a file is secondary, @code{make} will not create the file merely because it does not already exist, but @code{make} does not automatically delete the file. Marking a file as secondary also marks it as intermediate. You can list the target pattern of an implicit rule (such as @samp{%.o}) as a prerequisite of the special target @code{.PRECIOUS} to preserve intermediate files made by implicit rules whose target patterns match that file's name; see @ref{Interrupts}.@refill @cindex preserving with @code{.PRECIOUS} @cindex @code{.PRECIOUS} intermediate files A chain can involve more than two implicit rules. For example, it is possible to make a file @file{foo} from @file{RCS/foo.y,v} by running RCS, Yacc and @code{cc}. Then both @file{foo.y} and @file{foo.c} are intermediate files that are deleted at the end.@refill No single implicit rule can appear more than once in a chain. This means that @code{make} will not even consider such a ridiculous thing as making @file{foo} from @file{foo.o.o} by running the linker twice. This constraint has the added benefit of preventing any infinite loop in the search for an implicit rule chain. There are some special implicit rules to optimize certain cases that would otherwise be handled by rule chains. For example, making @file{foo} from @file{foo.c} could be handled by compiling and linking with separate chained rules, using @file{foo.o} as an intermediate file. But what actually happens is that a special rule for this case does the compilation and linking with a single @code{cc} command. The optimized rule is used in preference to the step-by-step chain because it comes earlier in the ordering of rules. @node Pattern Rules, Last Resort, Chained Rules, Implicit Rules @section Defining and Redefining Pattern Rules You define an implicit rule by writing a @dfn{pattern rule}. A pattern rule looks like an ordinary rule, except that its target contains the character @samp{%} (exactly one of them). The target is considered a pattern for matching file names; the @samp{%} can match any nonempty substring, while other characters match only themselves. The prerequisites likewise use @samp{%} to show how their names relate to the target name. Thus, a pattern rule @samp{%.o : %.c} says how to make any file @file{@var{stem}.o} from another file @file{@var{stem}.c}.@refill Note that expansion using @samp{%} in pattern rules occurs @strong{after} any variable or function expansions, which take place when the makefile is read. @xref{Using Variables, , How to Use Variables}, and @ref{Functions, ,Functions for Transforming Text}. @menu * Pattern Intro:: An introduction to pattern rules. * Pattern Examples:: Examples of pattern rules. * Automatic Variables:: How to use automatic variables in the commands of implicit rules. * Pattern Match:: How patterns match. 
* Match-Anything Rules::       Precautions you should take prior to defining rules that can match any target file whatever.
* Canceling Rules::             How to override or cancel built-in rules.
@end menu

@node Pattern Intro, Pattern Examples, Pattern Rules, Pattern Rules
@subsection Introduction to Pattern Rules
@cindex pattern rule
@cindex rule, pattern

A pattern rule contains the character @samp{%} (exactly one of them) in the target; otherwise, it looks exactly like an ordinary rule. The target is a pattern for matching file names; the @samp{%} matches any nonempty substring, while other characters match only themselves.
@cindex target pattern, implicit
@cindex @code{%}, in pattern rules

For example, @samp{%.c} as a pattern matches any file name that ends in @samp{.c}. @samp{s.%.c} as a pattern matches any file name that starts with @samp{s.}, ends in @samp{.c} and is at least five characters long. (There must be at least one character to match the @samp{%}.) The substring that the @samp{%} matches is called the @dfn{stem}.@refill

@samp{%} in a prerequisite of a pattern rule stands for the same stem that was matched by the @samp{%} in the target. In order for the pattern rule to apply, its target pattern must match the file name under consideration and all of its prerequisites (after pattern substitution) must name files that exist or can be made. These files become prerequisites of the target.
@cindex prerequisite pattern, implicit

Thus, a rule of the form

@example
%.o : %.c ; @var{command}@dots{}
@end example

@noindent
specifies how to make a file @file{@var{n}.o}, with another file @file{@var{n}.c} as its prerequisite, provided that @file{@var{n}.c} exists or can be made.

There may also be prerequisites that do not use @samp{%}; such a prerequisite attaches to every file made by this pattern rule. These unvarying prerequisites are useful occasionally.

A pattern rule need not have any prerequisites that contain @samp{%}, or in fact any prerequisites at all. Such a rule is effectively a general wildcard. It provides a way to make any file that matches the target pattern. @xref{Last Resort}.

Pattern rules may have more than one target. Unlike normal rules, this does not act as many different rules with the same prerequisites and commands. If a pattern rule has multiple targets, @code{make} knows that the rule's commands are responsible for making all of the targets. The commands are executed only once to make all the targets. When @code{make} searches for a pattern rule to apply to a target, only the target pattern that matches that target matters; the rule's other target patterns are incidental. @code{make} worries only about supplying commands and prerequisites for the file presently in question. However, when this file's commands are run, the other targets are marked as having been updated themselves.
@cindex multiple targets, in pattern rule
@cindex target, multiple in pattern rule

The order in which pattern rules appear in the makefile is important since this is the order in which they are considered. Of equally applicable rules, only the first one found is used. The rules you write take precedence over those that are built in. Note, however, that a rule whose prerequisites actually exist or are mentioned always takes priority over a rule with prerequisites that must be made by chaining other implicit rules.
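For example, here is a sketch of a user-defined rule that replaces the predefined C compilation rule (the extra @samp{-DMYPROJECT} flag is purely illustrative):

@example
@group
%.o : %.c
        $(CC) -c $(CFLAGS) $(CPPFLAGS) -DMYPROJECT $< -o $@@
@end group
@end example

@noindent
Because rules you write take precedence over the built-in ones, this rule is used for every object file whose @samp{.c} source exists or can be made.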
@cindex pattern rules, order of @cindex order of pattern rules @node Pattern Examples, Automatic Variables, Pattern Intro, Pattern Rules @subsection Pattern Rule Examples Here are some examples of pattern rules actually predefined in @code{make}. First, the rule that compiles @samp{.c} files into @samp{.o} files:@refill @example %.o : %.c $(CC) -c $(CFLAGS) $(CPPFLAGS) $< -o $@@ @end example @noindent defines a rule that can make any file @file{@var{x}.o} from @file{@var{x}.c}. The command uses the automatic variables @samp{$@@} and @samp{$<} to substitute the names of the target file and the source file in each case where the rule applies (@pxref{Automatic Variables}).@refill Here is a second built-in rule: @example % :: RCS/%,v $(CO) $(COFLAGS) $< @end example @noindent defines a rule that can make any file @file{@var{x}} whatsoever from a corresponding file @file{@var{x},v} in the subdirectory @file{RCS}. Since the target is @samp{%}, this rule will apply to any file whatever, provided the appropriate prerequisite file exists. The double colon makes the rule @dfn{terminal}, which means that its prerequisite may not be an intermediate file (@pxref{Match-Anything Rules, ,Match-Anything Pattern Rules}).@refill @need 500 This pattern rule has two targets: @example @group %.tab.c %.tab.h: %.y bison -d $< @end group @end example @noindent @c The following paragraph is rewritten to avoid overfull hboxes This tells @code{make} that the command @samp{bison -d @var{x}.y} will make both @file{@var{x}.tab.c} and @file{@var{x}.tab.h}. If the file @file{foo} depends on the files @file{parse.tab.o} and @file{scan.o} and the file @file{scan.o} depends on the file @file{parse.tab.h}, when @file{parse.y} is changed, the command @samp{bison -d parse.y} will be executed only once, and the prerequisites of both @file{parse.tab.o} and @file{scan.o} will be satisfied. (Presumably the file @file{parse.tab.o} will be recompiled from @file{parse.tab.c} and the file @file{scan.o} from @file{scan.c}, while @file{foo} is linked from @file{parse.tab.o}, @file{scan.o}, and its other prerequisites, and it will execute happily ever after.)@refill @node Automatic Variables, Pattern Match, Pattern Examples, Pattern Rules @subsection Automatic Variables @cindex automatic variables @cindex variables, automatic @cindex variables, and implicit rule Suppose you are writing a pattern rule to compile a @samp{.c} file into a @samp{.o} file: how do you write the @samp{cc} command so that it operates on the right source file name? You cannot write the name in the command, because the name is different each time the implicit rule is applied. What you do is use a special feature of @code{make}, the @dfn{automatic variables}. These variables have values computed afresh for each rule that is executed, based on the target and prerequisites of the rule. In this example, you would use @samp{$@@} for the object file name and @samp{$<} for the source file name. @cindex automatic variables in prerequisites @cindex prerequisites, and automatic variables It's very important that you recognize the limited scope in which automatic variable values are available: they only have values within the command script. In particular, you cannot use them anywhere within the target list of a rule; they have no value there and will expand to the empty string. Also, they cannot be accessed directly within the prerequisite list of a rule. A common mistake is attempting to use @code{$@@} within the prerequisites list; this will not work. 
However, there is a special feature of GNU @code{make}, secondary expansion (@pxref{Secondary Expansion}), which will allow automatic variable values to be used in prerequisite lists. Here is a table of automatic variables: @table @code @vindex $@@ @vindex @@ @r{(automatic variable)} @item $@@ The file name of the target of the rule. If the target is an archive member, then @samp{$@@} is the name of the archive file. In a pattern rule that has multiple targets (@pxref{Pattern Intro, ,Introduction to Pattern Rules}), @samp{$@@} is the name of whichever target caused the rule's commands to be run. @vindex $% @vindex % @r{(automatic variable)} @item $% The target member name, when the target is an archive member. @xref{Archives}. For example, if the target is @file{foo.a(bar.o)} then @samp{$%} is @file{bar.o} and @samp{$@@} is @file{foo.a}. @samp{$%} is empty when the target is not an archive member. @vindex $< @vindex < @r{(automatic variable)} @item $< The name of the first prerequisite. If the target got its commands from an implicit rule, this will be the first prerequisite added by the implicit rule (@pxref{Implicit Rules}). @vindex $? @vindex ? @r{(automatic variable)} @item $? The names of all the prerequisites that are newer than the target, with spaces between them. For prerequisites which are archive members, only the member named is used (@pxref{Archives}). @cindex prerequisites, list of changed @cindex list of changed prerequisites @vindex $^ @vindex ^ @r{(automatic variable)} @item $^ The names of all the prerequisites, with spaces between them. For prerequisites which are archive members, only the member named is used (@pxref{Archives}). A target has only one prerequisite on each other file it depends on, no matter how many times each file is listed as a prerequisite. So if you list a prerequisite more than once for a target, the value of @code{$^} contains just one copy of the name. This list does @strong{not} contain any of the order-only prerequisites; for those see the @samp{$|} variable, below. @cindex prerequisites, list of all @cindex list of all prerequisites @vindex $+ @vindex + @r{(automatic variable)} @item $+ This is like @samp{$^}, but prerequisites listed more than once are duplicated in the order they were listed in the makefile. This is primarily useful for use in linking commands where it is meaningful to repeat library file names in a particular order. @vindex $| @vindex | @r{(automatic variable)} @item $| The names of all the order-only prerequisites, with spaces between them. @vindex $* @vindex * @r{(automatic variable)} @item $* The stem with which an implicit rule matches (@pxref{Pattern Match, ,How Patterns Match}). If the target is @file{dir/a.foo.b} and the target pattern is @file{a.%.b} then the stem is @file{dir/foo}. The stem is useful for constructing names of related files.@refill @cindex stem, variable for In a static pattern rule, the stem is part of the file name that matched the @samp{%} in the target pattern. In an explicit rule, there is no stem; so @samp{$*} cannot be determined in that way. Instead, if the target name ends with a recognized suffix (@pxref{Suffix Rules, ,Old-Fashioned Suffix Rules}), @samp{$*} is set to the target name minus the suffix. For example, if the target name is @samp{foo.c}, then @samp{$*} is set to @samp{foo}, since @samp{.c} is a suffix. GNU @code{make} does this bizarre thing only for compatibility with other implementations of @code{make}. 
You should generally avoid using @samp{$*} except in implicit rules or static pattern rules.@refill If the target name in an explicit rule does not end with a recognized suffix, @samp{$*} is set to the empty string for that rule. @end table @samp{$?} is useful even in explicit rules when you wish to operate on only the prerequisites that have changed. For example, suppose that an archive named @file{lib} is supposed to contain copies of several object files. This rule copies just the changed object files into the archive: @example @group lib: foo.o bar.o lose.o win.o ar r lib $? @end group @end example Of the variables listed above, four have values that are single file names, and three have values that are lists of file names. These seven have variants that get just the file's directory name or just the file name within the directory. The variant variables' names are formed by appending @samp{D} or @samp{F}, respectively. These variants are semi-obsolete in GNU @code{make} since the functions @code{dir} and @code{notdir} can be used to get a similar effect (@pxref{File Name Functions, , Functions for File Names}). Note, however, that the @samp{D} variants all omit the trailing slash which always appears in the output of the @code{dir} function. Here is a table of the variants: @table @samp @vindex $(@@D) @vindex @@D @r{(automatic variable)} @item $(@@D) The directory part of the file name of the target, with the trailing slash removed. If the value of @samp{$@@} is @file{dir/foo.o} then @samp{$(@@D)} is @file{dir}. This value is @file{.} if @samp{$@@} does not contain a slash. @vindex $(@@F) @vindex @@F @r{(automatic variable)} @item $(@@F) The file-within-directory part of the file name of the target. If the value of @samp{$@@} is @file{dir/foo.o} then @samp{$(@@F)} is @file{foo.o}. @samp{$(@@F)} is equivalent to @samp{$(notdir $@@)}. @vindex $(*D) @vindex *D @r{(automatic variable)} @item $(*D) @vindex $(*F) @vindex *F @r{(automatic variable)} @itemx $(*F) The directory part and the file-within-directory part of the stem; @file{dir} and @file{foo} in this example. @vindex $(%D) @vindex %D @r{(automatic variable)} @item $(%D) @vindex $(%F) @vindex %F @r{(automatic variable)} @itemx $(%F) The directory part and the file-within-directory part of the target archive member name. This makes sense only for archive member targets of the form @file{@var{archive}(@var{member})} and is useful only when @var{member} may contain a directory name. (@xref{Archive Members, ,Archive Members as Targets}.) 
@vindex $( tar-`sed -e '/version_string/!d' \ -e 's/[^0-9.]*\([0-9.]*\).*/\1/' \ -e q version.c`.shar.Z @end group @group .PHONY: dist dist: $(SRCS) $(AUX) echo tar-`sed \ -e '/version_string/!d' \ -e 's/[^0-9.]*\([0-9.]*\).*/\1/' \ -e q version.c` > .fname -rm -rf `cat .fname` mkdir `cat .fname` ln $(SRCS) $(AUX) `cat .fname` tar chZf `cat .fname`.tar.Z `cat .fname` -rm -rf `cat .fname` .fname @end group @group tar.zoo: $(SRCS) $(AUX) -rm -rf tmp.dir -mkdir tmp.dir -rm tar.zoo for X in $(SRCS) $(AUX) ; do \ echo $$X ; \ sed 's/$$/^M/' $$X \ > tmp.dir/$$X ; done cd tmp.dir ; zoo aM ../tar.zoo * -rm -rf tmp.dir @end group @end example @raisesections @include fdl.texi @lowersections @node Concept Index, Name Index, GNU Free Documentation License, Top @unnumbered Index of Concepts @printindex cp @node Name Index, , Concept Index, Top @unnumbered Index of Functions, Variables, & Directives @printindex fn @bye make-doc-non-dfsg-3.81.orig/doc/stamp-vti0000644000175000017500000000013410416557457020477 0ustar srivastasrivasta@set UPDATED 1 April 2006 @set UPDATED-MONTH April 2006 @set EDITION 3.81 @set VERSION 3.81 make-doc-non-dfsg-3.81.orig/doc/texinfo.tex0000644000175000017500000070156310416557457021044 0ustar srivastasrivasta% texinfo.tex -- TeX macros to handle Texinfo files. % % Load plain if necessary, i.e., if running under initex. \expandafter\ifx\csname fmtname\endcsname\relax\input plain\fi % \def\texinfoversion{2005-07-05.19} % % Copyright (C) 1985, 1986, 1988, 1990, 1991, 1992, 1993, 1994, 1995, % 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software % Foundation, Inc. % % This texinfo.tex file is free software; you can redistribute it and/or % modify it under the terms of the GNU General Public License as % published by the Free Software Foundation; either version 2, or (at % your option) any later version. % % This texinfo.tex file is distributed in the hope that it will be % useful, but WITHOUT ANY WARRANTY; without even the implied warranty % of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU % General Public License for more details. % % You should have received a copy of the GNU General Public License % along with this texinfo.tex file; see the file COPYING. If not, write % to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, % Boston, MA 02110-1301, USA. % % As a special exception, when this file is read by TeX when processing % a Texinfo source document, you may use the result without % restriction. (This has been our intent since Texinfo was invented.) % % Please try the latest version of texinfo.tex before submitting bug % reports; you can get the latest version from: % http://www.gnu.org/software/texinfo/ (the Texinfo home page), or % ftp://tug.org/tex/texinfo.tex % (and all CTAN mirrors, see http://www.ctan.org). % The texinfo.tex in any given distribution could well be out % of date, so if that's what you're using, please check. % % Send bug reports to bug-texinfo@gnu.org. Please include including a % complete document in each bug report with which we can reproduce the % problem. Patches are, of course, greatly appreciated. % % To process a Texinfo manual with TeX, it's most reliable to use the % texi2dvi shell script that comes with the distribution. For a simple % manual foo.texi, however, you can get away with this: % tex foo.texi % texindex foo.?? % tex foo.texi % tex foo.texi % dvips foo.dvi -o # or whatever; this makes foo.ps. % The extra TeX runs get the cross-reference information correct. 
% Sometimes one run after texindex suffices, and sometimes you need more % than two; texi2dvi does it as many times as necessary. % % It is possible to adapt texinfo.tex for other languages, to some % extent. You can get the existing language-specific files from the % full Texinfo distribution. % % The GNU Texinfo home page is http://www.gnu.org/software/texinfo. \message{Loading texinfo [version \texinfoversion]:} % If in a .fmt file, print the version number % and turn on active characters that we couldn't do earlier because % they might have appeared in the input file name. \everyjob{\message{[Texinfo version \texinfoversion]}% \catcode`+=\active \catcode`\_=\active} \message{Basics,} \chardef\other=12 % We never want plain's \outer definition of \+ in Texinfo. % For @tex, we can use \tabalign. \let\+ = \relax % Save some plain tex macros whose names we will redefine. \let\ptexb=\b \let\ptexbullet=\bullet \let\ptexc=\c \let\ptexcomma=\, \let\ptexdot=\. \let\ptexdots=\dots \let\ptexend=\end \let\ptexequiv=\equiv \let\ptexexclam=\! \let\ptexfootnote=\footnote \let\ptexgtr=> \let\ptexhat=^ \let\ptexi=\i \let\ptexindent=\indent \let\ptexinsert=\insert \let\ptexlbrace=\{ \let\ptexless=< \let\ptexnewwrite\newwrite \let\ptexnoindent=\noindent \let\ptexplus=+ \let\ptexrbrace=\} \let\ptexslash=\/ \let\ptexstar=\* \let\ptext=\t % If this character appears in an error message or help string, it % starts a new line in the output. \newlinechar = `^^J % Use TeX 3.0's \inputlineno to get the line number, for better error % messages, but if we're using an old version of TeX, don't do anything. % \ifx\inputlineno\thisisundefined \let\linenumber = \empty % Pre-3.0. \else \def\linenumber{l.\the\inputlineno:\space} \fi % Set up fixed words for English if not already set. 
\ifx\putwordAppendix\undefined \gdef\putwordAppendix{Appendix}\fi \ifx\putwordChapter\undefined \gdef\putwordChapter{Chapter}\fi \ifx\putwordfile\undefined \gdef\putwordfile{file}\fi \ifx\putwordin\undefined \gdef\putwordin{in}\fi \ifx\putwordIndexIsEmpty\undefined \gdef\putwordIndexIsEmpty{(Index is empty)}\fi \ifx\putwordIndexNonexistent\undefined \gdef\putwordIndexNonexistent{(Index is nonexistent)}\fi \ifx\putwordInfo\undefined \gdef\putwordInfo{Info}\fi \ifx\putwordInstanceVariableof\undefined \gdef\putwordInstanceVariableof{Instance Variable of}\fi \ifx\putwordMethodon\undefined \gdef\putwordMethodon{Method on}\fi \ifx\putwordNoTitle\undefined \gdef\putwordNoTitle{No Title}\fi \ifx\putwordof\undefined \gdef\putwordof{of}\fi \ifx\putwordon\undefined \gdef\putwordon{on}\fi \ifx\putwordpage\undefined \gdef\putwordpage{page}\fi \ifx\putwordsection\undefined \gdef\putwordsection{section}\fi \ifx\putwordSection\undefined \gdef\putwordSection{Section}\fi \ifx\putwordsee\undefined \gdef\putwordsee{see}\fi \ifx\putwordSee\undefined \gdef\putwordSee{See}\fi \ifx\putwordShortTOC\undefined \gdef\putwordShortTOC{Short Contents}\fi \ifx\putwordTOC\undefined \gdef\putwordTOC{Table of Contents}\fi % \ifx\putwordMJan\undefined \gdef\putwordMJan{January}\fi \ifx\putwordMFeb\undefined \gdef\putwordMFeb{February}\fi \ifx\putwordMMar\undefined \gdef\putwordMMar{March}\fi \ifx\putwordMApr\undefined \gdef\putwordMApr{April}\fi \ifx\putwordMMay\undefined \gdef\putwordMMay{May}\fi \ifx\putwordMJun\undefined \gdef\putwordMJun{June}\fi \ifx\putwordMJul\undefined \gdef\putwordMJul{July}\fi \ifx\putwordMAug\undefined \gdef\putwordMAug{August}\fi \ifx\putwordMSep\undefined \gdef\putwordMSep{September}\fi \ifx\putwordMOct\undefined \gdef\putwordMOct{October}\fi \ifx\putwordMNov\undefined \gdef\putwordMNov{November}\fi \ifx\putwordMDec\undefined \gdef\putwordMDec{December}\fi % \ifx\putwordDefmac\undefined \gdef\putwordDefmac{Macro}\fi \ifx\putwordDefspec\undefined \gdef\putwordDefspec{Special Form}\fi \ifx\putwordDefvar\undefined \gdef\putwordDefvar{Variable}\fi \ifx\putwordDefopt\undefined \gdef\putwordDefopt{User Option}\fi \ifx\putwordDeffunc\undefined \gdef\putwordDeffunc{Function}\fi % In some macros, we cannot use the `\? notation---the left quote is % in some cases the escape char. \chardef\backChar = `\\ \chardef\colonChar = `\: \chardef\commaChar = `\, \chardef\dotChar = `\. \chardef\exclamChar= `\! \chardef\plusChar = `\+ \chardef\questChar = `\? \chardef\semiChar = `\; \chardef\underChar = `\_ \chardef\spaceChar = `\ % \chardef\spacecat = 10 \def\spaceisspace{\catcode\spaceChar=\spacecat} {% for help with debugging. % example usage: \expandafter\show\activebackslash \catcode`\! = 0 \catcode`\\ = \active !global!def!activebackslash{\} } % Ignore a token. % \def\gobble#1{} % The following is used inside several \edef's. \def\makecsname#1{\expandafter\noexpand\csname#1\endcsname} % Hyphenation fixes. \hyphenation{ Flor-i-da Ghost-script Ghost-view Mac-OS Post-Script ap-pen-dix bit-map bit-maps data-base data-bases eshell fall-ing half-way long-est man-u-script man-u-scripts mini-buf-fer mini-buf-fers over-view par-a-digm par-a-digms rath-er rec-tan-gu-lar ro-bot-ics se-vere-ly set-up spa-ces spell-ing spell-ings stand-alone strong-est time-stamp time-stamps which-ever white-space wide-spread wrap-around } % Margin to add to right of even pages, to left of odd pages. 
\newdimen\bindingoffset \newdimen\normaloffset \newdimen\pagewidth \newdimen\pageheight % For a final copy, take out the rectangles % that mark overfull boxes (in case you have decided % that the text looks ok even though it passes the margin). % \def\finalout{\overfullrule=0pt} % @| inserts a changebar to the left of the current line. It should % surround any changed text. This approach does *not* work if the % change spans more than two lines of output. To handle that, we would % have adopt a much more difficult approach (putting marks into the main % vertical list for the beginning and end of each change). % \def\|{% % \vadjust can only be used in horizontal mode. \leavevmode % % Append this vertical mode material after the current line in the output. \vadjust{% % We want to insert a rule with the height and depth of the current % leading; that is exactly what \strutbox is supposed to record. \vskip-\baselineskip % % \vadjust-items are inserted at the left edge of the type. So % the \llap here moves out into the left-hand margin. \llap{% % % For a thicker or thinner bar, change the `1pt'. \vrule height\baselineskip width1pt % % This is the space between the bar and the text. \hskip 12pt }% }% } % Sometimes it is convenient to have everything in the transcript file % and nothing on the terminal. We don't just call \tracingall here, % since that produces some useless output on the terminal. We also make % some effort to order the tracing commands to reduce output in the log % file; cf. trace.sty in LaTeX. % \def\gloggingall{\begingroup \globaldefs = 1 \loggingall \endgroup}% \def\loggingall{% \tracingstats2 \tracingpages1 \tracinglostchars2 % 2 gives us more in etex \tracingparagraphs1 \tracingoutput1 \tracingmacros2 \tracingrestores1 \showboxbreadth\maxdimen \showboxdepth\maxdimen \ifx\eTeXversion\undefined\else % etex gives us more logging \tracingscantokens1 \tracingifs1 \tracinggroups1 \tracingnesting2 \tracingassigns1 \fi \tracingcommands3 % 3 gives us more in etex \errorcontextlines16 }% % add check for \lastpenalty to plain's definitions. If the last thing % we did was a \nobreak, we don't want to insert more space. % \def\smallbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\smallskipamount \removelastskip\penalty-50\smallskip\fi\fi} \def\medbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\medskipamount \removelastskip\penalty-100\medskip\fi\fi} \def\bigbreak{\ifnum\lastpenalty<10000\par\ifdim\lastskip<\bigskipamount \removelastskip\penalty-200\bigskip\fi\fi} % For @cropmarks command. % Do @cropmarks to get crop marks. % \newif\ifcropmarks \let\cropmarks = \cropmarkstrue % % Dimensions to add cropmarks at corners. % Added by P. A. MacKay, 12 Nov. 1986 % \newdimen\outerhsize \newdimen\outervsize % set by the paper size routines \newdimen\cornerlong \cornerlong=1pc \newdimen\cornerthick \cornerthick=.3pt \newdimen\topandbottommargin \topandbottommargin=.75in % Main output routine. \chardef\PAGE = 255 \output = {\onepageout{\pagecontents\PAGE}} \newbox\headlinebox \newbox\footlinebox % \onepageout takes a vbox as an argument. Note that \pagecontents % does insertions, but you have to call it yourself. \def\onepageout#1{% \ifcropmarks \hoffset=0pt \else \hoffset=\normaloffset \fi % \ifodd\pageno \advance\hoffset by \bindingoffset \else \advance\hoffset by -\bindingoffset\fi % % Do this outside of the \shipout so @code etc. will be expanded in % the headline as they should be, not taken literally (outputting ''code). 
\setbox\headlinebox = \vbox{\let\hsize=\pagewidth \makeheadline}% \setbox\footlinebox = \vbox{\let\hsize=\pagewidth \makefootline}% % {% % Have to do this stuff outside the \shipout because we want it to % take effect in \write's, yet the group defined by the \vbox ends % before the \shipout runs. % \indexdummies % don't expand commands in the output. \shipout\vbox{% % Do this early so pdf references go to the beginning of the page. \ifpdfmakepagedest \pdfdest name{\the\pageno} xyz\fi % \ifcropmarks \vbox to \outervsize\bgroup \hsize = \outerhsize \vskip-\topandbottommargin \vtop to0pt{% \line{\ewtop\hfil\ewtop}% \nointerlineskip \line{% \vbox{\moveleft\cornerthick\nstop}% \hfill \vbox{\moveright\cornerthick\nstop}% }% \vss}% \vskip\topandbottommargin \line\bgroup \hfil % center the page within the outer (page) hsize. \ifodd\pageno\hskip\bindingoffset\fi \vbox\bgroup \fi % \unvbox\headlinebox \pagebody{#1}% \ifdim\ht\footlinebox > 0pt % Only leave this space if the footline is nonempty. % (We lessened \vsize for it in \oddfootingxxx.) % The \baselineskip=24pt in plain's \makefootline has no effect. \vskip 2\baselineskip \unvbox\footlinebox \fi % \ifcropmarks \egroup % end of \vbox\bgroup \hfil\egroup % end of (centering) \line\bgroup \vskip\topandbottommargin plus1fill minus1fill \boxmaxdepth = \cornerthick \vbox to0pt{\vss \line{% \vbox{\moveleft\cornerthick\nsbot}% \hfill \vbox{\moveright\cornerthick\nsbot}% }% \nointerlineskip \line{\ewbot\hfil\ewbot}% }% \egroup % \vbox from first cropmarks clause \fi }% end of \shipout\vbox }% end of group with \indexdummies \advancepageno \ifnum\outputpenalty>-20000 \else\dosupereject\fi } \newinsert\margin \dimen\margin=\maxdimen \def\pagebody#1{\vbox to\pageheight{\boxmaxdepth=\maxdepth #1}} {\catcode`\@ =11 \gdef\pagecontents#1{\ifvoid\topins\else\unvbox\topins\fi % marginal hacks, juha@viisa.uucp (Juha Takala) \ifvoid\margin\else % marginal info is present \rlap{\kern\hsize\vbox to\z@{\kern1pt\box\margin \vss}}\fi \dimen@=\dp#1 \unvbox#1 \ifvoid\footins\else\vskip\skip\footins\footnoterule \unvbox\footins\fi \ifr@ggedbottom \kern-\dimen@ \vfil \fi} } % Here are the rules for the cropmarks. Note that they are % offset so that the space between them is truly \outerhsize or \outervsize % (P. A. MacKay, 12 November, 1986) % \def\ewtop{\vrule height\cornerthick depth0pt width\cornerlong} \def\nstop{\vbox {\hrule height\cornerthick depth\cornerlong width\cornerthick}} \def\ewbot{\vrule height0pt depth\cornerthick width\cornerlong} \def\nsbot{\vbox {\hrule height\cornerlong depth\cornerthick width\cornerthick}} % Parse an argument, then pass it to #1. The argument is the rest of % the input line (except we remove a trailing comment). #1 should be a % macro which expects an ordinary undelimited TeX argument. % \def\parsearg{\parseargusing{}} \def\parseargusing#1#2{% \def\next{#2}% \begingroup \obeylines \spaceisspace #1% \parseargline\empty% Insert the \empty token, see \finishparsearg below. } {\obeylines % \gdef\parseargline#1^^M{% \endgroup % End of the group started in \parsearg. \argremovecomment #1\comment\ArgTerm% }% } % First remove any @comment, then any @c comment. \def\argremovecomment#1\comment#2\ArgTerm{\argremovec #1\c\ArgTerm} \def\argremovec#1\c#2\ArgTerm{\argcheckspaces#1\^^M\ArgTerm} % Each occurence of `\^^M' or `\^^M' is replaced by a single space. % % \argremovec might leave us with trailing space, e.g., % @end itemize @c foo % This space token undergoes the same procedure and is eventually removed % by \finishparsearg. 
% \def\argcheckspaces#1\^^M{\argcheckspacesX#1\^^M \^^M} \def\argcheckspacesX#1 \^^M{\argcheckspacesY#1\^^M} \def\argcheckspacesY#1\^^M#2\^^M#3\ArgTerm{% \def\temp{#3}% \ifx\temp\empty % We cannot use \next here, as it holds the macro to run; % thus we reuse \temp. \let\temp\finishparsearg \else \let\temp\argcheckspaces \fi % Put the space token in: \temp#1 #3\ArgTerm } % If a _delimited_ argument is enclosed in braces, they get stripped; so % to get _exactly_ the rest of the line, we had to prevent such situation. % We prepended an \empty token at the very beginning and we expand it now, % just before passing the control to \next. % (Similarily, we have to think about #3 of \argcheckspacesY above: it is % either the null string, or it ends with \^^M---thus there is no danger % that a pair of braces would be stripped. % % But first, we have to remove the trailing space token. % \def\finishparsearg#1 \ArgTerm{\expandafter\next\expandafter{#1}} % \parseargdef\foo{...} % is roughly equivalent to % \def\foo{\parsearg\Xfoo} % \def\Xfoo#1{...} % % Actually, I use \csname\string\foo\endcsname, ie. \\foo, as it is my % favourite TeX trick. --kasal, 16nov03 \def\parseargdef#1{% \expandafter \doparseargdef \csname\string#1\endcsname #1% } \def\doparseargdef#1#2{% \def#2{\parsearg#1}% \def#1##1% } % Several utility definitions with active space: { \obeyspaces \gdef\obeyedspace{ } % Make each space character in the input produce a normal interword % space in the output. Don't allow a line break at this space, as this % is used only in environments like @example, where each line of input % should produce a line of output anyway. % \gdef\sepspaces{\obeyspaces\let =\tie} % If an index command is used in an @example environment, any spaces % therein should become regular spaces in the raw index file, not the % expansion of \tie (\leavevmode \penalty \@M \ ). \gdef\unsepspaces{\let =\space} } \def\flushcr{\ifx\par\lisppar \def\next##1{}\else \let\next=\relax \fi \next} % Define the framework for environments in texinfo.tex. It's used like this: % % \envdef\foo{...} % \def\Efoo{...} % % It's the responsibility of \envdef to insert \begingroup before the % actual body; @end closes the group after calling \Efoo. \envdef also % defines \thisenv, so the current environment is known; @end checks % whether the environment name matches. The \checkenv macro can also be % used to check whether the current environment is the one expected. % % Non-false conditionals (@iftex, @ifset) don't fit into this, so they % are not treated as enviroments; they don't open a group. (The % implementation of @end takes care not to call \endgroup in this % special case.) % At runtime, environments start with this: \def\startenvironment#1{\begingroup\def\thisenv{#1}} % initialize \let\thisenv\empty % ... but they get defined via ``\envdef\foo{...}'': \long\def\envdef#1#2{\def#1{\startenvironment#1#2}} \def\envparseargdef#1#2{\parseargdef#1{\startenvironment#1#2}} % Check whether we're in the right environment: \def\checkenv#1{% \def\temp{#1}% \ifx\thisenv\temp \else \badenverr \fi } % Evironment mismatch, #1 expected: \def\badenverr{% \errhelp = \EMsimple \errmessage{This command can appear only \inenvironment\temp, not \inenvironment\thisenv}% } \def\inenvironment#1{% \ifx#1\empty out of any environment% \else in environment \expandafter\string#1% \fi } % @end foo executes the definition of \Efoo. 
% But first, it executes a specialized version of \checkenv % \parseargdef\end{% \if 1\csname iscond.#1\endcsname \else % The general wording of \badenverr may not be ideal, but... --kasal, 06nov03 \expandafter\checkenv\csname#1\endcsname \csname E#1\endcsname \endgroup \fi } \newhelp\EMsimple{Press RETURN to continue.} %% Simple single-character @ commands % @@ prints an @ % Kludge this until the fonts are right (grr). \def\@{{\tt\char64}} % This is turned off because it was never documented % and you can use @w{...} around a quote to suppress ligatures. %% Define @` and @' to be the same as ` and ' %% but suppressing ligatures. %\def\`{{`}} %\def\'{{'}} % Used to generate quoted braces. \def\mylbrace {{\tt\char123}} \def\myrbrace {{\tt\char125}} \let\{=\mylbrace \let\}=\myrbrace \begingroup % Definitions to produce \{ and \} commands for indices, % and @{ and @} for the aux/toc files. \catcode`\{ = \other \catcode`\} = \other \catcode`\[ = 1 \catcode`\] = 2 \catcode`\! = 0 \catcode`\\ = \other !gdef!lbracecmd[\{]% !gdef!rbracecmd[\}]% !gdef!lbraceatcmd[@{]% !gdef!rbraceatcmd[@}]% !endgroup % @comma{} to avoid , parsing problems. \let\comma = , % Accents: @, @dotaccent @ringaccent @ubaraccent @udotaccent % Others are defined by plain TeX: @` @' @" @^ @~ @= @u @v @H. \let\, = \c \let\dotaccent = \. \def\ringaccent#1{{\accent23 #1}} \let\tieaccent = \t \let\ubaraccent = \b \let\udotaccent = \d % Other special characters: @questiondown @exclamdown @ordf @ordm % Plain TeX defines: @AA @AE @O @OE @L (plus lowercase versions) @ss. \def\questiondown{?`} \def\exclamdown{!`} \def\ordf{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{a}}} \def\ordm{\leavevmode\raise1ex\hbox{\selectfonts\lllsize \underbar{o}}} % Dotless i and dotless j, used for accents. \def\imacro{i} \def\jmacro{j} \def\dotless#1{% \def\temp{#1}% \ifx\temp\imacro \ptexi \else\ifx\temp\jmacro \j \else \errmessage{@dotless can be used only with i or j}% \fi\fi } % The \TeX{} logo, as in plain, but resetting the spacing so that a % period following counts as ending a sentence. (Idea found in latex.) % \edef\TeX{\TeX \spacefactor=1000 } % @LaTeX{} logo. Not quite the same results as the definition in % latex.ltx, since we use a different font for the raised A; it's most % convenient for us to use an explicitly smaller font, rather than using % the \scriptstyle font (since we don't reset \scriptstyle and % \scriptscriptstyle). % \def\LaTeX{% L\kern-.36em {\setbox0=\hbox{T}% \vbox to \ht0{\hbox{\selectfonts\lllsize A}\vss}}% \kern-.15em \TeX } % Be sure we're in horizontal mode when doing a tie, since we make space % equivalent to this in @example-like environments. Otherwise, a space % at the beginning of a line will start with \penalty -- and % since \penalty is valid in vertical mode, we'd end up putting the % penalty on the vertical list instead of in the new paragraph. {\catcode`@ = 11 % Avoid using \@M directly, because that causes trouble % if the definition is written into an index file. \global\let\tiepenalty = \@M \gdef\tie{\leavevmode\penalty\tiepenalty\ } } % @: forces normal size whitespace following. \def\:{\spacefactor=1000 } % @* forces a line break. \def\*{\hfil\break\hbox{}\ignorespaces} % @/ allows a line break. \let\/=\allowbreak % @. is an end-of-sentence period. \def\.{.\spacefactor=\endofsentencespacefactor\space} % @! is an end-of-sentence bang. \def\!{!\spacefactor=\endofsentencespacefactor\space} % @? is an end-of-sentence query. 
\def\?{?\spacefactor=\endofsentencespacefactor\space} % @frenchspacing on|off says whether to put extra space after punctuation. % \def\onword{on} \def\offword{off} % \parseargdef\frenchspacing{% \def\temp{#1}% \ifx\temp\onword \plainfrenchspacing \else\ifx\temp\offword \plainnonfrenchspacing \else \errhelp = \EMsimple \errmessage{Unknown @frenchspacing option `\temp', must be on/off}% \fi\fi } % @w prevents a word break. Without the \leavevmode, @w at the % beginning of a paragraph, when TeX is still in vertical mode, would % produce a whole line of output instead of starting the paragraph. \def\w#1{\leavevmode\hbox{#1}} % @group ... @end group forces ... to be all on one page, by enclosing % it in a TeX vbox. We use \vtop instead of \vbox to construct the box % to keep its height that of a normal line. According to the rules for % \topskip (p.114 of the TeXbook), the glue inserted is % max (\topskip - \ht (first item), 0). If that height is large, % therefore, no glue is inserted, and the space between the headline and % the text is small, which looks bad. % % Another complication is that the group might be very large. This can % cause the glue on the previous page to be unduly stretched, because it % does not have much material. In this case, it's better to add an % explicit \vfill so that the extra space is at the bottom. The % threshold for doing this is if the group is more than \vfilllimit % percent of a page (\vfilllimit can be changed inside of @tex). % \newbox\groupbox \def\vfilllimit{0.7} % \envdef\group{% \ifnum\catcode`\^^M=\active \else \errhelp = \groupinvalidhelp \errmessage{@group invalid in context where filling is enabled}% \fi \startsavinginserts % \setbox\groupbox = \vtop\bgroup % Do @comment since we are called inside an environment such as % @example, where each end-of-line in the input causes an % end-of-line in the output. We don't want the end-of-line after % the `@group' to put extra space in the output. Since @group % should appear on a line by itself (according to the Texinfo % manual), we don't worry about eating any user text. \comment } % % The \vtop produces a box with normal height and large depth; thus, TeX puts % \baselineskip glue before it, and (when the next line of text is done) % \lineskip glue after it. Thus, space below is not quite equal to space % above. But it's pretty close. \def\Egroup{% % To get correct interline space between the last line of the group % and the first line afterwards, we have to propagate \prevdepth. \endgraf % Not \par, as it may have been set to \lisppar. \global\dimen1 = \prevdepth \egroup % End the \vtop. % \dimen0 is the vertical size of the group's box. \dimen0 = \ht\groupbox \advance\dimen0 by \dp\groupbox % \dimen2 is how much space is left on the page (more or less). \dimen2 = \pageheight \advance\dimen2 by -\pagetotal % if the group doesn't fit on the current page, and it's a big big % group, force a page break. \ifdim \dimen0 > \dimen2 \ifdim \pagetotal < \vfilllimit\pageheight \page \fi \fi \box\groupbox \prevdepth = \dimen1 \checkinserts } % % TeX puts in an \escapechar (i.e., `@') at the beginning of the help % message, so this ends up printing `@group can only ...'. % \newhelp\groupinvalidhelp{% group can only be used in environments such as @example,^^J% where each line of input produces a line of output.} % @need space-in-mils % forces a page break if there is not space-in-mils remaining. \newdimen\mil \mil=0.001in % Old definition--didn't work. 
%\parseargdef\need{\par % %% This method tries to make TeX break the page naturally %% if the depth of the box does not fit. %{\baselineskip=0pt% %\vtop to #1\mil{\vfil}\kern -#1\mil\nobreak %\prevdepth=-1000pt %}} \parseargdef\need{% % Ensure vertical mode, so we don't make a big box in the middle of a % paragraph. \par % % If the @need value is less than one line space, it's useless. \dimen0 = #1\mil \dimen2 = \ht\strutbox \advance\dimen2 by \dp\strutbox \ifdim\dimen0 > \dimen2 % % Do a \strut just to make the height of this box be normal, so the % normal leading is inserted relative to the preceding line. % And a page break here is fine. \vtop to #1\mil{\strut\vfil}% % % TeX does not even consider page breaks if a penalty added to the % main vertical list is 10000 or more. But in order to see if the % empty box we just added fits on the page, we must make it consider % page breaks. On the other hand, we don't want to actually break the % page after the empty box. So we use a penalty of 9999. % % There is an extremely small chance that TeX will actually break the % page at this \penalty, if there are no other feasible breakpoints in % sight. (If the user is using lots of big @group commands, which % almost-but-not-quite fill up a page, TeX will have a hard time doing % good page breaking, for example.) However, I could not construct an % example where a page broke at this \penalty; if it happens in a real % document, then we can reconsider our strategy. \penalty9999 % % Back up by the size of the box, whether we did a page break or not. \kern -#1\mil % % Do not allow a page break right after this kern. \nobreak \fi } % @br forces paragraph break (and is undocumented). \let\br = \par % @page forces the start of a new page. % \def\page{\par\vfill\supereject} % @exdent text.... % outputs text on separate line in roman font, starting at standard page margin % This records the amount of indent in the innermost environment. % That's how much \exdent should take out. \newskip\exdentamount % This defn is used inside fill environments such as @defun. \parseargdef\exdent{\hfil\break\hbox{\kern -\exdentamount{\rm#1}}\hfil\break} % This defn is used inside nofill environments such as @example. \parseargdef\nofillexdent{{\advance \leftskip by -\exdentamount \leftline{\hskip\leftskip{\rm#1}}}} % @inmargin{WHICH}{TEXT} puts TEXT in the WHICH margin next to the current % paragraph. For more general purposes, use the \margin insertion % class. WHICH is `l' or `r'. % \newskip\inmarginspacing \inmarginspacing=1cm \def\strutdepth{\dp\strutbox} % \def\doinmargin#1#2{\strut\vadjust{% \nobreak \kern-\strutdepth \vtop to \strutdepth{% \baselineskip=\strutdepth \vss % if you have multiple lines of stuff to put here, you'll need to % make the vbox yourself of the appropriate size. \ifx#1l% \llap{\ignorespaces #2\hskip\inmarginspacing}% \else \rlap{\hskip\hsize \hskip\inmarginspacing \ignorespaces #2}% \fi \null }% }} \def\inleftmargin{\doinmargin l} \def\inrightmargin{\doinmargin r} % % @inmargin{TEXT [, RIGHT-TEXT]} % (if RIGHT-TEXT is given, use TEXT for left page, RIGHT-TEXT for right; % else use TEXT for both). % \def\inmargin#1{\parseinmargin #1,,\finish} \def\parseinmargin#1,#2,#3\finish{% not perfect, but better than nothing. 
\setbox0 = \hbox{\ignorespaces #2}% \ifdim\wd0 > 0pt \def\lefttext{#1}% have both texts \def\righttext{#2}% \else \def\lefttext{#1}% have only one text \def\righttext{#1}% \fi % \ifodd\pageno \def\temp{\inrightmargin\righttext}% odd page -> outside is right margin \else \def\temp{\inleftmargin\lefttext}% \fi \temp } % @include file insert text of that file as input. % \def\include{\parseargusing\filenamecatcodes\includezzz} \def\includezzz#1{% \pushthisfilestack \def\thisfile{#1}% {% \makevalueexpandable \def\temp{\input #1 }% \expandafter }\temp \popthisfilestack } \def\filenamecatcodes{% \catcode`\\=\other \catcode`~=\other \catcode`^=\other \catcode`_=\other \catcode`|=\other \catcode`<=\other \catcode`>=\other \catcode`+=\other \catcode`-=\other } \def\pushthisfilestack{% \expandafter\pushthisfilestackX\popthisfilestack\StackTerm } \def\pushthisfilestackX{% \expandafter\pushthisfilestackY\thisfile\StackTerm } \def\pushthisfilestackY #1\StackTerm #2\StackTerm {% \gdef\popthisfilestack{\gdef\thisfile{#1}\gdef\popthisfilestack{#2}}% } \def\popthisfilestack{\errthisfilestackempty} \def\errthisfilestackempty{\errmessage{Internal error: the stack of filenames is empty.}} \def\thisfile{} % @center line % outputs that line, centered. % \parseargdef\center{% \ifhmode \let\next\centerH \else \let\next\centerV \fi \next{\hfil \ignorespaces#1\unskip \hfil}% } \def\centerH#1{% {% \hfil\break \advance\hsize by -\leftskip \advance\hsize by -\rightskip \line{#1}% \break }% } \def\centerV#1{\line{\kern\leftskip #1\kern\rightskip}} % @sp n outputs n lines of vertical space \parseargdef\sp{\vskip #1\baselineskip} % @comment ...line which is ignored... % @c is the same as @comment % @ignore ... @end ignore is another way to write a comment \def\comment{\begingroup \catcode`\^^M=\other% \catcode`\@=\other \catcode`\{=\other \catcode`\}=\other% \commentxxx} {\catcode`\^^M=\other \gdef\commentxxx#1^^M{\endgroup}} \let\c=\comment % @paragraphindent NCHARS % We'll use ems for NCHARS, close enough. % NCHARS can also be the word `asis' or `none'. % We cannot feasibly implement @paragraphindent asis, though. % \def\asisword{asis} % no translation, these are keywords \def\noneword{none} % \parseargdef\paragraphindent{% \def\temp{#1}% \ifx\temp\asisword \else \ifx\temp\noneword \defaultparindent = 0pt \else \defaultparindent = #1em \fi \fi \parindent = \defaultparindent } % @exampleindent NCHARS % We'll use ems for NCHARS like @paragraphindent. % It seems @exampleindent asis isn't necessary, but % I preserve it to make it similar to @paragraphindent. \parseargdef\exampleindent{% \def\temp{#1}% \ifx\temp\asisword \else \ifx\temp\noneword \lispnarrowing = 0pt \else \lispnarrowing = #1em \fi \fi } % @firstparagraphindent WORD % If WORD is `none', then suppress indentation of the first paragraph % after a section heading. If WORD is `insert', then do indent at such % paragraphs. % % The paragraph indentation is suppressed or not by calling % \suppressfirstparagraphindent, which the sectioning commands do. % We switch the definition of this back and forth according to WORD. % By default, we suppress indentation. 
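%
% For illustration (a hypothetical document fragment; the values are made up,
% not defaults of these macros): a Texinfo source that wants a 5-em paragraph
% indent, a 2-em example indent, and indented first paragraphs after section
% headings might say, near its beginning,
%   @paragraphindent 5
%   @exampleindent 2
%   @firstparagraphindent insert
% If these commands are omitted, the behaviour described above applies; in
% particular, first-paragraph indentation is suppressed.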
% \def\suppressfirstparagraphindent{\dosuppressfirstparagraphindent} \def\insertword{insert} % \parseargdef\firstparagraphindent{% \def\temp{#1}% \ifx\temp\noneword \let\suppressfirstparagraphindent = \dosuppressfirstparagraphindent \else\ifx\temp\insertword \let\suppressfirstparagraphindent = \relax \else \errhelp = \EMsimple \errmessage{Unknown @firstparagraphindent option `\temp'}% \fi\fi } % Here is how we actually suppress indentation. Redefine \everypar to % \kern backwards by \parindent, and then reset itself to empty. % % We also make \indent itself not actually do anything until the next % paragraph. % \gdef\dosuppressfirstparagraphindent{% \gdef\indent{% \restorefirstparagraphindent \indent }% \gdef\noindent{% \restorefirstparagraphindent \noindent }% \global\everypar = {% \kern -\parindent \restorefirstparagraphindent }% } \gdef\restorefirstparagraphindent{% \global \let \indent = \ptexindent \global \let \noindent = \ptexnoindent \global \everypar = {}% } % @asis just yields its argument. Used with @table, for example. % \def\asis#1{#1} % @math outputs its argument in math mode. % % One complication: _ usually means subscripts, but it could also mean % an actual _ character, as in @math{@var{some_variable} + 1}. So make % _ active, and distinguish by seeing if the current family is \slfam, % which is what @var uses. { \catcode\underChar = \active \gdef\mathunderscore{% \catcode\underChar=\active \def_{\ifnum\fam=\slfam \_\else\sb\fi}% } } % Another complication: we want \\ (and @\) to output a \ character. % FYI, plain.tex uses \\ as a temporary control sequence (why?), but % this is not advertised and we don't care. Texinfo does not % otherwise define @\. % % The \mathchar is class=0=ordinary, family=7=ttfam, position=5C=\. \def\mathbackslash{\ifnum\fam=\ttfam \mathchar"075C \else\backslash \fi} % \def\math{% \tex \mathunderscore \let\\ = \mathbackslash \mathactive $\finishmath } \def\finishmath#1{#1$\endgroup} % Close the group opened by \tex. % Some active characters (such as <) are spaced differently in math. % We have to reset their definitions in case the @math was an argument % to a command which sets the catcodes (such as @item or @section). % { \catcode`^ = \active \catcode`< = \active \catcode`> = \active \catcode`+ = \active \gdef\mathactive{% \let^ = \ptexhat \let< = \ptexless \let> = \ptexgtr \let+ = \ptexplus } } % @bullet and @minus need the same treatment as @math, just above. \def\bullet{$\ptexbullet$} \def\minus{$-$} % @dots{} outputs an ellipsis using the current font. % We do .5em per period so that it has the same spacing in a typewriter % font as three actual period characters. % \def\dots{% \leavevmode \hbox to 1.5em{% \hskip 0pt plus 0.25fil .\hfil.\hfil.% \hskip 0pt plus 0.5fil }% } % @enddots{} is an end-of-sentence ellipsis. % \def\enddots{% \dots \spacefactor=\endofsentencespacefactor } % @comma{} is so commas can be inserted into text without messing up % Texinfo's parsing. % \let\comma = , % @refill is a no-op. \let\refill=\relax % If working on a large document in chapters, it is convenient to % be able to disable indexing, cross-referencing, and contents, for test runs. % This is done with @novalidate (before @setfilename). % \newif\iflinks \linkstrue % by default we want the aux files. \let\novalidate = \linksfalse % @setfilename is done at the beginning of every texinfo file. % So open here the files we need to have open while reading the input. % This makes it possible to make a .fmt file for texinfo. 
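%
% For illustration (file and title names here are hypothetical): \setfilename
% is invoked by the @setfilename line that conventionally follows the
% \input texinfo line of a manual, e.g.
%   \input texinfo   @c -*-texinfo-*-
%   @setfilename sample.info
%   @settitle Sample Manual
% The rest of the @setfilename line (the Info file name) is discarded here by
% the trailing \comment in the definition below.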
\def\setfilename{% \fixbackslash % Turn off hack to swallow `\input texinfo'. \iflinks \tryauxfile % Open the new aux file. TeX will close it automatically at exit. \immediate\openout\auxfile=\jobname.aux \fi % \openindices needs to do some work in any case. \openindices \let\setfilename=\comment % Ignore extra @setfilename cmds. % % If texinfo.cnf is present on the system, read it. % Useful for site-wide @afourpaper, etc. \openin 1 texinfo.cnf \ifeof 1 \else \input texinfo.cnf \fi \closein 1 % \comment % Ignore the actual filename. } % Called from \setfilename. % \def\openindices{% \newindex{cp}% \newcodeindex{fn}% \newcodeindex{vr}% \newcodeindex{tp}% \newcodeindex{ky}% \newcodeindex{pg}% } % @bye. \outer\def\bye{\pagealignmacro\tracingstats=1\ptexend} \message{pdf,} % adobe `portable' document format \newcount\tempnum \newcount\lnkcount \newtoks\filename \newcount\filenamelength \newcount\pgn \newtoks\toksA \newtoks\toksB \newtoks\toksC \newtoks\toksD \newbox\boxA \newcount\countA \newif\ifpdf \newif\ifpdfmakepagedest % when pdftex is run in dvi mode, \pdfoutput is defined (so \pdfoutput=1 % can be set). So we test for \relax and 0 as well as \undefined, % borrowed from ifpdf.sty. \ifx\pdfoutput\undefined \else \ifx\pdfoutput\relax \else \ifcase\pdfoutput \else \pdftrue \fi \fi \fi % PDF uses PostScript string constants for the names of xref targets, % for display in the outlines, and in other places. Thus, we have to % double any backslashes. Otherwise, a name like "\node" will be % interpreted as a newline (\n), followed by o, d, e. Not good. % http://www.ntg.nl/pipermail/ntg-pdftex/2004-July/000654.html % (and related messages, the final outcome is that it is up to the TeX % user to double the backslashes and otherwise make the string valid, so % that's what we do). % double active backslashes. % {\catcode`\@=0 \catcode`\\=\active @gdef@activebackslash{@catcode`@\=@active @otherbackslash} @gdef@activebackslashdouble{% @catcode@backChar=@active @let\=@doublebackslash} } % To handle parens, we must adopt a different approach, since parens are % not active characters. hyperref.dtx (which has the same problem as % us) handles it with this amazing macro to replace tokens. I've % tinkered with it a little for texinfo, but it's definitely from there. % % #1 is the tokens to replace. % #2 is the replacement. % #3 is the control sequence with the string. % \def\HyPsdSubst#1#2#3{% \def\HyPsdReplace##1#1##2\END{% ##1% \ifx\\##2\\% \else #2% \HyReturnAfterFi{% \HyPsdReplace##2\END }% \fi }% \xdef#3{\expandafter\HyPsdReplace#3#1\END}% } \long\def\HyReturnAfterFi#1\fi{\fi#1} % #1 is a control sequence in which to do the replacements. \def\backslashparens#1{% \xdef#1{#1}% redefine it as its expansion; the definition is simply % \lastnode when called from \setref -> \pdfmkdest. \HyPsdSubst{(}{\backslashlparen}{#1}% \HyPsdSubst{)}{\backslashrparen}{#1}% } {\catcode\exclamChar = 0 \catcode\backChar = \other !gdef!backslashlparen{\(}% !gdef!backslashrparen{\)}% } \ifpdf \input pdfcolor \pdfcatalog{/PageMode /UseOutlines}% \def\dopdfimage#1#2#3{% \def\imagewidth{#2}% \def\imageheight{#3}% % without \immediate, pdftex seg faults when the same image is % included twice. (Version 3.14159-pre-1.0-unofficial-20010704.)
\ifnum\pdftexversion < 14 \immediate\pdfimage \else \immediate\pdfximage \fi \ifx\empty\imagewidth\else width \imagewidth \fi \ifx\empty\imageheight\else height \imageheight \fi \ifnum\pdftexversion<13 #1.pdf% \else {#1.pdf}% \fi \ifnum\pdftexversion < 14 \else \pdfrefximage \pdflastximage \fi} \def\pdfmkdest#1{{% % We have to set dummies so commands such as @code, and characters % such as \, aren't expanded when present in a section title. \atdummies \activebackslashdouble \def\pdfdestname{#1}% \backslashparens\pdfdestname \pdfdest name{\pdfdestname} xyz% }}% % % used to mark target names; must be expandable. \def\pdfmkpgn#1{#1}% % \let\linkcolor = \Blue % was Cyan, but that seems light? \def\endlink{\Black\pdfendlink} % Adding outlines to PDF; macros for calculating structure of outlines % come from Petr Olsak \def\expnumber#1{\expandafter\ifx\csname#1\endcsname\relax 0% \else \csname#1\endcsname \fi} \def\advancenumber#1{\tempnum=\expnumber{#1}\relax \advance\tempnum by 1 \expandafter\xdef\csname#1\endcsname{\the\tempnum}} % % #1 is the section text, which is what will be displayed in the % outline by the pdf viewer. #2 is the pdf expression for the number % of subentries (or empty, for subsubsections). #3 is the node text, % which might be empty if this toc entry had no corresponding node. % #4 is the page number % \def\dopdfoutline#1#2#3#4{% % Generate a link to the node text if that exists; else, use the % page number. We could generate a destination for the section % text in the case where a section has no node, but it doesn't % seem worth the trouble, since most documents are normally structured. \def\pdfoutlinedest{#3}% \ifx\pdfoutlinedest\empty \def\pdfoutlinedest{#4}% \else % Doubled backslashes in the name. {\activebackslashdouble \xdef\pdfoutlinedest{#3}% \backslashparens\pdfoutlinedest}% \fi % % Also double the backslashes in the display string. {\activebackslashdouble \xdef\pdfoutlinetext{#1}% \backslashparens\pdfoutlinetext}% % \pdfoutline goto name{\pdfmkpgn{\pdfoutlinedest}}#2{\pdfoutlinetext}% } % \def\pdfmakeoutlines{% \begingroup % Thanh's hack / proper braces in bookmarks \edef\mylbrace{\iftrue \string{\else}\fi}\let\{=\mylbrace \edef\myrbrace{\iffalse{\else\string}\fi}\let\}=\myrbrace % % Read toc silently, to get counts of subentries for \pdfoutline. \def\numchapentry##1##2##3##4{% \def\thischapnum{##2}% \def\thissecnum{0}% \def\thissubsecnum{0}% }% \def\numsecentry##1##2##3##4{% \advancenumber{chap\thischapnum}% \def\thissecnum{##2}% \def\thissubsecnum{0}% }% \def\numsubsecentry##1##2##3##4{% \advancenumber{sec\thissecnum}% \def\thissubsecnum{##2}% }% \def\numsubsubsecentry##1##2##3##4{% \advancenumber{subsec\thissubsecnum}% }% \def\thischapnum{0}% \def\thissecnum{0}% \def\thissubsecnum{0}% % % use \def rather than \let here because we redefine \chapentry et % al. a second time, below. \def\appentry{\numchapentry}% \def\appsecentry{\numsecentry}% \def\appsubsecentry{\numsubsecentry}% \def\appsubsubsecentry{\numsubsubsecentry}% \def\unnchapentry{\numchapentry}% \def\unnsecentry{\numsecentry}% \def\unnsubsecentry{\numsubsecentry}% \def\unnsubsubsecentry{\numsubsubsecentry}% \readdatafile{toc}% % % Read toc second time, this time actually producing the outlines. % The `-' means take the \expnumber as the absolute number of % subentries, which we calculated on our first read of the .toc above. % % We use the node names as the destinations. 
\def\numchapentry##1##2##3##4{% \dopdfoutline{##1}{count-\expnumber{chap##2}}{##3}{##4}}% \def\numsecentry##1##2##3##4{% \dopdfoutline{##1}{count-\expnumber{sec##2}}{##3}{##4}}% \def\numsubsecentry##1##2##3##4{% \dopdfoutline{##1}{count-\expnumber{subsec##2}}{##3}{##4}}% \def\numsubsubsecentry##1##2##3##4{% count is always zero \dopdfoutline{##1}{}{##3}{##4}}% % % PDF outlines are displayed using system fonts, instead of % document fonts. Therefore we cannot use special characters, % since the encoding is unknown. For example, the eogonek from % Latin 2 (0xea) gets translated to a | character. Info from % Staszek Wawrykiewicz, 19 Jan 2004 04:09:24 +0100. % % xx to do this right, we have to translate 8-bit characters to % their "best" equivalent, based on the @documentencoding. Right % now, I guess we'll just let the pdf reader have its way. \indexnofonts \setupdatafile \activebackslash \input \jobname.toc \endgroup } % \def\skipspaces#1{\def\PP{#1}\def\D{|}% \ifx\PP\D\let\nextsp\relax \else\let\nextsp\skipspaces \ifx\p\space\else\addtokens{\filename}{\PP}% \advance\filenamelength by 1 \fi \fi \nextsp} \def\getfilename#1{\filenamelength=0\expandafter\skipspaces#1|\relax} \ifnum\pdftexversion < 14 \let \startlink \pdfannotlink \else \let \startlink \pdfstartlink \fi \def\pdfurl#1{% \begingroup \normalturnoffactive\def\@{@}% \makevalueexpandable \leavevmode\Red \startlink attr{/Border [0 0 0]}% user{/Subtype /Link /A << /S /URI /URI (#1) >>}% \endgroup} \def\pdfgettoks#1.{\setbox\boxA=\hbox{\toksA={#1.}\toksB={}\maketoks}} \def\addtokens#1#2{\edef\addtoks{\noexpand#1={\the#1#2}}\addtoks} \def\adn#1{\addtokens{\toksC}{#1}\global\countA=1\let\next=\maketoks} \def\poptoks#1#2|ENDTOKS|{\let\first=#1\toksD={#1}\toksA={#2}} \def\maketoks{% \expandafter\poptoks\the\toksA|ENDTOKS|\relax \ifx\first0\adn0 \else\ifx\first1\adn1 \else\ifx\first2\adn2 \else\ifx\first3\adn3 \else\ifx\first4\adn4 \else\ifx\first5\adn5 \else\ifx\first6\adn6 \else\ifx\first7\adn7 \else\ifx\first8\adn8 \else\ifx\first9\adn9 \else \ifnum0=\countA\else\makelink\fi \ifx\first.\let\next=\done\else \let\next=\maketoks \addtokens{\toksB}{\the\toksD} \ifx\first,\addtokens{\toksB}{\space}\fi \fi \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi \next} \def\makelink{\addtokens{\toksB}% {\noexpand\pdflink{\the\toksC}}\toksC={}\global\countA=0} \def\pdflink#1{% \startlink attr{/Border [0 0 0]} goto name{\pdfmkpgn{#1}} \linkcolor #1\endlink} \def\done{\edef\st{\global\noexpand\toksA={\the\toksB}}\st} \else \let\pdfmkdest = \gobble \let\pdfurl = \gobble \let\endlink = \relax \let\linkcolor = \relax \let\pdfmakeoutlines = \relax \fi % \ifx\pdfoutput \message{fonts,} % Change the current font style to #1, remembering it in \curfontstyle. % For now, we do not accumulate font styles: @b{@i{foo}} prints foo in % italics, not bold italics. % \def\setfontstyle#1{% \def\curfontstyle{#1}% not as a control sequence, because we are \edef'd. \csname ten#1\endcsname % change the current font } % Select #1 fonts with the current style. % \def\selectfonts#1{\csname #1fonts\endcsname \csname\curfontstyle\endcsname} \def\rm{\fam=0 \setfontstyle{rm}} \def\it{\fam=\itfam \setfontstyle{it}} \def\sl{\fam=\slfam \setfontstyle{sl}} \def\bf{\fam=\bffam \setfontstyle{bf}}\def\bfstylename{bf} \def\tt{\fam=\ttfam \setfontstyle{tt}} % Texinfo sort of supports the sans serif font style, which plain TeX does not. % So we set up a \sf. \newfam\sffam \def\sf{\fam=\sffam \setfontstyle{sf}} \let\li = \sf % Sometimes we call it \li, not \sf. % We don't need math for this font style. 
\def\ttsl{\setfontstyle{ttsl}} % Default leading. \newdimen\textleading \textleading = 13.2pt % Set the baselineskip to #1, and the lineskip and strut size % correspondingly. There is no deep meaning behind these magic numbers % used as factors; they just match (closely enough) what Knuth defined. % \def\lineskipfactor{.08333} \def\strutheightpercent{.70833} \def\strutdepthpercent {.29167} % \def\setleading#1{% \normalbaselineskip = #1\relax \normallineskip = \lineskipfactor\normalbaselineskip \normalbaselines \setbox\strutbox =\hbox{% \vrule width0pt height\strutheightpercent\baselineskip depth \strutdepthpercent \baselineskip }% } % Set the font macro #1 to the font named #2, adding on the % specified font prefix (normally `cm'). % #3 is the font's design size, #4 is a scale factor \def\setfont#1#2#3#4{\font#1=\fontprefix#2#3 scaled #4} % Use cm as the default font prefix. % To specify the font prefix, you must define \fontprefix % before you read in texinfo.tex. \ifx\fontprefix\undefined \def\fontprefix{cm} \fi % Support font families that don't use the same naming scheme as CM. \def\rmshape{r} \def\rmbshape{bx} %where the normal face is bold \def\bfshape{b} \def\bxshape{bx} \def\ttshape{tt} \def\ttbshape{tt} \def\ttslshape{sltt} \def\itshape{ti} \def\itbshape{bxti} \def\slshape{sl} \def\slbshape{bxsl} \def\sfshape{ss} \def\sfbshape{ss} \def\scshape{csc} \def\scbshape{csc} % Text fonts (11.2pt, magstep1). \def\textnominalsize{11pt} \edef\mainmagstep{\magstephalf} \setfont\textrm\rmshape{10}{\mainmagstep} \setfont\texttt\ttshape{10}{\mainmagstep} \setfont\textbf\bfshape{10}{\mainmagstep} \setfont\textit\itshape{10}{\mainmagstep} \setfont\textsl\slshape{10}{\mainmagstep} \setfont\textsf\sfshape{10}{\mainmagstep} \setfont\textsc\scshape{10}{\mainmagstep} \setfont\textttsl\ttslshape{10}{\mainmagstep} \font\texti=cmmi10 scaled \mainmagstep \font\textsy=cmsy10 scaled \mainmagstep % A few fonts for @defun names and args. \setfont\defbf\bfshape{10}{\magstep1} \setfont\deftt\ttshape{10}{\magstep1} \setfont\defttsl\ttslshape{10}{\magstep1} \def\df{\let\tentt=\deftt \let\tenbf = \defbf \let\tenttsl=\defttsl \bf} % Fonts for indices, footnotes, small examples (9pt). \def\smallnominalsize{9pt} \setfont\smallrm\rmshape{9}{1000} \setfont\smalltt\ttshape{9}{1000} \setfont\smallbf\bfshape{10}{900} \setfont\smallit\itshape{9}{1000} \setfont\smallsl\slshape{9}{1000} \setfont\smallsf\sfshape{9}{1000} \setfont\smallsc\scshape{10}{900} \setfont\smallttsl\ttslshape{10}{900} \font\smalli=cmmi9 \font\smallsy=cmsy9 % Fonts for small examples (8pt). \def\smallernominalsize{8pt} \setfont\smallerrm\rmshape{8}{1000} \setfont\smallertt\ttshape{8}{1000} \setfont\smallerbf\bfshape{10}{800} \setfont\smallerit\itshape{8}{1000} \setfont\smallersl\slshape{8}{1000} \setfont\smallersf\sfshape{8}{1000} \setfont\smallersc\scshape{10}{800} \setfont\smallerttsl\ttslshape{10}{800} \font\smalleri=cmmi8 \font\smallersy=cmsy8 % Fonts for title page (20.4pt): \def\titlenominalsize{20pt} \setfont\titlerm\rmbshape{12}{\magstep3} \setfont\titleit\itbshape{10}{\magstep4} \setfont\titlesl\slbshape{10}{\magstep4} \setfont\titlett\ttbshape{12}{\magstep3} \setfont\titlettsl\ttslshape{10}{\magstep4} \setfont\titlesf\sfbshape{17}{\magstep1} \let\titlebf=\titlerm \setfont\titlesc\scbshape{10}{\magstep4} \font\titlei=cmmi12 scaled \magstep3 \font\titlesy=cmsy10 scaled \magstep4 \def\authorrm{\secrm} \def\authortt{\sectt} % Chapter (and unnumbered) fonts (17.28pt). 
\def\chapnominalsize{17pt} \setfont\chaprm\rmbshape{12}{\magstep2} \setfont\chapit\itbshape{10}{\magstep3} \setfont\chapsl\slbshape{10}{\magstep3} \setfont\chaptt\ttbshape{12}{\magstep2} \setfont\chapttsl\ttslshape{10}{\magstep3} \setfont\chapsf\sfbshape{17}{1000} \let\chapbf=\chaprm \setfont\chapsc\scbshape{10}{\magstep3} \font\chapi=cmmi12 scaled \magstep2 \font\chapsy=cmsy10 scaled \magstep3 % Section fonts (14.4pt). \def\secnominalsize{14pt} \setfont\secrm\rmbshape{12}{\magstep1} \setfont\secit\itbshape{10}{\magstep2} \setfont\secsl\slbshape{10}{\magstep2} \setfont\sectt\ttbshape{12}{\magstep1} \setfont\secttsl\ttslshape{10}{\magstep2} \setfont\secsf\sfbshape{12}{\magstep1} \let\secbf\secrm \setfont\secsc\scbshape{10}{\magstep2} \font\seci=cmmi12 scaled \magstep1 \font\secsy=cmsy10 scaled \magstep2 % Subsection fonts (13.15pt). \def\ssecnominalsize{13pt} \setfont\ssecrm\rmbshape{12}{\magstephalf} \setfont\ssecit\itbshape{10}{1315} \setfont\ssecsl\slbshape{10}{1315} \setfont\ssectt\ttbshape{12}{\magstephalf} \setfont\ssecttsl\ttslshape{10}{1315} \setfont\ssecsf\sfbshape{12}{\magstephalf} \let\ssecbf\ssecrm \setfont\ssecsc\scbshape{10}{1315} \font\sseci=cmmi12 scaled \magstephalf \font\ssecsy=cmsy10 scaled 1315 % Reduced fonts for @acro in text (10pt). \def\reducednominalsize{10pt} \setfont\reducedrm\rmshape{10}{1000} \setfont\reducedtt\ttshape{10}{1000} \setfont\reducedbf\bfshape{10}{1000} \setfont\reducedit\itshape{10}{1000} \setfont\reducedsl\slshape{10}{1000} \setfont\reducedsf\sfshape{10}{1000} \setfont\reducedsc\scshape{10}{1000} \setfont\reducedttsl\ttslshape{10}{1000} \font\reducedi=cmmi10 \font\reducedsy=cmsy10 % In order for the font changes to affect most math symbols and letters, % we have to define the \textfont of the standard families. Since % texinfo doesn't allow for producing subscripts and superscripts except % in the main text, we don't bother to reset \scriptfont and % \scriptscriptfont (which would also require loading a lot more fonts). % \def\resetmathfonts{% \textfont0=\tenrm \textfont1=\teni \textfont2=\tensy \textfont\itfam=\tenit \textfont\slfam=\tensl \textfont\bffam=\tenbf \textfont\ttfam=\tentt \textfont\sffam=\tensf } % The font-changing commands redefine the meanings of \tenSTYLE, instead % of just \STYLE. We do this because \STYLE needs to also set the % current \fam for math mode. Our \STYLE (e.g., \rm) commands hardwire % \tenSTYLE to set the current font. % % Each font-changing command also sets the names \lsize (one size lower) % and \lllsize (three sizes lower). These relative commands are used in % the LaTeX logo and acronyms. % % This all needs generalizing, badly. 
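%
% A worked example (for orientation only): with \fontprefix left at its
% default `cm' and \rmbshape being `bx',
%   \setfont\secrm\rmbshape{12}{\magstep1}
% above amounts to
%   \font\secrm = cmbx12 scaled \magstep1
% Likewise, inside \secfonts below, \lsize is `subsec' and \lllsize is
% `reduced', so \selectfonts\lllsize (used, e.g., for the raised A in the
% @LaTeX{} logo) switches to the reduced fonts while the section fonts are
% current.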
% \def\textfonts{% \let\tenrm=\textrm \let\tenit=\textit \let\tensl=\textsl \let\tenbf=\textbf \let\tentt=\texttt \let\smallcaps=\textsc \let\tensf=\textsf \let\teni=\texti \let\tensy=\textsy \let\tenttsl=\textttsl \def\curfontsize{text}% \def\lsize{reduced}\def\lllsize{smaller}% \resetmathfonts \setleading{\textleading}} \def\titlefonts{% \let\tenrm=\titlerm \let\tenit=\titleit \let\tensl=\titlesl \let\tenbf=\titlebf \let\tentt=\titlett \let\smallcaps=\titlesc \let\tensf=\titlesf \let\teni=\titlei \let\tensy=\titlesy \let\tenttsl=\titlettsl \def\curfontsize{title}% \def\lsize{chap}\def\lllsize{subsec}% \resetmathfonts \setleading{25pt}} \def\titlefont#1{{\titlefonts\rm #1}} \def\chapfonts{% \let\tenrm=\chaprm \let\tenit=\chapit \let\tensl=\chapsl \let\tenbf=\chapbf \let\tentt=\chaptt \let\smallcaps=\chapsc \let\tensf=\chapsf \let\teni=\chapi \let\tensy=\chapsy \let\tenttsl=\chapttsl \def\curfontsize{chap}% \def\lsize{sec}\def\lllsize{text}% \resetmathfonts \setleading{19pt}} \def\secfonts{% \let\tenrm=\secrm \let\tenit=\secit \let\tensl=\secsl \let\tenbf=\secbf \let\tentt=\sectt \let\smallcaps=\secsc \let\tensf=\secsf \let\teni=\seci \let\tensy=\secsy \let\tenttsl=\secttsl \def\curfontsize{sec}% \def\lsize{subsec}\def\lllsize{reduced}% \resetmathfonts \setleading{16pt}} \def\subsecfonts{% \let\tenrm=\ssecrm \let\tenit=\ssecit \let\tensl=\ssecsl \let\tenbf=\ssecbf \let\tentt=\ssectt \let\smallcaps=\ssecsc \let\tensf=\ssecsf \let\teni=\sseci \let\tensy=\ssecsy \let\tenttsl=\ssecttsl \def\curfontsize{ssec}% \def\lsize{text}\def\lllsize{small}% \resetmathfonts \setleading{15pt}} \let\subsubsecfonts = \subsecfonts \def\reducedfonts{% \let\tenrm=\reducedrm \let\tenit=\reducedit \let\tensl=\reducedsl \let\tenbf=\reducedbf \let\tentt=\reducedtt \let\reducedcaps=\reducedsc \let\tensf=\reducedsf \let\teni=\reducedi \let\tensy=\reducedsy \let\tenttsl=\reducedttsl \def\curfontsize{reduced}% \def\lsize{small}\def\lllsize{smaller}% \resetmathfonts \setleading{10.5pt}} \def\smallfonts{% \let\tenrm=\smallrm \let\tenit=\smallit \let\tensl=\smallsl \let\tenbf=\smallbf \let\tentt=\smalltt \let\smallcaps=\smallsc \let\tensf=\smallsf \let\teni=\smalli \let\tensy=\smallsy \let\tenttsl=\smallttsl \def\curfontsize{small}% \def\lsize{smaller}\def\lllsize{smaller}% \resetmathfonts \setleading{10.5pt}} \def\smallerfonts{% \let\tenrm=\smallerrm \let\tenit=\smallerit \let\tensl=\smallersl \let\tenbf=\smallerbf \let\tentt=\smallertt \let\smallcaps=\smallersc \let\tensf=\smallersf \let\teni=\smalleri \let\tensy=\smallersy \let\tenttsl=\smallerttsl \def\curfontsize{smaller}% \def\lsize{smaller}\def\lllsize{smaller}% \resetmathfonts \setleading{9.5pt}} % Set the fonts to use with the @small... environments. \let\smallexamplefonts = \smallfonts % About \smallexamplefonts. If we use \smallfonts (9pt), @smallexample % can fit this many characters: % 8.5x11=86 smallbook=72 a4=90 a5=69 % If we use \scriptfonts (8pt), then we can fit this many characters: % 8.5x11=90+ smallbook=80 a4=90+ a5=77 % For me, subjectively, the few extra characters that fit aren't worth % the additional smallness of 8pt. So I'm making the default 9pt. % % By the way, for comparison, here's what fits with @example (10pt): % 8.5x11=71 smallbook=60 a4=75 a5=58 % % I wish the USA used A4 paper. % --karl, 24jan03. % Set up the default fonts, so we can use them for creating boxes. % \textfonts \rm % Define these so they can be easily changed for other fonts. 
\def\angleleft{$\langle$} \def\angleright{$\rangle$} % Count depth in font-changes, for error checks \newcount\fontdepth \fontdepth=0 % Fonts for short table of contents. \setfont\shortcontrm\rmshape{12}{1000} \setfont\shortcontbf\bfshape{10}{\magstep1} % no cmb12 \setfont\shortcontsl\slshape{12}{1000} \setfont\shortconttt\ttshape{12}{1000} %% Add scribe-like font environments, plus @l for inline lisp (usually sans %% serif) and @ii for TeX italic % \smartitalic{ARG} outputs arg in italics, followed by an italic correction % unless the following character is such as not to need one. \def\smartitalicx{\ifx\next,\else\ifx\next-\else\ifx\next.\else \ptexslash\fi\fi\fi} \def\smartslanted#1{{\ifusingtt\ttsl\sl #1}\futurelet\next\smartitalicx} \def\smartitalic#1{{\ifusingtt\ttsl\it #1}\futurelet\next\smartitalicx} % like \smartslanted except unconditionally uses \ttsl. % @var is set to this for defun arguments. \def\ttslanted#1{{\ttsl #1}\futurelet\next\smartitalicx} % like \smartslanted except unconditionally use \sl. We never want % ttsl for book titles, do we? \def\cite#1{{\sl #1}\futurelet\next\smartitalicx} \let\i=\smartitalic \let\slanted=\smartslanted \let\var=\smartslanted \let\dfn=\smartslanted \let\emph=\smartitalic % @b, explicit bold. \def\b#1{{\bf #1}} \let\strong=\b % @sansserif, explicit sans. \def\sansserif#1{{\sf #1}} % We can't just use \exhyphenpenalty, because that only has effect at % the end of a paragraph. Restore normal hyphenation at the end of the % group within which \nohyphenation is presumably called. % \def\nohyphenation{\hyphenchar\font = -1 \aftergroup\restorehyphenation} \def\restorehyphenation{\hyphenchar\font = `- } % Set sfcode to normal for the chars that usually have another value. % Can't use plain's \frenchspacing because it uses the `\x notation, and % sometimes \x has an active definition that messes things up. % \catcode`@=11 \def\plainfrenchspacing{% \sfcode\dotChar =\@m \sfcode\questChar=\@m \sfcode\exclamChar=\@m \sfcode\colonChar=\@m \sfcode\semiChar =\@m \sfcode\commaChar =\@m \def\endofsentencespacefactor{1000}% for @. and friends } \def\plainnonfrenchspacing{% \sfcode`\.3000\sfcode`\?3000\sfcode`\!3000 \sfcode`\:2000\sfcode`\;1500\sfcode`\,1250 \def\endofsentencespacefactor{3000}% for @. and friends } \catcode`@=\other \def\endofsentencespacefactor{3000}% default \def\t#1{% {\tt \rawbackslash \plainfrenchspacing #1}% \null } \def\samp#1{`\tclose{#1}'\null} \setfont\keyrm\rmshape{8}{1000} \font\keysy=cmsy9 \def\key#1{{\keyrm\textfont2=\keysy \leavevmode\hbox{% \raise0.4pt\hbox{\angleleft}\kern-.08em\vtop{% \vbox{\hrule\kern-0.4pt \hbox{\raise0.4pt\hbox{\vphantom{\angleleft}}#1}}% \kern-0.4pt\hrule}% \kern-.06em\raise0.4pt\hbox{\angleright}}}} % The old definition, with no lozenge: %\def\key #1{{\ttsl \nohyphenation \uppercase{#1}}\null} \def\ctrl #1{{\tt \rawbackslash \hat}#1} % @file, @option are the same as @samp. \let\file=\samp \let\option=\samp % @code is a modification of @t, % which makes spaces the same size as normal in the surrounding text. \def\tclose#1{% {% % Change normal interword space to be same as for the current font. \spaceskip = \fontdimen2\font % % Switch to typewriter. \tt % % But `\ ' produces the large typewriter interword space. \def\ {{\spaceskip = 0pt{} }}% % % Turn off hyphenation. \nohyphenation % \rawbackslash \plainfrenchspacing #1% }% \null } % We *must* turn on hyphenation at `-' and `_' in @code. % Otherwise, it is too hard to avoid overfull hboxes % in the Emacs manual, the Library manual, etc. 
% Unfortunately, TeX uses one parameter (\hyphenchar) to control % both hyphenation at - and hyphenation within words. % We must therefore turn them both off (\tclose does that) % and arrange explicitly to hyphenate at a dash. % -- rms. { \catcode`\-=\active \catcode`\_=\active % \global\def\code{\begingroup \catcode`\-=\active \catcode`\_=\active \ifallowcodebreaks \let-\codedash \let_\codeunder \else \let-\realdash \let_\realunder \fi \codex } } \def\realdash{-} \def\codedash{-\discretionary{}{}{}} \def\codeunder{% % this is all so @math{@code{var_name}+1} can work. In math mode, _ % is "active" (mathcode"8000) and \normalunderscore (or \char95, etc.) % will therefore expand the active definition of _, which is us % (inside @code that is), therefore an endless loop. \ifusingtt{\ifmmode \mathchar"075F % class 0=ordinary, family 7=ttfam, pos 0x5F=_. \else\normalunderscore \fi \discretionary{}{}{}}% {\_}% } \def\codex #1{\tclose{#1}\endgroup} % An additional complication: the above will allow breaks after, e.g., % each of the four underscores in __typeof__. This is undesirable in % some manuals, especially if they don't have long identifiers in % general. @allowcodebreaks provides a way to control this. % \newif\ifallowcodebreaks \allowcodebreakstrue \def\keywordtrue{true} \def\keywordfalse{false} \parseargdef\allowcodebreaks{% \def\txiarg{#1}% \ifx\txiarg\keywordtrue \allowcodebreakstrue \else\ifx\txiarg\keywordfalse \allowcodebreaksfalse \else \errhelp = \EMsimple \errmessage{Unknown @allowcodebreaks option `\txiarg'}% \fi\fi } % @kbd is like @code, except that if the argument is just one @key command, % then @kbd has no effect. % @kbdinputstyle -- arg is `distinct' (@kbd uses slanted tty font always), % `example' (@kbd uses ttsl only inside of @example and friends), % or `code' (@kbd uses normal tty font always). \parseargdef\kbdinputstyle{% \def\txiarg{#1}% \ifx\txiarg\worddistinct \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl}% \else\ifx\txiarg\wordexample \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\tt}% \else\ifx\txiarg\wordcode \gdef\kbdexamplefont{\tt}\gdef\kbdfont{\tt}% \else \errhelp = \EMsimple \errmessage{Unknown @kbdinputstyle option `\txiarg'}% \fi\fi\fi } \def\worddistinct{distinct} \def\wordexample{example} \def\wordcode{code} % Default is `distinct.' \kbdinputstyle distinct \def\xkey{\key} \def\kbdfoo#1#2#3\par{\def\one{#1}\def\three{#3}\def\threex{??}% \ifx\one\xkey\ifx\threex\three \key{#2}% \else{\tclose{\kbdfont\look}}\fi \else{\tclose{\kbdfont\look}}\fi} % For @indicateurl, @env, @command quotes seem unnecessary, so use \code. \let\indicateurl=\code \let\env=\code \let\command=\code % @uref (abbreviation for `urlref') takes an optional (comma-separated) % second argument specifying the text to display and an optional third % arg as text to display instead of (rather than in addition to) the url % itself. First (mandatory) arg is the url. Perhaps eventually put in % a hypertex \special here. % \def\uref#1{\douref #1,,,\finish} \def\douref#1,#2,#3,#4\finish{\begingroup \unsepspaces \pdfurl{#1}% \setbox0 = \hbox{\ignorespaces #3}% \ifdim\wd0 > 0pt \unhbox0 % third arg given, show only that \else \setbox0 = \hbox{\ignorespaces #2}% \ifdim\wd0 > 0pt \ifpdf \unhbox0 % PDF: 2nd arg given, show only it \else \unhbox0\ (\code{#1})% DVI: 2nd arg given, show both it and url \fi \else \code{#1}% only url given, so show it \fi \fi \endlink \endgroup} % @url synonym for @uref, since that's how everyone uses it. % \let\url=\uref % rms does not like angle brackets --karl, 17may97. 
% So now @email is just like @uref, unless we are pdf. % %\def\email#1{\angleleft{\tt #1}\angleright} \ifpdf \def\email#1{\doemail#1,,\finish} \def\doemail#1,#2,#3\finish{\begingroup \unsepspaces \pdfurl{mailto:#1}% \setbox0 = \hbox{\ignorespaces #2}% \ifdim\wd0>0pt\unhbox0\else\code{#1}\fi \endlink \endgroup} \else \let\email=\uref \fi % Check if we are currently using a typewriter font. Since all the % Computer Modern typewriter fonts have zero interword stretch (and % shrink), and it is reasonable to expect all typewriter fonts to have % this property, we can check that font parameter. % \def\ifmonospace{\ifdim\fontdimen3\font=0pt } % Typeset a dimension, e.g., `in' or `pt'. The only reason for the % argument is to make the input look right: @dmn{pt} instead of @dmn{}pt. % \def\dmn#1{\thinspace #1} \def\kbd#1{\def\look{#1}\expandafter\kbdfoo\look??\par} % @l was never documented to mean ``switch to the Lisp font'', % and it is not used as such in any manual I can find. We need it for % Polish suppressed-l. --karl, 22sep96. %\def\l#1{{\li #1}\null} % Explicit font changes: @r, @sc, undocumented @ii. \def\r#1{{\rm #1}} % roman font \def\sc#1{{\smallcaps#1}} % smallcaps font \def\ii#1{{\it #1}} % italic font % @acronym for "FBI", "NATO", and the like. % We print this one point size smaller, since it's intended for % all-uppercase. % \def\acronym#1{\doacronym #1,,\finish} \def\doacronym#1,#2,#3\finish{% {\selectfonts\lsize #1}% \def\temp{#2}% \ifx\temp\empty \else \space ({\unsepspaces \ignorespaces \temp \unskip})% \fi } % @abbr for "Comput. J." and the like. % No font change, but don't do end-of-sentence spacing. % \def\abbr#1{\doabbr #1,,\finish} \def\doabbr#1,#2,#3\finish{% {\plainfrenchspacing #1}% \def\temp{#2}% \ifx\temp\empty \else \space ({\unsepspaces \ignorespaces \temp \unskip})% \fi } % @pounds{} is a sterling sign, which Knuth put in the CM italic font. % \def\pounds{{\it\$}} % @euro{} comes from a separate font, depending on the current style. % We use the free feym* fonts from the eurosym package by Henrik % Theiling, which support regular, slanted, bold and bold slanted (and % "outlined" (blackboard bold, sort of) versions, which we don't need). % It is available from http://www.ctan.org/tex-archive/fonts/eurosym. % % Although only regular is the truly official Euro symbol, we ignore % that. The Euro is designed to be slightly taller than the regular % font height. % % feymr - regular % feymo - slanted % feybr - bold % feybo - bold slanted % % There is no good (free) typewriter version, to my knowledge. % A feymr10 euro is ~7.3pt wide, while a normal cmtt10 char is ~5.25pt wide. % Hmm. % % Also doesn't work in math. Do we need to do math with euro symbols? % Hope not. % % \def\euro{{\eurofont e}} \def\eurofont{% % We set the font at each command, rather than predefining it in % \textfonts and the other font-switching commands, so that % installations which never need the symbol don't have to have the % font installed. % % There is only one designed size (nominal 10pt), so we always scale % that to the current nominal size. % % By the way, simply using "at 1em" works for cmr10 and the like, but % does not work for cmbx10 and other extended/shrunken fonts. % \def\eurosize{\csname\curfontsize nominalsize\endcsname}% % \ifx\curfontstyle\bfstylename % bold: \font\thiseurofont = \ifusingit{feybo10}{feybr10} at \eurosize \else % regular: \font\thiseurofont = \ifusingit{feymo10}{feymr10} at \eurosize \fi \thiseurofont } % @registeredsymbol - R in a circle.
The font for the R should really % be smaller yet, but lllsize is the best we can do for now. % Adapted from the plain.tex definition of \copyright. % \def\registeredsymbol{% $^{{\ooalign{\hfil\raise.07ex\hbox{\selectfonts\lllsize R}% \hfil\crcr\Orb}}% }$% } % Laurent Siebenmann reports \Orb undefined with: % Textures 1.7.7 (preloaded format=plain 93.10.14) (68K) 16 APR 2004 02:38 % so we'll define it if necessary. % \ifx\Orb\undefined \def\Orb{\mathhexbox20D} \fi \message{page headings,} \newskip\titlepagetopglue \titlepagetopglue = 1.5in \newskip\titlepagebottomglue \titlepagebottomglue = 2pc % First the title page. Must do @settitle before @titlepage. \newif\ifseenauthor \newif\iffinishedtitlepage % Do an implicit @contents or @shortcontents after @end titlepage if the % user says @setcontentsaftertitlepage or @setshortcontentsaftertitlepage. % \newif\ifsetcontentsaftertitlepage \let\setcontentsaftertitlepage = \setcontentsaftertitlepagetrue \newif\ifsetshortcontentsaftertitlepage \let\setshortcontentsaftertitlepage = \setshortcontentsaftertitlepagetrue \parseargdef\shorttitlepage{\begingroup\hbox{}\vskip 1.5in \chaprm \centerline{#1}% \endgroup\page\hbox{}\page} \envdef\titlepage{% % Open one extra group, as we want to close it in the middle of \Etitlepage. \begingroup \parindent=0pt \textfonts % Leave some space at the very top of the page. \vglue\titlepagetopglue % No rule at page bottom unless we print one at the top with @title. \finishedtitlepagetrue % % Most title ``pages'' are actually two pages long, with space % at the top of the second. We don't want the ragged left on the second. \let\oldpage = \page \def\page{% \iffinishedtitlepage\else \finishtitlepage \fi \let\page = \oldpage \page \null }% } \def\Etitlepage{% \iffinishedtitlepage\else \finishtitlepage \fi % It is important to do the page break before ending the group, % because the headline and footline are only empty inside the group. % If we use the new definition of \page, we always get a blank page % after the title page, which we certainly don't want. \oldpage \endgroup % % Need this before the \...aftertitlepage checks so that if they are % in effect the toc pages will come out with page numbers. \HEADINGSon % % If they want short, they certainly want long too. \ifsetshortcontentsaftertitlepage \shortcontents \contents \global\let\shortcontents = \relax \global\let\contents = \relax \fi % \ifsetcontentsaftertitlepage \contents \global\let\contents = \relax \global\let\shortcontents = \relax \fi } \def\finishtitlepage{% \vskip4pt \hrule height 2pt width \hsize \vskip\titlepagebottomglue \finishedtitlepagetrue } %%% Macros to be used within @titlepage: \let\subtitlerm=\tenrm \def\subtitlefont{\subtitlerm \normalbaselineskip = 13pt \normalbaselines} \def\authorfont{\authorrm \normalbaselineskip = 16pt \normalbaselines \let\tt=\authortt} \parseargdef\title{% \checkenv\titlepage \leftline{\titlefonts\rm #1} % print a rule at the page bottom also. \finishedtitlepagefalse \vskip4pt \hrule height 4pt width \hsize \vskip4pt } \parseargdef\subtitle{% \checkenv\titlepage {\subtitlefont \rightline{#1}}% } % @author should come last, but may come many times. % It can also be used inside @quotation. % \parseargdef\author{% \def\temp{\quotation}% \ifx\thisenv\temp \def\quotationauthor{#1}% printed in \Equotation. \else \checkenv\titlepage \ifseenauthor\else \vskip 0pt plus 1filll \seenauthortrue \fi {\authorfont \leftline{#1}}% \fi } %%% Set up page headings and footings. 
\let\thispage=\folio \newtoks\evenheadline % headline on even pages \newtoks\oddheadline % headline on odd pages \newtoks\evenfootline % footline on even pages \newtoks\oddfootline % footline on odd pages % Now make TeX use those variables \headline={{\textfonts\rm \ifodd\pageno \the\oddheadline \else \the\evenheadline \fi}} \footline={{\textfonts\rm \ifodd\pageno \the\oddfootline \else \the\evenfootline \fi}\HEADINGShook} \let\HEADINGShook=\relax % Commands to set those variables. % For example, this is what @headings on does % @evenheading @thistitle|@thispage|@thischapter % @oddheading @thischapter|@thispage|@thistitle % @evenfooting @thisfile|| % @oddfooting ||@thisfile \def\evenheading{\parsearg\evenheadingxxx} \def\evenheadingxxx #1{\evenheadingyyy #1\|\|\|\|\finish} \def\evenheadingyyy #1\|#2\|#3\|#4\finish{% \global\evenheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}} \def\oddheading{\parsearg\oddheadingxxx} \def\oddheadingxxx #1{\oddheadingyyy #1\|\|\|\|\finish} \def\oddheadingyyy #1\|#2\|#3\|#4\finish{% \global\oddheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}} \parseargdef\everyheading{\oddheadingxxx{#1}\evenheadingxxx{#1}}% \def\evenfooting{\parsearg\evenfootingxxx} \def\evenfootingxxx #1{\evenfootingyyy #1\|\|\|\|\finish} \def\evenfootingyyy #1\|#2\|#3\|#4\finish{% \global\evenfootline={\rlap{\centerline{#2}}\line{#1\hfil#3}}} \def\oddfooting{\parsearg\oddfootingxxx} \def\oddfootingxxx #1{\oddfootingyyy #1\|\|\|\|\finish} \def\oddfootingyyy #1\|#2\|#3\|#4\finish{% \global\oddfootline = {\rlap{\centerline{#2}}\line{#1\hfil#3}}% % % Leave some space for the footline. Hopefully ok to assume % @evenfooting will not be used by itself. \global\advance\pageheight by -\baselineskip \global\advance\vsize by -\baselineskip } \parseargdef\everyfooting{\oddfootingxxx{#1}\evenfootingxxx{#1}} % @headings double turns headings on for double-sided printing. % @headings single turns headings on for single-sided printing. % @headings off turns them off. % @headings on same as @headings double, retained for compatibility. % @headings after turns on double-sided headings after this page. % @headings doubleafter turns on double-sided headings after this page. % @headings singleafter turns on single-sided headings after this page. % By default, they are off at the start of a document, % and turned `on' after @end titlepage. \def\headings #1 {\csname HEADINGS#1\endcsname} \def\HEADINGSoff{% \global\evenheadline={\hfil} \global\evenfootline={\hfil} \global\oddheadline={\hfil} \global\oddfootline={\hfil}} \HEADINGSoff % When we turn headings on, set the page number to 1. % For double-sided printing, put current file name in lower left corner, % chapter name on inside top of right hand pages, document % title on inside top of left hand pages, and page numbers on outside top % edge of all pages. \def\HEADINGSdouble{% \global\pageno=1 \global\evenfootline={\hfil} \global\oddfootline={\hfil} \global\evenheadline={\line{\folio\hfil\thistitle}} \global\oddheadline={\line{\thischapter\hfil\folio}} \global\let\contentsalignmacro = \chapoddpage } \let\contentsalignmacro = \chappager % For single-sided printing, chapter title goes across top left of page, % page number on top right. 
\def\HEADINGSsingle{% \global\pageno=1 \global\evenfootline={\hfil} \global\oddfootline={\hfil} \global\evenheadline={\line{\thischapter\hfil\folio}} \global\oddheadline={\line{\thischapter\hfil\folio}} \global\let\contentsalignmacro = \chappager } \def\HEADINGSon{\HEADINGSdouble} \def\HEADINGSafter{\let\HEADINGShook=\HEADINGSdoublex} \let\HEADINGSdoubleafter=\HEADINGSafter \def\HEADINGSdoublex{% \global\evenfootline={\hfil} \global\oddfootline={\hfil} \global\evenheadline={\line{\folio\hfil\thistitle}} \global\oddheadline={\line{\thischapter\hfil\folio}} \global\let\contentsalignmacro = \chapoddpage } \def\HEADINGSsingleafter{\let\HEADINGShook=\HEADINGSsinglex} \def\HEADINGSsinglex{% \global\evenfootline={\hfil} \global\oddfootline={\hfil} \global\evenheadline={\line{\thischapter\hfil\folio}} \global\oddheadline={\line{\thischapter\hfil\folio}} \global\let\contentsalignmacro = \chappager } % Subroutines used in generating headings % This produces Day Month Year style of output. % Only define if not already defined, in case a txi-??.tex file has set % up a different format (e.g., txi-cs.tex does this). \ifx\today\undefined \def\today{% \number\day\space \ifcase\month \or\putwordMJan\or\putwordMFeb\or\putwordMMar\or\putwordMApr \or\putwordMMay\or\putwordMJun\or\putwordMJul\or\putwordMAug \or\putwordMSep\or\putwordMOct\or\putwordMNov\or\putwordMDec \fi \space\number\year} \fi % @settitle line... specifies the title of the document, for headings. % It generates no output of its own. \def\thistitle{\putwordNoTitle} \def\settitle{\parsearg{\gdef\thistitle}} \message{tables,} % Tables -- @table, @ftable, @vtable, @item(x). % default indentation of table text \newdimen\tableindent \tableindent=.8in % default indentation of @itemize and @enumerate text \newdimen\itemindent \itemindent=.3in % margin between end of table item and start of table text. \newdimen\itemmargin \itemmargin=.1in % used internally for \itemindent minus \itemmargin \newdimen\itemmax % Note @table, @ftable, and @vtable define @item, @itemx, etc., with % these defs. % They also define \itemindex % to index the item name in whatever manner is desired (perhaps none). \newif\ifitemxneedsnegativevskip \def\itemxpar{\par\ifitemxneedsnegativevskip\nobreak\vskip-\parskip\nobreak\fi} \def\internalBitem{\smallbreak \parsearg\itemzzz} \def\internalBitemx{\itemxpar \parsearg\itemzzz} \def\itemzzz #1{\begingroup % \advance\hsize by -\rightskip \advance\hsize by -\tableindent \setbox0=\hbox{\itemindicate{#1}}% \itemindex{#1}% \nobreak % This prevents a break before @itemx. % % If the item text does not fit in the space we have, put it on a line % by itself, and do not allow a page break either before or after that % line. We do not start a paragraph here because then if the next % command is, e.g., @kindex, the whatsit would get put into the % horizontal list on a line by itself, resulting in extra blank space. \ifdim \wd0>\itemmax % % Make this a paragraph so we get the \parskip glue and wrapping, % but leave it ragged-right. \begingroup \advance\leftskip by-\tableindent \advance\hsize by\tableindent \advance\rightskip by0pt plus1fil \leavevmode\unhbox0\par \endgroup % % We're going to be starting a paragraph, but we don't want the % \parskip glue -- logically it's part of the @item we just started. \nobreak \vskip-\parskip % % Stop a page break at the \parskip glue coming up. 
However, if % what follows is an environment such as @example, there will be no % \parskip glue; then the negative vskip we just inserted would % cause the example and the item to crash together. So we use this % bizarre value of 10001 as a signal to \aboveenvbreak to insert % \parskip glue after all. Section titles are handled this way also. % \penalty 10001 \endgroup \itemxneedsnegativevskipfalse \else % The item text fits into the space. Start a paragraph, so that the % following text (if any) will end up on the same line. \noindent % Do this with kerns and \unhbox so that if there is a footnote in % the item text, it can migrate to the main vertical list and % eventually be printed. \nobreak\kern-\tableindent \dimen0 = \itemmax \advance\dimen0 by \itemmargin \advance\dimen0 by -\wd0 \unhbox0 \nobreak\kern\dimen0 \endgroup \itemxneedsnegativevskiptrue \fi } \def\item{\errmessage{@item while not in a list environment}} \def\itemx{\errmessage{@itemx while not in a list environment}} % @table, @ftable, @vtable. \envdef\table{% \let\itemindex\gobble \tablecheck{table}% } \envdef\ftable{% \def\itemindex ##1{\doind {fn}{\code{##1}}}% \tablecheck{ftable}% } \envdef\vtable{% \def\itemindex ##1{\doind {vr}{\code{##1}}}% \tablecheck{vtable}% } \def\tablecheck#1{% \ifnum \the\catcode`\^^M=\active \endgroup \errmessage{This command won't work in this context; perhaps the problem is that we are \inenvironment\thisenv}% \def\next{\doignore{#1}}% \else \let\next\tablex \fi \next } \def\tablex#1{% \def\itemindicate{#1}% \parsearg\tabley } \def\tabley#1{% {% \makevalueexpandable \edef\temp{\noexpand\tablez #1\space\space\space}% \expandafter }\temp \endtablez } \def\tablez #1 #2 #3 #4\endtablez{% \aboveenvbreak \ifnum 0#1>0 \advance \leftskip by #1\mil \fi \ifnum 0#2>0 \tableindent=#2\mil \fi \ifnum 0#3>0 \advance \rightskip by #3\mil \fi \itemmax=\tableindent \advance \itemmax by -\itemmargin \advance \leftskip by \tableindent \exdentamount=\tableindent \parindent = 0pt \parskip = \smallskipamount \ifdim \parskip=0pt \parskip=2pt \fi \let\item = \internalBitem \let\itemx = \internalBitemx } \def\Etable{\endgraf\afterenvbreak} \let\Eftable\Etable \let\Evtable\Etable \let\Eitemize\Etable \let\Eenumerate\Etable % This is the counter used by @enumerate, which is really @itemize \newcount \itemno \envdef\itemize{\parsearg\doitemize} \def\doitemize#1{% \aboveenvbreak \itemmax=\itemindent \advance\itemmax by -\itemmargin \advance\leftskip by \itemindent \exdentamount=\itemindent \parindent=0pt \parskip=\smallskipamount \ifdim\parskip=0pt \parskip=2pt \fi \def\itemcontents{#1}% % @itemize with no arg is equivalent to @itemize @bullet. \ifx\itemcontents\empty\def\itemcontents{\bullet}\fi \let\item=\itemizeitem } % Definition of @item while inside @itemize and @enumerate. % \def\itemizeitem{% \advance\itemno by 1 % for enumerations {\let\par=\endgraf \smallbreak}% reasonable place to break {% % If the document has an @itemize directly after a section title, a % \nobreak will be last on the list, and \sectionheading will have % done a \vskip-\parskip. In that case, we don't want to zero % parskip, or the item text will crash with the heading. On the % other hand, when there is normal text preceding the item (as there % usually is), we do want to zero parskip, or there would be too much % space. In that case, we won't have a \nobreak before. At least % that's the theory. 
\ifnum\lastpenalty<10000 \parskip=0in \fi \noindent \hbox to 0pt{\hss \itemcontents \kern\itemmargin}% \vadjust{\penalty 1200}}% not good to break after first line of item. \flushcr } % \splitoff TOKENS\endmark defines \first to be the first token in % TOKENS, and \rest to be the remainder. % \def\splitoff#1#2\endmark{\def\first{#1}\def\rest{#2}}% % Allow an optional argument of an uppercase letter, lowercase letter, % or number, to specify the first label in the enumerated list. No % argument is the same as `1'. % \envparseargdef\enumerate{\enumeratey #1 \endenumeratey} \def\enumeratey #1 #2\endenumeratey{% % If we were given no argument, pretend we were given `1'. \def\thearg{#1}% \ifx\thearg\empty \def\thearg{1}\fi % % Detect if the argument is a single token. If so, it might be a % letter. Otherwise, the only valid thing it can be is a number. % (We will always have one token, because of the test we just made. % This is a good thing, since \splitoff doesn't work given nothing at % all -- the first parameter is undelimited.) \expandafter\splitoff\thearg\endmark \ifx\rest\empty % Only one token in the argument. It could still be anything. % A ``lowercase letter'' is one whose \lccode is nonzero. % An ``uppercase letter'' is one whose \lccode is both nonzero, and % not equal to itself. % Otherwise, we assume it's a number. % % We need the \relax at the end of the \ifnum lines to stop TeX from % continuing to look for a <number>. % \ifnum\lccode\expandafter`\thearg=0\relax \numericenumerate % a number (we hope) \else % It's a letter. \ifnum\lccode\expandafter`\thearg=\expandafter`\thearg\relax \lowercaseenumerate % lowercase letter \else \uppercaseenumerate % uppercase letter \fi \fi \else % Multiple tokens in the argument. We hope it's a number. \numericenumerate \fi } % An @enumerate whose labels are integers. The starting integer is % given in \thearg. % \def\numericenumerate{% \itemno = \thearg \startenumeration{\the\itemno}% } % The starting (lowercase) letter is in \thearg. \def\lowercaseenumerate{% \itemno = \expandafter`\thearg \startenumeration{% % Be sure we're not beyond the end of the alphabet. \ifnum\itemno=0 \errmessage{No more lowercase letters in @enumerate; get a bigger alphabet}% \fi \char\lccode\itemno }% } % The starting (uppercase) letter is in \thearg. \def\uppercaseenumerate{% \itemno = \expandafter`\thearg \startenumeration{% % Be sure we're not beyond the end of the alphabet. \ifnum\itemno=0 \errmessage{No more uppercase letters in @enumerate; get a bigger alphabet} \fi \char\uccode\itemno }% } % Call \doitemize, adding a period to the first argument and supplying the % common last two arguments. Also subtract one from the initial value in % \itemno, since @item increments \itemno. % \def\startenumeration#1{% \advance\itemno by -1 \doitemize{#1.}\flushcr } % @alphaenumerate and @capsenumerate are abbreviations for giving an arg % to @enumerate. % \def\alphaenumerate{\enumerate{a}} \def\capsenumerate{\enumerate{A}} \def\Ealphaenumerate{\Eenumerate} \def\Ecapsenumerate{\Eenumerate} % @multitable macros % Amy Hendrickson, 8/18/94, 3/6/96 % % @multitable ... @end multitable will make as many columns as desired. % Contents of each column will wrap at width given in preamble. Width % can be specified either with sample text given in a template line, % or in percent of \hsize, the current width of text on page. % Table can continue over pages but will only break between lines.
% To make preamble: % % Either define widths of columns in terms of percent of \hsize: % @multitable @columnfractions .25 .3 .45 % @item ... % % Numbers following @columnfractions are the percent of the total % current hsize to be used for each column. You may use as many % columns as desired. % Or use a template: % @multitable {Column 1 template} {Column 2 template} {Column 3 template} % @item ... % using the widest term desired in each column. % Each new table line starts with @item, each subsequent new column % starts with @tab. Empty columns may be produced by supplying @tab's % with nothing between them for as many times as empty columns are needed, % ie, @tab@tab@tab will produce two empty columns. % @item, @tab do not need to be on their own lines, but it will not hurt % if they are. % Sample multitable: % @multitable {Column 1 template} {Column 2 template} {Column 3 template} % @item first col stuff @tab second col stuff @tab third col % @item % first col stuff % @tab % second col stuff % @tab % third col % @item first col stuff @tab second col stuff % @tab Many paragraphs of text may be used in any column. % % They will wrap at the width determined by the template. % @item@tab@tab This will be in third column. % @end multitable % Default dimensions may be reset by user. % @multitableparskip is vertical space between paragraphs in table. % @multitableparindent is paragraph indent in table. % @multitablecolmargin is horizontal space to be left between columns. % @multitablelinespace is space to leave between table items, baseline % to baseline. % 0pt means it depends on current normal line spacing. % \newskip\multitableparskip \newskip\multitableparindent \newdimen\multitablecolspace \newskip\multitablelinespace \multitableparskip=0pt \multitableparindent=6pt \multitablecolspace=12pt \multitablelinespace=0pt % Macros used to set up halign preamble: % \let\endsetuptable\relax \def\xendsetuptable{\endsetuptable} \let\columnfractions\relax \def\xcolumnfractions{\columnfractions} \newif\ifsetpercent % #1 is the @columnfraction, usually a decimal number like .5, but might % be just 1. We just use it, whatever it is. % \def\pickupwholefraction#1 {% \global\advance\colcount by 1 \expandafter\xdef\csname col\the\colcount\endcsname{#1\hsize}% \setuptable } \newcount\colcount \def\setuptable#1{% \def\firstarg{#1}% \ifx\firstarg\xendsetuptable \let\go = \relax \else \ifx\firstarg\xcolumnfractions \global\setpercenttrue \else \ifsetpercent \let\go\pickupwholefraction \else \global\advance\colcount by 1 \setbox0=\hbox{#1\unskip\space}% Add a normal word space as a % separator; typically that is always in the input, anyway. \expandafter\xdef\csname col\the\colcount\endcsname{\the\wd0}% \fi \fi \ifx\go\pickupwholefraction % Put the argument back for the \pickupwholefraction call, so % we'll always have a period there to be parsed. \def\go{\pickupwholefraction#1}% \else \let\go = \setuptable \fi% \fi \go } % multitable-only commands. % % @headitem starts a heading row, which we typeset in bold. % Assignments have to be global since we are inside the implicit group % of an alignment entry. Note that \everycr resets \everytab. \def\headitem{\checkenv\multitable \crcr \global\everytab={\bf}\the\everytab}% % % A \tab used to include \hskip1sp. But then the space in a template % line is not enough. That is bad. So let's go back to just `&' until % we encounter the problem it was intended to solve again. % --karl, nathan@acm.org, 20apr99. \def\tab{\checkenv\multitable &\the\everytab}% % @multitable ... 
@end multitable definitions: % \newtoks\everytab % insert after every tab. % \envdef\multitable{% \vskip\parskip \startsavinginserts % % @item within a multitable starts a normal row. % We use \def instead of \let so that if one of the multitable entries % contains an @itemize, we don't choke on the \item (seen as \crcr aka % \endtemplate) expanding \doitemize. \def\item{\crcr}% % \tolerance=9500 \hbadness=9500 \setmultitablespacing \parskip=\multitableparskip \parindent=\multitableparindent \overfullrule=0pt \global\colcount=0 % \everycr = {% \noalign{% \global\everytab={}% \global\colcount=0 % Reset the column counter. % Check for saved footnotes, etc. \checkinserts % Keeps underfull box messages off when table breaks over pages. %\filbreak % Maybe so, but it also creates really weird page breaks when the % table breaks over pages. Wouldn't \vfil be better? Wait until the % problem manifests itself, so it can be fixed for real --karl. }% }% % \parsearg\domultitable } \def\domultitable#1{% % To parse everything between @multitable and @item: \setuptable#1 \endsetuptable % % This preamble sets up a generic column definition, which will % be used as many times as user calls for columns. % \vtop will set a single line and will also let text wrap and % continue for many paragraphs if desired. \halign\bgroup &% \global\advance\colcount by 1 \multistrut \vtop{% % Use the current \colcount to find the correct column width: \hsize=\expandafter\csname col\the\colcount\endcsname % % In order to keep entries from bumping into each other % we will add a \leftskip of \multitablecolspace to all columns after % the first one. % % If a template has been used, we will add \multitablecolspace % to the width of each template entry. % % If the user has set preamble in terms of percent of \hsize we will % use that dimension as the width of the column, and the \leftskip % will keep entries from bumping into each other. Table will start at % left margin and final column will justify at right margin. % % Make sure we don't inherit \rightskip from the outer environment. \rightskip=0pt \ifnum\colcount=1 % The first column will be indented with the surrounding text. \advance\hsize by\leftskip \else \ifsetpercent \else % If user has not set preamble in terms of percent of \hsize % we will advance \hsize by \multitablecolspace. \advance\hsize by \multitablecolspace \fi % In either case we will make \leftskip=\multitablecolspace: \leftskip=\multitablecolspace \fi % Ignoring space at the beginning and end avoids an occasional spurious % blank line, when TeX decides to break the line at the space before the % box from the multistrut, so the strut ends up on a line by itself. % For example: % @multitable @columnfractions .11 .89 % @item @code{#} % @tab Legal holiday which is valid in major parts of the whole country. % Is automatically provided with highlighting sequences respectively % marking characters. \noindent\ignorespaces##\unskip\multistrut }\cr } \def\Emultitable{% \crcr \egroup % end the \halign \global\setpercentfalse } \def\setmultitablespacing{% \def\multistrut{\strut}% just use the standard line spacing % % Compute \multitablelinespace (if not defined by user) for use in % \multitableparskip calculation. We used define \multistrut based on % this, but (ironically) that caused the spacing to be off. % See bug-texinfo report from Werner Lemberg, 31 Oct 2004 12:52:20 +0100. 
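% (Added note, not from the original file: the \vbox{X} set just below is
% used only as a measuring box, so when the user has not set it,
% \multitablelinespace comes out as \baselineskip minus the height of a
% box containing an `X'.)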
\ifdim\multitablelinespace=0pt \setbox0=\vbox{X}\global\multitablelinespace=\the\baselineskip \global\advance\multitablelinespace by-\ht0 \fi %% Test to see if parskip is larger than space between lines of %% table. If not, do nothing. %% If so, set to same dimension as multitablelinespace. \ifdim\multitableparskip>\multitablelinespace \global\multitableparskip=\multitablelinespace \global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller %% than skip between lines in the table. \fi% \ifdim\multitableparskip=0pt \global\multitableparskip=\multitablelinespace \global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller %% than skip between lines in the table. \fi} \message{conditionals,} % @iftex, @ifnotdocbook, @ifnothtml, @ifnotinfo, @ifnotplaintext, % @ifnotxml always succeed. They currently do nothing; we don't % attempt to check whether the conditionals are properly nested. But we % have to remember that they are conditionals, so that @end doesn't % attempt to close an environment group. % \def\makecond#1{% \expandafter\let\csname #1\endcsname = \relax \expandafter\let\csname iscond.#1\endcsname = 1 } \makecond{iftex} \makecond{ifnotdocbook} \makecond{ifnothtml} \makecond{ifnotinfo} \makecond{ifnotplaintext} \makecond{ifnotxml} % Ignore @ignore, @ifhtml, @ifinfo, and the like. % \def\direntry{\doignore{direntry}} \def\documentdescription{\doignore{documentdescription}} \def\docbook{\doignore{docbook}} \def\html{\doignore{html}} \def\ifdocbook{\doignore{ifdocbook}} \def\ifhtml{\doignore{ifhtml}} \def\ifinfo{\doignore{ifinfo}} \def\ifnottex{\doignore{ifnottex}} \def\ifplaintext{\doignore{ifplaintext}} \def\ifxml{\doignore{ifxml}} \def\ignore{\doignore{ignore}} \def\menu{\doignore{menu}} \def\xml{\doignore{xml}} % Ignore text until a line `@end #1', keeping track of nested conditionals. % % A count to remember the depth of nesting. \newcount\doignorecount \def\doignore#1{\begingroup % Scan in ``verbatim'' mode: \catcode`\@ = \other \catcode`\{ = \other \catcode`\} = \other % % Make sure that spaces turn into tokens that match what \doignoretext wants. \spaceisspace % % Count number of #1's that we've seen. \doignorecount = 0 % % Swallow text until we reach the matching `@end #1'. \dodoignore{#1}% } { \catcode`_=11 % We want to use \_STOP_ which cannot appear in texinfo source. \obeylines % % \gdef\dodoignore#1{% % #1 contains the command name as a string, e.g., `ifinfo'. % % Define a command to find the next `@end #1', which must be on a line % by itself. \long\def\doignoretext##1^^M@end #1{\doignoretextyyy##1^^M@#1\_STOP_}% % And this command to find another #1 command, at the beginning of a % line. (Otherwise, we would consider a line `@c @ifset', for % example, to count as an @ifset for nesting.) \long\def\doignoretextyyy##1^^M@#1##2\_STOP_{\doignoreyyy{##2}\_STOP_}% % % And now expand that command. \obeylines % \doignoretext ^^M% }% } \def\doignoreyyy#1{% \def\temp{#1}% \ifx\temp\empty % Nothing found. \let\next\doignoretextzzz \else % Found a nested condition, ... \advance\doignorecount by 1 \let\next\doignoretextyyy % ..., look for another. % If we're here, #1 ends with ^^M\ifinfo (for example). \fi \next #1% the token \_STOP_ is present just after this macro. } % We have to swallow the remaining "\_STOP_". % \def\doignoretextzzz#1{% \ifnum\doignorecount = 0 % We have just found the outermost @end. \let\next\enddoignore \else % Still inside a nested condition. \advance\doignorecount by -1 \let\next\doignoretext % Look for the next @end. 
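% As a hedged illustration (not from the original file) of the nesting
% that \doignorecount tracks, ignored input such as
%
%   @ifhtml
%   outer HTML-only text
%   @ifhtml
%   nested HTML-only text
%   @end ifhtml
%   still outer text
%   @end ifhtml
%
% is swallowed as a whole: the first `@end ifhtml' encountered only
% decrements the count, and \enddoignore runs at the outer `@end ifhtml'.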
  \fi
  \next
}

% Finish off ignored text.
\def\enddoignore{\endgroup\ignorespaces}

% @set VAR sets the variable VAR to an empty value.
% @set VAR REST-OF-LINE sets VAR to the value REST-OF-LINE.
%
% Since we want to separate VAR from REST-OF-LINE (which might be
% empty), we can't just use \parsearg; we have to insert a space of our
% own to delimit the rest of the line, and then take it out again if we
% didn't need it.
% We rely on the fact that \parsearg sets \catcode`\ =10.
%
\parseargdef\set{\setyyy#1 \endsetyyy}
\def\setyyy#1 #2\endsetyyy{%
  {%
    \makevalueexpandable
    \def\temp{#2}%
    \edef\next{\gdef\makecsname{SET#1}}%
    \ifx\temp\empty
      \next{}%
    \else
      \setzzz#2\endsetzzz
    \fi
  }%
}
% Remove the trailing space \set inserted.
\def\setzzz#1 \endsetzzz{\next{#1}}

% @clear VAR clears (i.e., unsets) the variable VAR.
%
\parseargdef\clear{%
  {%
    \makevalueexpandable
    \global\expandafter\let\csname SET#1\endcsname=\relax
  }%
}

% @value{foo} gets the text saved in variable foo.
\def\value{\begingroup\makevalueexpandable\valuexxx}
\def\valuexxx#1{\expandablevalue{#1}\endgroup}
{
  \catcode`\- = \active \catcode`\_ = \active
  %
  \gdef\makevalueexpandable{%
    \let\value = \expandablevalue
    % We don't want these characters active, ...
    \catcode`\-=\other \catcode`\_=\other
    % ..., but we might end up with active ones in the argument if
    % we're called from @code, as @code{@value{foo-bar_}}, though.
    % So \let them to their normal equivalents.
    \let-\realdash \let_\normalunderscore
  }
}

% We have this subroutine so that we can handle at least some @value's
% properly in indexes (we call \makevalueexpandable in \indexdummies).
% The command has to be fully expandable (if the variable is set), since
% the result winds up in the index file. This means that if the
% variable's value contains other Texinfo commands, it's almost certain
% it will fail (although perhaps we could fix that with sufficient work
% to do a one-level expansion on the result, instead of complete).
%
\def\expandablevalue#1{%
  \expandafter\ifx\csname SET#1\endcsname\relax
    {[No value for ``#1'']}%
    \message{Variable `#1', used in @value, is not set.}%
  \else
    \csname SET#1\endcsname
  \fi
}

% @ifset VAR ... @end ifset reads the `...' iff VAR has been defined
% with @set.
%
% To get special treatment of `@end ifset', call \makecond and then redefine.
%
\makecond{ifset}
\def\ifset{\parsearg{\doifset{\let\next=\ifsetfail}}}
\def\doifset#1#2{%
  {%
    \makevalueexpandable
    \let\next=\empty
    \expandafter\ifx\csname SET#2\endcsname\relax
      #1% If not set, redefine \next.
    \fi
    \expandafter
  }\next
}
\def\ifsetfail{\doignore{ifset}}

% @ifclear VAR ... @end ifclear reads the `...' iff VAR has never been
% defined with @set, or has been undefined with @clear.
%
% The `\else' inside the `\doifset' parameter is a trick to reuse the
% above code: if the variable is not set, do nothing, if it is set,
% then redefine \next to \ifclearfail.
%
\makecond{ifclear}
\def\ifclear{\parsearg{\doifset{\else \let\next=\ifclearfail}}}
\def\ifclearfail{\doignore{ifclear}}

% @dircategory CATEGORY -- specify a category of the dir file
% which this file should belong to. Ignore this in TeX.
\let\dircategory=\comment

% @definfoenclose.
\let\definfoenclose=\comment


\message{indexing,}
% Index generation facilities

% Define \newwrite to be identical to plain tex's \newwrite
% except not \outer, so it can be used within macros and \if's.
\edef\newwrite{\makecsname{ptexnewwrite}}

% \newindex {foo} defines an index named foo.
% It automatically defines \fooindex such that
% \fooindex ...rest of line...
puts an entry in the index foo. % It also defines \fooindfile to be the number of the output channel for % the file that accumulates this index. The file's extension is foo. % The name of an index should be no more than 2 characters long % for the sake of vms. % \def\newindex#1{% \iflinks \expandafter\newwrite \csname#1indfile\endcsname \openout \csname#1indfile\endcsname \jobname.#1 % Open the file \fi \expandafter\xdef\csname#1index\endcsname{% % Define @#1index \noexpand\doindex{#1}} } % @defindex foo == \newindex{foo} % \def\defindex{\parsearg\newindex} % Define @defcodeindex, like @defindex except put all entries in @code. % \def\defcodeindex{\parsearg\newcodeindex} % \def\newcodeindex#1{% \iflinks \expandafter\newwrite \csname#1indfile\endcsname \openout \csname#1indfile\endcsname \jobname.#1 \fi \expandafter\xdef\csname#1index\endcsname{% \noexpand\docodeindex{#1}}% } % @synindex foo bar makes index foo feed into index bar. % Do this instead of @defindex foo if you don't want it as a separate index. % % @syncodeindex foo bar similar, but put all entries made for index foo % inside @code. % \def\synindex#1 #2 {\dosynindex\doindex{#1}{#2}} \def\syncodeindex#1 #2 {\dosynindex\docodeindex{#1}{#2}} % #1 is \doindex or \docodeindex, #2 the index getting redefined (foo), % #3 the target index (bar). \def\dosynindex#1#2#3{% % Only do \closeout if we haven't already done it, else we'll end up % closing the target index. \expandafter \ifx\csname donesynindex#2\endcsname \undefined % The \closeout helps reduce unnecessary open files; the limit on the % Acorn RISC OS is a mere 16 files. \expandafter\closeout\csname#2indfile\endcsname \expandafter\let\csname\donesynindex#2\endcsname = 1 \fi % redefine \fooindfile: \expandafter\let\expandafter\temp\expandafter=\csname#3indfile\endcsname \expandafter\let\csname#2indfile\endcsname=\temp % redefine \fooindex: \expandafter\xdef\csname#2index\endcsname{\noexpand#1{#3}}% } % Define \doindex, the driver for all \fooindex macros. % Argument #1 is generated by the calling \fooindex macro, % and it is "foo", the name of the index. % \doindex just uses \parsearg; it calls \doind for the actual work. % This is because \doind is more useful to call from other macros. % There is also \dosubind {index}{topic}{subtopic} % which makes an entry in a two-level index such as the operation index. \def\doindex#1{\edef\indexname{#1}\parsearg\singleindexer} \def\singleindexer #1{\doind{\indexname}{#1}} % like the previous two, but they put @code around the argument. \def\docodeindex#1{\edef\indexname{#1}\parsearg\singlecodeindexer} \def\singlecodeindexer #1{\doind{\indexname}{\code{#1}}} % Take care of Texinfo commands that can appear in an index entry. % Since there are some commands we want to expand, and others we don't, % we have to laboriously prevent expansion for those that we don't. % \def\indexdummies{% \escapechar = `\\ % use backslash in output files. \def\@{@}% change to @@ when we switch to @ as escape char in index files. \def\ {\realbackslash\space }% % Need these in case \tex is in effect and \{ is a \delimiter again. % But can't use \lbracecmd and \rbracecmd because texindex assumes % braces and backslashes are used only as delimiters. \let\{ = \mylbrace \let\} = \myrbrace % % Do the redefinitions. \commondummies } % For the aux and toc files, @ is the escape character. So we want to % redefine everything using @ as the escape character (instead of % \realbackslash, still used for index files). When everything uses @, % this will be simpler. 
% \def\atdummies{% \def\@{@@}% \def\ {@ }% \let\{ = \lbraceatcmd \let\} = \rbraceatcmd % % Do the redefinitions. \commondummies } % Called from \indexdummies and \atdummies. % \def\commondummies{% % % \definedummyword defines \#1 as \string\#1\space, thus effectively % preventing its expansion. This is used only for control% words, % not control letters, because the \space would be incorrect for % control characters, but is needed to separate the control word % from whatever follows. % % For control letters, we have \definedummyletter, which omits the % space. % % These can be used both for control words that take an argument and % those that do not. If it is followed by {arg} in the input, then % that will dutifully get written to the index (or wherever). % \def\definedummyword ##1{\def##1{\string##1\space}}% \def\definedummyletter##1{\def##1{\string##1}}% \let\definedummyaccent\definedummyletter % \commondummiesnofonts % \definedummyletter\_% % % Non-English letters. \definedummyword\AA \definedummyword\AE \definedummyword\L \definedummyword\OE \definedummyword\O \definedummyword\aa \definedummyword\ae \definedummyword\l \definedummyword\oe \definedummyword\o \definedummyword\ss \definedummyword\exclamdown \definedummyword\questiondown \definedummyword\ordf \definedummyword\ordm % % Although these internal commands shouldn't show up, sometimes they do. \definedummyword\bf \definedummyword\gtr \definedummyword\hat \definedummyword\less \definedummyword\sf \definedummyword\sl \definedummyword\tclose \definedummyword\tt % \definedummyword\LaTeX \definedummyword\TeX % % Assorted special characters. \definedummyword\bullet \definedummyword\comma \definedummyword\copyright \definedummyword\registeredsymbol \definedummyword\dots \definedummyword\enddots \definedummyword\equiv \definedummyword\error \definedummyword\euro \definedummyword\expansion \definedummyword\minus \definedummyword\pounds \definedummyword\point \definedummyword\print \definedummyword\result % % We want to disable all macros so that they are not expanded by \write. \macrolist % \normalturnoffactive % % Handle some cases of @value -- where it does not contain any % (non-fully-expandable) commands. \makevalueexpandable } % \commondummiesnofonts: common to \commondummies and \indexnofonts. % % Better have this without active chars. { \catcode`\~=\other \gdef\commondummiesnofonts{% % Control letters and accents. \definedummyletter\!% \definedummyaccent\"% \definedummyaccent\'% \definedummyletter\*% \definedummyaccent\,% \definedummyletter\.% \definedummyletter\/% \definedummyletter\:% \definedummyaccent\=% \definedummyletter\?% \definedummyaccent\^% \definedummyaccent\`% \definedummyaccent\~% \definedummyword\u \definedummyword\v \definedummyword\H \definedummyword\dotaccent \definedummyword\ringaccent \definedummyword\tieaccent \definedummyword\ubaraccent \definedummyword\udotaccent \definedummyword\dotless % % Texinfo font commands. \definedummyword\b \definedummyword\i \definedummyword\r \definedummyword\sc \definedummyword\t % % Commands that take arguments. 
\definedummyword\acronym \definedummyword\cite \definedummyword\code \definedummyword\command \definedummyword\dfn \definedummyword\emph \definedummyword\env \definedummyword\file \definedummyword\kbd \definedummyword\key \definedummyword\math \definedummyword\option \definedummyword\samp \definedummyword\strong \definedummyword\tie \definedummyword\uref \definedummyword\url \definedummyword\var \definedummyword\verb \definedummyword\w } } % \indexnofonts is used when outputting the strings to sort the index % by, and when constructing control sequence names. It eliminates all % control sequences and just writes whatever the best ASCII sort string % would be for a given command (usually its argument). % \def\indexnofonts{% % Accent commands should become @asis. \def\definedummyaccent##1{\let##1\asis}% % We can just ignore other control letters. \def\definedummyletter##1{\let##1\empty}% % Hopefully, all control words can become @asis. \let\definedummyword\definedummyaccent % \commondummiesnofonts % % Don't no-op \tt, since it isn't a user-level command % and is used in the definitions of the active chars like <, >, |, etc. % Likewise with the other plain tex font commands. %\let\tt=\asis % \def\ { }% \def\@{@}% % how to handle braces? \def\_{\normalunderscore}% % % Non-English letters. \def\AA{AA}% \def\AE{AE}% \def\L{L}% \def\OE{OE}% \def\O{O}% \def\aa{aa}% \def\ae{ae}% \def\l{l}% \def\oe{oe}% \def\o{o}% \def\ss{ss}% \def\exclamdown{!}% \def\questiondown{?}% \def\ordf{a}% \def\ordm{o}% % \def\LaTeX{LaTeX}% \def\TeX{TeX}% % % Assorted special characters. % (The following {} will end up in the sort string, but that's ok.) \def\bullet{bullet}% \def\comma{,}% \def\copyright{copyright}% \def\registeredsymbol{R}% \def\dots{...}% \def\enddots{...}% \def\equiv{==}% \def\error{error}% \def\euro{euro}% \def\expansion{==>}% \def\minus{-}% \def\pounds{pounds}% \def\point{.}% \def\print{-|}% \def\result{=>}% % % We need to get rid of all macros, leaving only the arguments (if present). % Of course this is not nearly correct, but it is the best we can do for now. % makeinfo does not expand macros in the argument to @deffn, which ends up % writing an index entry, and texindex isn't prepared for an index sort entry % that starts with \. % % Since macro invocations are followed by braces, we can just redefine them % to take a single TeX argument. The case of a macro invocation that % goes to end-of-line is not handled. % \macrolist } \let\indexbackslash=0 %overridden during \printindex. \let\SETmarginindex=\relax % put index entries in margin (undocumented)? % Most index entries go through here, but \dosubind is the general case. % #1 is the index name, #2 is the entry text. \def\doind#1#2{\dosubind{#1}{#2}{}} % Workhorse for all \fooindexes. % #1 is name of index, #2 is stuff to put there, #3 is subentry -- % empty if called from \doind, as we usually are (the main exception % is with most defuns, which call us directly). % \def\dosubind#1#2#3{% \iflinks {% % Store the main index entry text (including the third arg). \toks0 = {#2}% % If third arg is present, precede it with a space. \def\thirdarg{#3}% \ifx\thirdarg\empty \else \toks0 = \expandafter{\the\toks0 \space #3}% \fi % \edef\writeto{\csname#1indfile\endcsname}% % \ifvmode \dosubindsanitize \else \dosubindwrite \fi }% \fi } % Write the entry in \toks0 to the index file: % \def\dosubindwrite{% % Put the index entry in the margin if desired. 
\ifx\SETmarginindex\relax\else \insert\margin{\hbox{\vrule height8pt depth3pt width0pt \the\toks0}}% \fi % % Remember, we are within a group. \indexdummies % Must do this here, since \bf, etc expand at this stage \def\backslashcurfont{\indexbackslash}% \indexbackslash isn't defined now % so it will be output as is; and it will print as backslash. % % Process the index entry with all font commands turned off, to % get the string to sort by. {\indexnofonts \edef\temp{\the\toks0}% need full expansion \xdef\indexsorttmp{\temp}% }% % % Set up the complete index entry, with both the sort key and % the original text, including any font commands. We write % three arguments to \entry to the .?? file (four in the % subentry case), texindex reduces to two when writing the .??s % sorted result. \edef\temp{% \write\writeto{% \string\entry{\indexsorttmp}{\noexpand\folio}{\the\toks0}}% }% \temp } % Take care of unwanted page breaks: % % If a skip is the last thing on the list now, preserve it % by backing up by \lastskip, doing the \write, then inserting % the skip again. Otherwise, the whatsit generated by the % \write will make \lastskip zero. The result is that sequences % like this: % @end defun % @tindex whatever % @defun ... % will have extra space inserted, because the \medbreak in the % start of the @defun won't see the skip inserted by the @end of % the previous defun. % % But don't do any of this if we're not in vertical mode. We % don't want to do a \vskip and prematurely end a paragraph. % % Avoid page breaks due to these extra skips, too. % % But wait, there is a catch there: % We'll have to check whether \lastskip is zero skip. \ifdim is not % sufficient for this purpose, as it ignores stretch and shrink parts % of the skip. The only way seems to be to check the textual % representation of the skip. % % The following is almost like \def\zeroskipmacro{0.0pt} except that % the ``p'' and ``t'' characters have catcode \other, not 11 (letter). % \edef\zeroskipmacro{\expandafter\the\csname z@skip\endcsname} % % ..., ready, GO: % \def\dosubindsanitize{% % \lastskip and \lastpenalty cannot both be nonzero simultaneously. \skip0 = \lastskip \edef\lastskipmacro{\the\lastskip}% \count255 = \lastpenalty % % If \lastskip is nonzero, that means the last item was a % skip. And since a skip is discardable, that means this % -\skip0 glue we're inserting is preceded by a % non-discardable item, therefore it is not a potential % breakpoint, therefore no \nobreak needed. \ifx\lastskipmacro\zeroskipmacro \else \vskip-\skip0 \fi % \dosubindwrite % \ifx\lastskipmacro\zeroskipmacro % If \lastskip was zero, perhaps the last item was a penalty, and % perhaps it was >=10000, e.g., a \nobreak. In that case, we want % to re-insert the same penalty (values >10000 are used for various % signals); since we just inserted a non-discardable item, any % following glue (such as a \parskip) would be a breakpoint. For example: % % @deffn deffn-whatever % @vindex index-whatever % Description. % would allow a break between the index-whatever whatsit % and the "Description." paragraph. \ifnum\count255>9999 \penalty\count255 \fi \else % On the other hand, if we had a nonzero \lastskip, % this make-up glue would be preceded by a non-discardable item % (the whatsit from the \write), so we must insert a \nobreak. 
\nobreak\vskip\skip0 \fi } % The index entry written in the file actually looks like % \entry {sortstring}{page}{topic} % or % \entry {sortstring}{page}{topic}{subtopic} % The texindex program reads in these files and writes files % containing these kinds of lines: % \initial {c} % before the first topic whose initial is c % \entry {topic}{pagelist} % for a topic that is used without subtopics % \primary {topic} % for the beginning of a topic that is used with subtopics % \secondary {subtopic}{pagelist} % for each subtopic. % Define the user-accessible indexing commands % @findex, @vindex, @kindex, @cindex. \def\findex {\fnindex} \def\kindex {\kyindex} \def\cindex {\cpindex} \def\vindex {\vrindex} \def\tindex {\tpindex} \def\pindex {\pgindex} \def\cindexsub {\begingroup\obeylines\cindexsub} {\obeylines % \gdef\cindexsub "#1" #2^^M{\endgroup % \dosubind{cp}{#2}{#1}}} % Define the macros used in formatting output of the sorted index material. % @printindex causes a particular index (the ??s file) to get printed. % It does not print any chapter heading (usually an @unnumbered). % \parseargdef\printindex{\begingroup \dobreak \chapheadingskip{10000}% % \smallfonts \rm \tolerance = 9500 \everypar = {}% don't want the \kern\-parindent from indentation suppression. % % See if the index file exists and is nonempty. % Change catcode of @ here so that if the index file contains % \initial {@} % as its first line, TeX doesn't complain about mismatched braces % (because it thinks @} is a control sequence). \catcode`\@ = 11 \openin 1 \jobname.#1s \ifeof 1 % \enddoublecolumns gets confused if there is no text in the index, % and it loses the chapter title and the aux file entries for the % index. The easiest way to prevent this problem is to make sure % there is some text. \putwordIndexNonexistent \else % % If the index file exists but is empty, then \openin leaves \ifeof % false. We have to make TeX try to read something from the file, so % it can discover if there is anything in it. \read 1 to \temp \ifeof 1 \putwordIndexIsEmpty \else % Index files are almost Texinfo source, but we use \ as the escape % character. It would be better to use @, but that's too big a change % to make right now. \def\indexbackslash{\backslashcurfont}% \catcode`\\ = 0 \escapechar = `\\ \begindoublecolumns \input \jobname.#1s \enddoublecolumns \fi \fi \closein 1 \endgroup} % These macros are used by the sorted index file itself. % Change them to control the appearance of the index. \def\initial#1{{% % Some minor font changes for the special characters. \let\tentt=\sectt \let\tt=\sectt \let\sf=\sectt % % Remove any glue we may have, we'll be inserting our own. \removelastskip % % We like breaks before the index initials, so insert a bonus. \nobreak \vskip 0pt plus 3\baselineskip \penalty 0 \vskip 0pt plus -3\baselineskip % % Typeset the initial. Making this add up to a whole number of % baselineskips increases the chance of the dots lining up from column % to column. It still won't often be perfect, because of the stretch % we need before each entry, but it's better. % % No shrink because it confuses \balancecolumns. \vskip 1.67\baselineskip plus .5\baselineskip \leftline{\secbf #1}% % Do our best not to break after the initial. \nobreak \vskip .33\baselineskip plus .1\baselineskip }} % \entry typesets a paragraph consisting of the text (#1), dot leaders, and % then page number (#2) flushed to the right margin. It is used for index % and table of contents entries. The paragraph is indented by \leftskip. 
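% As a hedged illustration (the sample lines and page numbers are not from
% the original file), the sorted index file read back by @printindex
% contains calls such as
%   \initial {M}
%   \entry {makefiles}{12, 14}
%   \entry {makeinfo}{27}
% which use the \initial macro above and the \entry macro discussed below;
% each \entry sets one paragraph of entry text, dot leaders, and page
% numbers.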
% % A straightforward implementation would start like this: % \def\entry#1#2{... % But this frozes the catcodes in the argument, and can cause problems to % @code, which sets - active. This problem was fixed by a kludge--- % ``-'' was active throughout whole index, but this isn't really right. % % The right solution is to prevent \entry from swallowing the whole text. % --kasal, 21nov03 \def\entry{% \begingroup % % Start a new paragraph if necessary, so our assignments below can't % affect previous text. \par % % Do not fill out the last line with white space. \parfillskip = 0in % % No extra space above this paragraph. \parskip = 0in % % Do not prefer a separate line ending with a hyphen to fewer lines. \finalhyphendemerits = 0 % % \hangindent is only relevant when the entry text and page number % don't both fit on one line. In that case, bob suggests starting the % dots pretty far over on the line. Unfortunately, a large % indentation looks wrong when the entry text itself is broken across % lines. So we use a small indentation and put up with long leaders. % % \hangafter is reset to 1 (which is the value we want) at the start % of each paragraph, so we need not do anything with that. \hangindent = 2em % % When the entry text needs to be broken, just fill out the first line % with blank space. \rightskip = 0pt plus1fil % % A bit of stretch before each entry for the benefit of balancing % columns. \vskip 0pt plus1pt % % Swallow the left brace of the text (first parameter): \afterassignment\doentry \let\temp = } \def\doentry{% \bgroup % Instead of the swallowed brace. \noindent \aftergroup\finishentry % And now comes the text of the entry. } \def\finishentry#1{% % #1 is the page number. % % The following is kludged to not output a line of dots in the index if % there are no page numbers. The next person who breaks this will be % cursed by a Unix daemon. \def\tempa{{\rm }}% \def\tempb{#1}% \edef\tempc{\tempa}% \edef\tempd{\tempb}% \ifx\tempc\tempd \ % \else % % If we must, put the page number on a line of its own, and fill out % this line with blank space. (The \hfil is overwhelmed with the % fill leaders glue in \indexdotfill if the page number does fit.) \hfil\penalty50 \null\nobreak\indexdotfill % Have leaders before the page number. % % The `\ ' here is removed by the implicit \unskip that TeX does as % part of (the primitive) \par. Without it, a spurious underfull % \hbox ensues. \ifpdf \pdfgettoks#1.% \ \the\toksA \else \ #1% \fi \fi \par \endgroup } % Like \dotfill except takes at least 1 em. \def\indexdotfill{\cleaders \hbox{$\mathsurround=0pt \mkern1.5mu ${\it .}$ \mkern1.5mu$}\hskip 1em plus 1fill} \def\primary #1{\line{#1\hfil}} \newskip\secondaryindent \secondaryindent=0.5cm \def\secondary#1#2{{% \parfillskip=0in \parskip=0in \hangindent=1in \hangafter=1 \noindent\hskip\secondaryindent\hbox{#1}\indexdotfill \ifpdf \pdfgettoks#2.\ \the\toksA % The page number ends the paragraph. \else #2 \fi \par }} % Define two-column mode, which we use to typeset indexes. % Adapted from the TeXbook, page 416, which is to say, % the manmac.tex format used to print the TeXbook itself. \catcode`\@=11 \newbox\partialpage \newdimen\doublecolumnhsize \def\begindoublecolumns{\begingroup % ended by \enddoublecolumns % Grab any single-column material above us. 
\output = {% % % Here is a possibility not foreseen in manmac: if we accumulate a % whole lot of material, we might end up calling this \output % routine twice in a row (see the doublecol-lose test, which is % essentially a couple of indexes with @setchapternewpage off). In % that case we just ship out what is in \partialpage with the normal % output routine. Generally, \partialpage will be empty when this % runs and this will be a no-op. See the indexspread.tex test case. \ifvoid\partialpage \else \onepageout{\pagecontents\partialpage}% \fi % \global\setbox\partialpage = \vbox{% % Unvbox the main output page. \unvbox\PAGE \kern-\topskip \kern\baselineskip }% }% \eject % run that output routine to set \partialpage % % Use the double-column output routine for subsequent pages. \output = {\doublecolumnout}% % % Change the page size parameters. We could do this once outside this % routine, in each of @smallbook, @afourpaper, and the default 8.5x11 % format, but then we repeat the same computation. Repeating a couple % of assignments once per index is clearly meaningless for the % execution time, so we may as well do it in one place. % % First we halve the line length, less a little for the gutter between % the columns. We compute the gutter based on the line length, so it % changes automatically with the paper format. The magic constant % below is chosen so that the gutter has the same value (well, +-<1pt) % as it did when we hard-coded it. % % We put the result in a separate register, \doublecolumhsize, so we % can restore it in \pagesofar, after \hsize itself has (potentially) % been clobbered. % \doublecolumnhsize = \hsize \advance\doublecolumnhsize by -.04154\hsize \divide\doublecolumnhsize by 2 \hsize = \doublecolumnhsize % % Double the \vsize as well. (We don't need a separate register here, % since nobody clobbers \vsize.) \vsize = 2\vsize } % The double-column output routine for all double-column pages except % the last. % \def\doublecolumnout{% \splittopskip=\topskip \splitmaxdepth=\maxdepth % Get the available space for the double columns -- the normal % (undoubled) page height minus any material left over from the % previous page. \dimen@ = \vsize \divide\dimen@ by 2 \advance\dimen@ by -\ht\partialpage % % box0 will be the left-hand column, box2 the right. \setbox0=\vsplit255 to\dimen@ \setbox2=\vsplit255 to\dimen@ \onepageout\pagesofar \unvbox255 \penalty\outputpenalty } % % Re-output the contents of the output page -- any previous material, % followed by the two boxes we just split, in box0 and box2. \def\pagesofar{% \unvbox\partialpage % \hsize = \doublecolumnhsize \wd0=\hsize \wd2=\hsize \hbox to\pagewidth{\box0\hfil\box2}% } % % All done with double columns. \def\enddoublecolumns{% \output = {% % Split the last of the double-column material. Leave it on the % current page, no automatic page break. \balancecolumns % % If we end up splitting too much material for the current page, % though, there will be another page break right after this \output % invocation ends. Having called \balancecolumns once, we do not % want to call it again. Therefore, reset \output to its normal % definition right away. (We hope \balancecolumns will never be % called on to balance too much material, but if it is, this makes % the output somewhat more palatable.) \global\output = {\onepageout{\pagecontents\PAGE}}% }% \eject \endgroup % started in \begindoublecolumns % % \pagegoal was set to the doubled \vsize above, since we restarted % the current page. 
We're now back to normal single-column % typesetting, so reset \pagegoal to the normal \vsize (after the % \endgroup where \vsize got restored). \pagegoal = \vsize } % % Called at the end of the double column material. \def\balancecolumns{% \setbox0 = \vbox{\unvbox255}% like \box255 but more efficient, see p.120. \dimen@ = \ht0 \advance\dimen@ by \topskip \advance\dimen@ by-\baselineskip \divide\dimen@ by 2 % target to split to %debug\message{final 2-column material height=\the\ht0, target=\the\dimen@.}% \splittopskip = \topskip % Loop until we get a decent breakpoint. {% \vbadness = 10000 \loop \global\setbox3 = \copy0 \global\setbox1 = \vsplit3 to \dimen@ \ifdim\ht3>\dimen@ \global\advance\dimen@ by 1pt \repeat }% %debug\message{split to \the\dimen@, column heights: \the\ht1, \the\ht3.}% \setbox0=\vbox to\dimen@{\unvbox1}% \setbox2=\vbox to\dimen@{\unvbox3}% % \pagesofar } \catcode`\@ = \other \message{sectioning,} % Chapters, sections, etc. % \unnumberedno is an oxymoron, of course. But we count the unnumbered % sections so that we can refer to them unambiguously in the pdf % outlines by their "section number". We avoid collisions with chapter % numbers by starting them at 10000. (If a document ever has 10000 % chapters, we're in trouble anyway, I'm sure.) \newcount\unnumberedno \unnumberedno = 10000 \newcount\chapno \newcount\secno \secno=0 \newcount\subsecno \subsecno=0 \newcount\subsubsecno \subsubsecno=0 % This counter is funny since it counts through charcodes of letters A, B, ... \newcount\appendixno \appendixno = `\@ % % \def\appendixletter{\char\the\appendixno} % We do the following ugly conditional instead of the above simple % construct for the sake of pdftex, which needs the actual % letter in the expansion, not just typeset. % \def\appendixletter{% \ifnum\appendixno=`A A% \else\ifnum\appendixno=`B B% \else\ifnum\appendixno=`C C% \else\ifnum\appendixno=`D D% \else\ifnum\appendixno=`E E% \else\ifnum\appendixno=`F F% \else\ifnum\appendixno=`G G% \else\ifnum\appendixno=`H H% \else\ifnum\appendixno=`I I% \else\ifnum\appendixno=`J J% \else\ifnum\appendixno=`K K% \else\ifnum\appendixno=`L L% \else\ifnum\appendixno=`M M% \else\ifnum\appendixno=`N N% \else\ifnum\appendixno=`O O% \else\ifnum\appendixno=`P P% \else\ifnum\appendixno=`Q Q% \else\ifnum\appendixno=`R R% \else\ifnum\appendixno=`S S% \else\ifnum\appendixno=`T T% \else\ifnum\appendixno=`U U% \else\ifnum\appendixno=`V V% \else\ifnum\appendixno=`W W% \else\ifnum\appendixno=`X X% \else\ifnum\appendixno=`Y Y% \else\ifnum\appendixno=`Z Z% % The \the is necessary, despite appearances, because \appendixletter is % expanded while writing the .toc file. \char\appendixno is not % expandable, thus it is written literally, thus all appendixes come out % with the same letter (or @) in the toc without it. \else\char\the\appendixno \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi \fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi\fi} % Each @chapter defines this as the name of the chapter. % page headings and footings can use it. @section does likewise. % However, they are not reliable, because we don't use marks. \def\thischapter{} \def\thissection{} \newcount\absseclevel % used to calculate proper heading level \newcount\secbase\secbase=0 % @raisesections/@lowersections modify this count % @raisesections: treat @section as chapter, @subsection as section, etc. \def\raisesections{\global\advance\secbase by -1} \let\up=\raisesections % original BFox name % @lowersections: treat @chapter as section, @section as subsection, etc. 
\def\lowersections{\global\advance\secbase by 1} \let\down=\lowersections % original BFox name % we only have subsub. \chardef\maxseclevel = 3 % % A numbered section within an unnumbered changes to unnumbered too. % To achive this, remember the "biggest" unnum. sec. we are currently in: \chardef\unmlevel = \maxseclevel % % Trace whether the current chapter is an appendix or not: % \chapheadtype is "N" or "A", unnumbered chapters are ignored. \def\chapheadtype{N} % Choose a heading macro % #1 is heading type % #2 is heading level % #3 is text for heading \def\genhead#1#2#3{% % Compute the abs. sec. level: \absseclevel=#2 \advance\absseclevel by \secbase % Make sure \absseclevel doesn't fall outside the range: \ifnum \absseclevel < 0 \absseclevel = 0 \else \ifnum \absseclevel > 3 \absseclevel = 3 \fi \fi % The heading type: \def\headtype{#1}% \if \headtype U% \ifnum \absseclevel < \unmlevel \chardef\unmlevel = \absseclevel \fi \else % Check for appendix sections: \ifnum \absseclevel = 0 \edef\chapheadtype{\headtype}% \else \if \headtype A\if \chapheadtype N% \errmessage{@appendix... within a non-appendix chapter}% \fi\fi \fi % Check for numbered within unnumbered: \ifnum \absseclevel > \unmlevel \def\headtype{U}% \else \chardef\unmlevel = 3 \fi \fi % Now print the heading: \if \headtype U% \ifcase\absseclevel \unnumberedzzz{#3}% \or \unnumberedseczzz{#3}% \or \unnumberedsubseczzz{#3}% \or \unnumberedsubsubseczzz{#3}% \fi \else \if \headtype A% \ifcase\absseclevel \appendixzzz{#3}% \or \appendixsectionzzz{#3}% \or \appendixsubseczzz{#3}% \or \appendixsubsubseczzz{#3}% \fi \else \ifcase\absseclevel \chapterzzz{#3}% \or \seczzz{#3}% \or \numberedsubseczzz{#3}% \or \numberedsubsubseczzz{#3}% \fi \fi \fi \suppressfirstparagraphindent } % an interface: \def\numhead{\genhead N} \def\apphead{\genhead A} \def\unnmhead{\genhead U} % @chapter, @appendix, @unnumbered. Increment top-level counter, reset % all lower-level sectioning counters to zero. % % Also set \chaplevelprefix, which we prepend to @float sequence numbers % (e.g., figures), q.v. By default (before any chapter), that is empty. \let\chaplevelprefix = \empty % \outer\parseargdef\chapter{\numhead0{#1}} % normally numhead0 calls chapterzzz \def\chapterzzz#1{% % section resetting is \global in case the chapter is in a group, such % as an @include file. \global\secno=0 \global\subsecno=0 \global\subsubsecno=0 \global\advance\chapno by 1 % % Used for \float. \gdef\chaplevelprefix{\the\chapno.}% \resetallfloatnos % \message{\putwordChapter\space \the\chapno}% % % Write the actual heading. \chapmacro{#1}{Ynumbered}{\the\chapno}% % % So @section and the like are numbered underneath this chapter. 
\global\let\section = \numberedsec \global\let\subsection = \numberedsubsec \global\let\subsubsection = \numberedsubsubsec } \outer\parseargdef\appendix{\apphead0{#1}} % normally apphead0 calls appendixzzz \def\appendixzzz#1{% \global\secno=0 \global\subsecno=0 \global\subsubsecno=0 \global\advance\appendixno by 1 \gdef\chaplevelprefix{\appendixletter.}% \resetallfloatnos % \def\appendixnum{\putwordAppendix\space \appendixletter}% \message{\appendixnum}% % \chapmacro{#1}{Yappendix}{\appendixletter}% % \global\let\section = \appendixsec \global\let\subsection = \appendixsubsec \global\let\subsubsection = \appendixsubsubsec } \outer\parseargdef\unnumbered{\unnmhead0{#1}} % normally unnmhead0 calls unnumberedzzz \def\unnumberedzzz#1{% \global\secno=0 \global\subsecno=0 \global\subsubsecno=0 \global\advance\unnumberedno by 1 % % Since an unnumbered has no number, no prefix for figures. \global\let\chaplevelprefix = \empty \resetallfloatnos % % This used to be simply \message{#1}, but TeX fully expands the % argument to \message. Therefore, if #1 contained @-commands, TeX % expanded them. For example, in `@unnumbered The @cite{Book}', TeX % expanded @cite (which turns out to cause errors because \cite is meant % to be executed, not expanded). % % Anyway, we don't want the fully-expanded definition of @cite to appear % as a result of the \message, we just want `@cite' itself. We use % \the to achieve this: TeX expands \the only once, % simply yielding the contents of . (We also do this for % the toc entries.) \toks0 = {#1}% \message{(\the\toks0)}% % \chapmacro{#1}{Ynothing}{\the\unnumberedno}% % \global\let\section = \unnumberedsec \global\let\subsection = \unnumberedsubsec \global\let\subsubsection = \unnumberedsubsubsec } % @centerchap is like @unnumbered, but the heading is centered. \outer\parseargdef\centerchap{% % Well, we could do the following in a group, but that would break % an assumption that \chapmacro is called at the outermost level. % Thus we are safer this way: --kasal, 24feb04 \let\centerparametersmaybe = \centerparameters \unnmhead0{#1}% \let\centerparametersmaybe = \relax } % @top is like @unnumbered. \let\top\unnumbered % Sections. \outer\parseargdef\numberedsec{\numhead1{#1}} % normally calls seczzz \def\seczzz#1{% \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1 \sectionheading{#1}{sec}{Ynumbered}{\the\chapno.\the\secno}% } \outer\parseargdef\appendixsection{\apphead1{#1}} % normally calls appendixsectionzzz \def\appendixsectionzzz#1{% \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1 \sectionheading{#1}{sec}{Yappendix}{\appendixletter.\the\secno}% } \let\appendixsec\appendixsection \outer\parseargdef\unnumberedsec{\unnmhead1{#1}} % normally calls unnumberedseczzz \def\unnumberedseczzz#1{% \global\subsecno=0 \global\subsubsecno=0 \global\advance\secno by 1 \sectionheading{#1}{sec}{Ynothing}{\the\unnumberedno.\the\secno}% } % Subsections. 
\outer\parseargdef\numberedsubsec{\numhead2{#1}} % normally calls numberedsubseczzz \def\numberedsubseczzz#1{% \global\subsubsecno=0 \global\advance\subsecno by 1 \sectionheading{#1}{subsec}{Ynumbered}{\the\chapno.\the\secno.\the\subsecno}% } \outer\parseargdef\appendixsubsec{\apphead2{#1}} % normally calls appendixsubseczzz \def\appendixsubseczzz#1{% \global\subsubsecno=0 \global\advance\subsecno by 1 \sectionheading{#1}{subsec}{Yappendix}% {\appendixletter.\the\secno.\the\subsecno}% } \outer\parseargdef\unnumberedsubsec{\unnmhead2{#1}} %normally calls unnumberedsubseczzz \def\unnumberedsubseczzz#1{% \global\subsubsecno=0 \global\advance\subsecno by 1 \sectionheading{#1}{subsec}{Ynothing}% {\the\unnumberedno.\the\secno.\the\subsecno}% } % Subsubsections. \outer\parseargdef\numberedsubsubsec{\numhead3{#1}} % normally numberedsubsubseczzz \def\numberedsubsubseczzz#1{% \global\advance\subsubsecno by 1 \sectionheading{#1}{subsubsec}{Ynumbered}% {\the\chapno.\the\secno.\the\subsecno.\the\subsubsecno}% } \outer\parseargdef\appendixsubsubsec{\apphead3{#1}} % normally appendixsubsubseczzz \def\appendixsubsubseczzz#1{% \global\advance\subsubsecno by 1 \sectionheading{#1}{subsubsec}{Yappendix}% {\appendixletter.\the\secno.\the\subsecno.\the\subsubsecno}% } \outer\parseargdef\unnumberedsubsubsec{\unnmhead3{#1}} %normally unnumberedsubsubseczzz \def\unnumberedsubsubseczzz#1{% \global\advance\subsubsecno by 1 \sectionheading{#1}{subsubsec}{Ynothing}% {\the\unnumberedno.\the\secno.\the\subsecno.\the\subsubsecno}% } % These macros control what the section commands do, according % to what kind of chapter we are in (ordinary, appendix, or unnumbered). % Define them by default for a numbered chapter. \let\section = \numberedsec \let\subsection = \numberedsubsec \let\subsubsection = \numberedsubsubsec % Define @majorheading, @heading and @subheading % NOTE on use of \vbox for chapter headings, section headings, and such: % 1) We use \vbox rather than the earlier \line to permit % overlong headings to fold. % 2) \hyphenpenalty is set to 10000 because hyphenation in a % heading is obnoxious; this forbids it. % 3) Likewise, headings look best if no \parindent is used, and % if justification is not attempted. Hence \raggedright. \def\majorheading{% {\advance\chapheadingskip by 10pt \chapbreak }% \parsearg\chapheadingzzz } \def\chapheading{\chapbreak \parsearg\chapheadingzzz} \def\chapheadingzzz#1{% {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 \parindent=0pt\raggedright \rm #1\hfill}}% \bigskip \par\penalty 200\relax \suppressfirstparagraphindent } % @heading, @subheading, @subsubheading. \parseargdef\heading{\sectionheading{#1}{sec}{Yomitfromtoc}{} \suppressfirstparagraphindent} \parseargdef\subheading{\sectionheading{#1}{subsec}{Yomitfromtoc}{} \suppressfirstparagraphindent} \parseargdef\subsubheading{\sectionheading{#1}{subsubsec}{Yomitfromtoc}{} \suppressfirstparagraphindent} % These macros generate a chapter, section, etc. heading only % (including whitespace, linebreaking, etc. around it), % given all the information in convenient, parsed form. 
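% A hedged usage sketch (these example lines are not from the original
% file): the machinery above is driven from Texinfo input such as
%
%   @lowersections
%   @chapter Included Material
%   @raisesections
%
%   @heading Local Notes
%
% The @chapter between @lowersections/@raisesections is typeset one level
% down, as a section (\secbase shifts \absseclevel in \genhead), while
% @heading goes through \sectionheading with the Yomitfromtoc type, so it
% is printed but gets no number and no toc entry.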
%%% Args are the skip and penalty (usually negative) \def\dobreak#1#2{\par\ifdim\lastskip<#1\removelastskip\penalty#2\vskip#1\fi} %%% Define plain chapter starts, and page on/off switching for it % Parameter controlling skip before chapter headings (if needed) \newskip\chapheadingskip \def\chapbreak{\dobreak \chapheadingskip {-4000}} \def\chappager{\par\vfill\supereject} \def\chapoddpage{\chappager \ifodd\pageno \else \hbox to 0pt{} \chappager\fi} \def\setchapternewpage #1 {\csname CHAPPAG#1\endcsname} \def\CHAPPAGoff{% \global\let\contentsalignmacro = \chappager \global\let\pchapsepmacro=\chapbreak \global\let\pagealignmacro=\chappager} \def\CHAPPAGon{% \global\let\contentsalignmacro = \chappager \global\let\pchapsepmacro=\chappager \global\let\pagealignmacro=\chappager \global\def\HEADINGSon{\HEADINGSsingle}} \def\CHAPPAGodd{% \global\let\contentsalignmacro = \chapoddpage \global\let\pchapsepmacro=\chapoddpage \global\let\pagealignmacro=\chapoddpage \global\def\HEADINGSon{\HEADINGSdouble}} \CHAPPAGon % Chapter opening. % % #1 is the text, #2 is the section type (Ynumbered, Ynothing, % Yappendix, Yomitfromtoc), #3 the chapter number. % % To test against our argument. \def\Ynothingkeyword{Ynothing} \def\Yomitfromtockeyword{Yomitfromtoc} \def\Yappendixkeyword{Yappendix} % \def\chapmacro#1#2#3{% \pchapsepmacro {% \chapfonts \rm % % Have to define \thissection before calling \donoderef, because the % xref code eventually uses it. On the other hand, it has to be called % after \pchapsepmacro, or the headline will change too soon. \gdef\thissection{#1}% \gdef\thischaptername{#1}% % % Only insert the separating space if we have a chapter/appendix % number, and don't print the unnumbered ``number''. \def\temptype{#2}% \ifx\temptype\Ynothingkeyword \setbox0 = \hbox{}% \def\toctype{unnchap}% \gdef\thischapter{#1}% \else\ifx\temptype\Yomitfromtockeyword \setbox0 = \hbox{}% contents like unnumbered, but no toc entry \def\toctype{omit}% \gdef\thischapter{}% \else\ifx\temptype\Yappendixkeyword \setbox0 = \hbox{\putwordAppendix{} #3\enspace}% \def\toctype{app}% % We don't substitute the actual chapter name into \thischapter % because we don't want its macros evaluated now. And we don't % use \thissection because that changes with each section. % \xdef\thischapter{\putwordAppendix{} \appendixletter: \noexpand\thischaptername}% \else \setbox0 = \hbox{#3\enspace}% \def\toctype{numchap}% \xdef\thischapter{\putwordChapter{} \the\chapno: \noexpand\thischaptername}% \fi\fi\fi % % Write the toc entry for this chapter. Must come before the % \donoderef, because we include the current node name in the toc % entry, and \donoderef resets it to empty. \writetocentry{\toctype}{#1}{#3}% % % For pdftex, we have to write out the node definition (aka, make % the pdfdest) after any page break, but before the actual text has % been typeset. If the destination for the pdf outline is after the % text, then jumping from the outline may wind up with the text not % being visible, for instance under high magnification. \donoderef{#2}% % % Typeset the actual heading. \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright \hangindent=\wd0 \centerparametersmaybe \unhbox0 #1\par}% }% \nobreak\bigskip % no page break after a chapter title \nobreak } % @centerchap -- centered and unnumbered. 
\let\centerparametersmaybe = \relax \def\centerparameters{% \advance\rightskip by 3\rightskip \leftskip = \rightskip \parfillskip = 0pt } % I don't think this chapter style is supported any more, so I'm not % updating it with the new noderef stuff. We'll see. --karl, 11aug03. % \def\setchapterstyle #1 {\csname CHAPF#1\endcsname} % \def\unnchfopen #1{% \chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 \parindent=0pt\raggedright \rm #1\hfill}}\bigskip \par\nobreak } \def\chfopen #1#2{\chapoddpage {\chapfonts \vbox to 3in{\vfil \hbox to\hsize{\hfil #2} \hbox to\hsize{\hfil #1} \vfil}}% \par\penalty 5000 % } \def\centerchfopen #1{% \chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 \parindent=0pt \hfill {\rm #1}\hfill}}\bigskip \par\nobreak } \def\CHAPFopen{% \global\let\chapmacro=\chfopen \global\let\centerchapmacro=\centerchfopen} % Section titles. These macros combine the section number parts and % call the generic \sectionheading to do the printing. % \newskip\secheadingskip \def\secheadingbreak{\dobreak \secheadingskip{-1000}} % Subsection titles. \newskip\subsecheadingskip \def\subsecheadingbreak{\dobreak \subsecheadingskip{-500}} % Subsubsection titles. \def\subsubsecheadingskip{\subsecheadingskip} \def\subsubsecheadingbreak{\subsecheadingbreak} % Print any size, any type, section title. % % #1 is the text, #2 is the section level (sec/subsec/subsubsec), #3 is % the section type for xrefs (Ynumbered, Ynothing, Yappendix), #4 is the % section number. % \def\sectionheading#1#2#3#4{% {% % Switch to the right set of fonts. \csname #2fonts\endcsname \rm % % Insert space above the heading. \csname #2headingbreak\endcsname % % Only insert the space after the number if we have a section number. \def\sectionlevel{#2}% \def\temptype{#3}% % \ifx\temptype\Ynothingkeyword \setbox0 = \hbox{}% \def\toctype{unn}% \gdef\thissection{#1}% \else\ifx\temptype\Yomitfromtockeyword % for @headings -- no section number, don't include in toc, % and don't redefine \thissection. \setbox0 = \hbox{}% \def\toctype{omit}% \let\sectionlevel=\empty \else\ifx\temptype\Yappendixkeyword \setbox0 = \hbox{#4\enspace}% \def\toctype{app}% \gdef\thissection{#1}% \else \setbox0 = \hbox{#4\enspace}% \def\toctype{num}% \gdef\thissection{#1}% \fi\fi\fi % % Write the toc entry (before \donoderef). See comments in \chfplain. \writetocentry{\toctype\sectionlevel}{#1}{#4}% % % Write the node reference (= pdf destination for pdftex). % Again, see comments in \chfplain. \donoderef{#3}% % % Output the actual section heading. \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright \hangindent=\wd0 % zero if no section number \unhbox0 #1}% }% % Add extra space after the heading -- half of whatever came above it. % Don't allow stretch, though. \kern .5 \csname #2headingskip\endcsname % % Do not let the kern be a potential breakpoint, as it would be if it % was followed by glue. \nobreak % % We'll almost certainly start a paragraph next, so don't let that % glue accumulate. (Not a breakpoint because it's preceded by a % discardable item.) \vskip-\parskip % % This is purely so the last item on the list is a known \penalty > % 10000. This is so \startdefun can avoid allowing breakpoints after % section headings. Otherwise, it would insert a valid breakpoint between: % % @section sec-whatever % @deffn def-whatever \penalty 10001 } \message{toc,} % Table of contents. \newwrite\tocfile % Write an entry to the toc file, opening it if necessary. % Called from @chapter, etc. 
% % Example usage: \writetocentry{sec}{Section Name}{\the\chapno.\the\secno} % We append the current node name (if any) and page number as additional % arguments for the \{chap,sec,...}entry macros which will eventually % read this. The node name is used in the pdf outlines as the % destination to jump to. % % We open the .toc file for writing here instead of at @setfilename (or % any other fixed time) so that @contents can be anywhere in the document. % But if #1 is `omit', then we don't do anything. This is used for the % table of contents chapter openings themselves. % \newif\iftocfileopened \def\omitkeyword{omit}% % \def\writetocentry#1#2#3{% \edef\writetoctype{#1}% \ifx\writetoctype\omitkeyword \else \iftocfileopened\else \immediate\openout\tocfile = \jobname.toc \global\tocfileopenedtrue \fi % \iflinks {\atdummies \edef\temp{% \write\tocfile{@#1entry{#2}{#3}{\lastnode}{\noexpand\folio}}}% \temp } \fi \fi % % Tell \shipout to create a pdf destination on each page, if we're % writing pdf. These are used in the table of contents. We can't % just write one on every page because the title pages are numbered % 1 and 2 (the page numbers aren't printed), and so are the first % two pages of the document. Thus, we'd have two destinations named % `1', and two named `2'. \ifpdf \global\pdfmakepagedesttrue \fi } % These characters do not print properly in the Computer Modern roman % fonts, so we must take special care. This is more or less redundant % with the Texinfo input format setup at the end of this file. % \def\activecatcodes{% \catcode`\"=\active \catcode`\$=\active \catcode`\<=\active \catcode`\>=\active \catcode`\\=\active \catcode`\^=\active \catcode`\_=\active \catcode`\|=\active \catcode`\~=\active } % Read the toc file, which is essentially Texinfo input. \def\readtocfile{% \setupdatafile \activecatcodes \input \jobname.toc } \newskip\contentsrightmargin \contentsrightmargin=1in \newcount\savepageno \newcount\lastnegativepageno \lastnegativepageno = -1 % Prepare to read what we've written to \tocfile. % \def\startcontents#1{% % If @setchapternewpage on, and @headings double, the contents should % start on an odd page, unlike chapters. Thus, we maintain % \contentsalignmacro in parallel with \pagealignmacro. % From: Torbjorn Granlund \contentsalignmacro \immediate\closeout\tocfile % % Don't need to put `Contents' or `Short Contents' in the headline. % It is abundantly clear what they are. \def\thischapter{}% \chapmacro{#1}{Yomitfromtoc}{}% % \savepageno = \pageno \begingroup % Set up to handle contents files properly. \raggedbottom % Worry more about breakpoints than the bottom. \advance\hsize by -\contentsrightmargin % Don't use the full line length. % % Roman numerals for page numbers. \ifnum \pageno>0 \global\pageno = \lastnegativepageno \fi } % Normal (long) toc. \def\contents{% \startcontents{\putwordTOC}% \openin 1 \jobname.toc \ifeof 1 \else \readtocfile \fi \vfill \eject \contentsalignmacro % in case @setchapternewpage odd is in effect \ifeof 1 \else \pdfmakeoutlines \fi \closein 1 \endgroup \lastnegativepageno = \pageno \global\pageno = \savepageno } % And just the chapters. \def\summarycontents{% \startcontents{\putwordShortTOC}% % \let\numchapentry = \shortchapentry \let\appentry = \shortchapentry \let\unnchapentry = \shortunnchapentry % We want a true roman here for the page numbers. \secfonts \let\rm=\shortcontrm \let\bf=\shortcontbf \let\sl=\shortcontsl \let\tt=\shortconttt \rm \hyphenpenalty = 10000 \advance\baselineskip by 1pt % Open it up a little. 
\def\numsecentry##1##2##3##4{} \let\appsecentry = \numsecentry \let\unnsecentry = \numsecentry \let\numsubsecentry = \numsecentry \let\appsubsecentry = \numsecentry \let\unnsubsecentry = \numsecentry \let\numsubsubsecentry = \numsecentry \let\appsubsubsecentry = \numsecentry \let\unnsubsubsecentry = \numsecentry \openin 1 \jobname.toc \ifeof 1 \else \readtocfile \fi \closein 1 \vfill \eject \contentsalignmacro % in case @setchapternewpage odd is in effect \endgroup \lastnegativepageno = \pageno \global\pageno = \savepageno } \let\shortcontents = \summarycontents % Typeset the label for a chapter or appendix for the short contents. % The arg is, e.g., `A' for an appendix, or `3' for a chapter. % \def\shortchaplabel#1{% % This space should be enough, since a single number is .5em, and the % widest letter (M) is 1em, at least in the Computer Modern fonts. % But use \hss just in case. % (This space doesn't include the extra space that gets added after % the label; that gets put in by \shortchapentry above.) % % We'd like to right-justify chapter numbers, but that looks strange % with appendix letters. And right-justifying numbers and % left-justifying letters looks strange when there is less than 10 % chapters. Have to read the whole toc once to know how many chapters % there are before deciding ... \hbox to 1em{#1\hss}% } % These macros generate individual entries in the table of contents. % The first argument is the chapter or section name. % The last argument is the page number. % The arguments in between are the chapter number, section number, ... % Chapters, in the main contents. \def\numchapentry#1#2#3#4{\dochapentry{#2\labelspace#1}{#4}} % % Chapters, in the short toc. % See comments in \dochapentry re vbox and related settings. \def\shortchapentry#1#2#3#4{% \tocentry{\shortchaplabel{#2}\labelspace #1}{\doshortpageno\bgroup#4\egroup}% } % Appendices, in the main contents. % Need the word Appendix, and a fixed-size box. % \def\appendixbox#1{% % We use M since it's probably the widest letter. \setbox0 = \hbox{\putwordAppendix{} M}% \hbox to \wd0{\putwordAppendix{} #1\hss}} % \def\appentry#1#2#3#4{\dochapentry{\appendixbox{#2}\labelspace#1}{#4}} % Unnumbered chapters. \def\unnchapentry#1#2#3#4{\dochapentry{#1}{#4}} \def\shortunnchapentry#1#2#3#4{\tocentry{#1}{\doshortpageno\bgroup#4\egroup}} % Sections. \def\numsecentry#1#2#3#4{\dosecentry{#2\labelspace#1}{#4}} \let\appsecentry=\numsecentry \def\unnsecentry#1#2#3#4{\dosecentry{#1}{#4}} % Subsections. \def\numsubsecentry#1#2#3#4{\dosubsecentry{#2\labelspace#1}{#4}} \let\appsubsecentry=\numsubsecentry \def\unnsubsecentry#1#2#3#4{\dosubsecentry{#1}{#4}} % And subsubsections. \def\numsubsubsecentry#1#2#3#4{\dosubsubsecentry{#2\labelspace#1}{#4}} \let\appsubsubsecentry=\numsubsubsecentry \def\unnsubsubsecentry#1#2#3#4{\dosubsubsecentry{#1}{#4}} % This parameter controls the indentation of the various levels. % Same as \defaultparindent. \newdimen\tocindent \tocindent = 15pt % Now for the actual typesetting. In all these, #1 is the text and #2 is the % page number. % % If the toc has to be broken over pages, we want it to be at chapters % if at all possible; hence the \penalty. 
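%
% Illustrative sketch (comment only, not from the upstream texinfo.tex;
% the chapter name, node name, and page number below are made up):
% a .toc line written by \writetocentry, such as
%   @numchapentry{Overview}{1}{Top}{7}
% is read back with @ as the escape character and expands to
%   \dochapentry{1\labelspace Overview}{7}
% -- chapter number, title, and page number; the node-name argument is
% only used for the pdf outlines.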
\def\dochapentry#1#2{% \penalty-300 \vskip1\baselineskip plus.33\baselineskip minus.25\baselineskip \begingroup \chapentryfonts \tocentry{#1}{\dopageno\bgroup#2\egroup}% \endgroup \nobreak\vskip .25\baselineskip plus.1\baselineskip } \def\dosecentry#1#2{\begingroup \secentryfonts \leftskip=\tocindent \tocentry{#1}{\dopageno\bgroup#2\egroup}% \endgroup} \def\dosubsecentry#1#2{\begingroup \subsecentryfonts \leftskip=2\tocindent \tocentry{#1}{\dopageno\bgroup#2\egroup}% \endgroup} \def\dosubsubsecentry#1#2{\begingroup \subsubsecentryfonts \leftskip=3\tocindent \tocentry{#1}{\dopageno\bgroup#2\egroup}% \endgroup} % We use the same \entry macro as for the index entries. \let\tocentry = \entry % Space between chapter (or whatever) number and the title. \def\labelspace{\hskip1em \relax} \def\dopageno#1{{\rm #1}} \def\doshortpageno#1{{\rm #1}} \def\chapentryfonts{\secfonts \rm} \def\secentryfonts{\textfonts} \def\subsecentryfonts{\textfonts} \def\subsubsecentryfonts{\textfonts} \message{environments,} % @foo ... @end foo. % @point{}, @result{}, @expansion{}, @print{}, @equiv{}. % % Since these characters are used in examples, it should be an even number of % \tt widths. Each \tt character is 1en, so two makes it 1em. % \def\point{$\star$} \def\result{\leavevmode\raise.15ex\hbox to 1em{\hfil$\Rightarrow$\hfil}} \def\expansion{\leavevmode\raise.1ex\hbox to 1em{\hfil$\mapsto$\hfil}} \def\print{\leavevmode\lower.1ex\hbox to 1em{\hfil$\dashv$\hfil}} \def\equiv{\leavevmode\lower.1ex\hbox to 1em{\hfil$\ptexequiv$\hfil}} % The @error{} command. % Adapted from the TeXbook's \boxit. % \newbox\errorbox % {\tentt \global\dimen0 = 3em}% Width of the box. \dimen2 = .55pt % Thickness of rules % The text. (`r' is open on the right, `e' somewhat less so on the left.) \setbox0 = \hbox{\kern-.75pt \tensf error\kern-1.5pt} % \setbox\errorbox=\hbox to \dimen0{\hfil \hsize = \dimen0 \advance\hsize by -5.8pt % Space to left+right. \advance\hsize by -2\dimen2 % Rules. \vbox{% \hrule height\dimen2 \hbox{\vrule width\dimen2 \kern3pt % Space to left of text. \vtop{\kern2.4pt \box0 \kern2.4pt}% Space above/below. \kern3pt\vrule width\dimen2}% Space to right. \hrule height\dimen2} \hfil} % \def\error{\leavevmode\lower.7ex\copy\errorbox} % @tex ... @end tex escapes into raw Tex temporarily. % One exception: @ is still an escape character, so that @end tex works. % But \@ or @@ will get a plain tex @ character. \envdef\tex{% \catcode `\\=0 \catcode `\{=1 \catcode `\}=2 \catcode `\$=3 \catcode `\&=4 \catcode `\#=6 \catcode `\^=7 \catcode `\_=8 \catcode `\~=\active \let~=\tie \catcode `\%=14 \catcode `\+=\other \catcode `\"=\other \catcode `\|=\other \catcode `\<=\other \catcode `\>=\other \escapechar=`\\ % \let\b=\ptexb \let\bullet=\ptexbullet \let\c=\ptexc \let\,=\ptexcomma \let\.=\ptexdot \let\dots=\ptexdots \let\equiv=\ptexequiv \let\!=\ptexexclam \let\i=\ptexi \let\indent=\ptexindent \let\noindent=\ptexnoindent \let\{=\ptexlbrace \let\+=\tabalign \let\}=\ptexrbrace \let\/=\ptexslash \let\*=\ptexstar \let\t=\ptext \let\frenchspacing=\plainfrenchspacing % \def\endldots{\mathinner{\ldots\ldots\ldots\ldots}}% \def\enddots{\relax\ifmmode\endldots\else$\mathsurround=0pt \endldots\,$\fi}% \def\@{@}% } % There is no need to define \Etex. % Define @lisp ... @end lisp. % @lisp environment forms a group so it can rebind things, % including the definition of @end lisp (which normally is erroneous). % Amount to narrow the margins by for @lisp. 
\newskip\lispnarrowing \lispnarrowing=0.4in % This is the definition that ^^M gets inside @lisp, @example, and other % such environments. \null is better than a space, since it doesn't % have any width. \def\lisppar{\null\endgraf} % This space is always present above and below environments. \newskip\envskipamount \envskipamount = 0pt % Make spacing and below environment symmetrical. We use \parskip here % to help in doing that, since in @example-like environments \parskip % is reset to zero; thus the \afterenvbreak inserts no space -- but the % start of the next paragraph will insert \parskip. % \def\aboveenvbreak{{% % =10000 instead of <10000 because of a special case in \itemzzz and % \sectionheading, q.v. \ifnum \lastpenalty=10000 \else \advance\envskipamount by \parskip \endgraf \ifdim\lastskip<\envskipamount \removelastskip % it's not a good place to break if the last penalty was \nobreak % or better ... \ifnum\lastpenalty<10000 \penalty-50 \fi \vskip\envskipamount \fi \fi }} \let\afterenvbreak = \aboveenvbreak % \nonarrowing is a flag. If "set", @lisp etc don't narrow margins; it will % also clear it, so that its embedded environments do the narrowing again. \let\nonarrowing=\relax % @cartouche ... @end cartouche: draw rectangle w/rounded corners around % environment contents. \font\circle=lcircle10 \newdimen\circthick \newdimen\cartouter\newdimen\cartinner \newskip\normbskip\newskip\normpskip\newskip\normlskip \circthick=\fontdimen8\circle % \def\ctl{{\circle\char'013\hskip -6pt}}% 6pt from pl file: 1/2charwidth \def\ctr{{\hskip 6pt\circle\char'010}} \def\cbl{{\circle\char'012\hskip -6pt}} \def\cbr{{\hskip 6pt\circle\char'011}} \def\carttop{\hbox to \cartouter{\hskip\lskip \ctl\leaders\hrule height\circthick\hfil\ctr \hskip\rskip}} \def\cartbot{\hbox to \cartouter{\hskip\lskip \cbl\leaders\hrule height\circthick\hfil\cbr \hskip\rskip}} % \newskip\lskip\newskip\rskip \envdef\cartouche{% \ifhmode\par\fi % can't be in the midst of a paragraph. \startsavinginserts \lskip=\leftskip \rskip=\rightskip \leftskip=0pt\rightskip=0pt % we want these *outside*. \cartinner=\hsize \advance\cartinner by-\lskip \advance\cartinner by-\rskip \cartouter=\hsize \advance\cartouter by 18.4pt % allow for 3pt kerns on either % side, and for 6pt waste from % each corner char, and rule thickness \normbskip=\baselineskip \normpskip=\parskip \normlskip=\lineskip % Flag to tell @lisp, etc., not to narrow margin. \let\nonarrowing = t% \vbox\bgroup \baselineskip=0pt\parskip=0pt\lineskip=0pt \carttop \hbox\bgroup \hskip\lskip \vrule\kern3pt \vbox\bgroup \kern3pt \hsize=\cartinner \baselineskip=\normbskip \lineskip=\normlskip \parskip=\normpskip \vskip -\parskip \comment % For explanation, see the end of \def\group. } \def\Ecartouche{% \ifhmode\par\fi \kern3pt \egroup \kern3pt\vrule \hskip\rskip \egroup \cartbot \egroup \checkinserts } % This macro is called at the beginning of all the @example variants, % inside a group. \def\nonfillstart{% \aboveenvbreak \hfuzz = 12pt % Don't be fussy \sepspaces % Make spaces be word-separators rather than space tokens. \let\par = \lisppar % don't ignore blank lines \obeylines % each line of input is a line of output \parskip = 0pt \parindent = 0pt \emergencystretch = 0pt % don't try to avoid overfull boxes \ifx\nonarrowing\relax \advance \leftskip by \lispnarrowing \exdentamount=\lispnarrowing \else \let\nonarrowing = \relax \fi \let\exdent=\nofillexdent } % If you want all examples etc. small: @set dispenvsize small. 
% If you want even small examples the full size: @set dispenvsize nosmall. % This affects the following displayed environments: % @example, @display, @format, @lisp % \def\smallword{small} \def\nosmallword{nosmall} \let\SETdispenvsize\relax \def\setnormaldispenv{% \ifx\SETdispenvsize\smallword \smallexamplefonts \rm \fi } \def\setsmalldispenv{% \ifx\SETdispenvsize\nosmallword \else \smallexamplefonts \rm \fi } % We often define two environments, @foo and @smallfoo. % Let's do it by one command: \def\makedispenv #1#2{ \expandafter\envdef\csname#1\endcsname {\setnormaldispenv #2} \expandafter\envdef\csname small#1\endcsname {\setsmalldispenv #2} \expandafter\let\csname E#1\endcsname \afterenvbreak \expandafter\let\csname Esmall#1\endcsname \afterenvbreak } % Define two synonyms: \def\maketwodispenvs #1#2#3{ \makedispenv{#1}{#3} \makedispenv{#2}{#3} } % @lisp: indented, narrowed, typewriter font; @example: same as @lisp. % % @smallexample and @smalllisp: use smaller fonts. % Originally contributed by Pavel@xerox. % \maketwodispenvs {lisp}{example}{% \nonfillstart \tt \let\kbdfont = \kbdexamplefont % Allow @kbd to do something special. \gobble % eat return } % @display/@smalldisplay: same as @lisp except keep current font. % \makedispenv {display}{% \nonfillstart \gobble } % @format/@smallformat: same as @display except don't narrow margins. % \makedispenv{format}{% \let\nonarrowing = t% \nonfillstart \gobble } % @flushleft: same as @format, but doesn't obey \SETdispenvsize. \envdef\flushleft{% \let\nonarrowing = t% \nonfillstart \gobble } \let\Eflushleft = \afterenvbreak % @flushright. % \envdef\flushright{% \let\nonarrowing = t% \nonfillstart \advance\leftskip by 0pt plus 1fill \gobble } \let\Eflushright = \afterenvbreak % @quotation does normal linebreaking (hence we can't use \nonfillstart) % and narrows the margins. We keep \parskip nonzero in general, since % we're doing normal filling. So, when using \aboveenvbreak and % \afterenvbreak, temporarily make \parskip 0. % \envdef\quotation{% {\parskip=0pt \aboveenvbreak}% because \aboveenvbreak inserts \parskip \parindent=0pt % % @cartouche defines \nonarrowing to inhibit narrowing at next level down. \ifx\nonarrowing\relax \advance\leftskip by \lispnarrowing \advance\rightskip by \lispnarrowing \exdentamount = \lispnarrowing \else \let\nonarrowing = \relax \fi \parsearg\quotationlabel } % We have retained a nonzero parskip for the environment, since we're % doing normal filling. % \def\Equotation{% \par \ifx\quotationauthor\undefined\else % indent a bit. \leftline{\kern 2\leftskip \sl ---\quotationauthor}% \fi {\parskip=0pt \afterenvbreak}% } % If we're given an argument, typeset it in bold with a colon after. \def\quotationlabel#1{% \def\temp{#1}% \ifx\temp\empty \else {\bf #1: }% \fi } % LaTeX-like @verbatim...@end verbatim and @verb{...} % If we want to allow any as delimiter, % we need the curly braces so that makeinfo sees the @verb command, eg: % `@verbx...x' would look like the '@verbx' command. --janneke@gnu.org % % [Knuth]: Donald Ervin Knuth, 1996. The TeXbook. % % [Knuth] p.344; only we need to do the other characters Texinfo sets % active too. Otherwise, they get lost as the first character on a % verbatim line. \def\dospecials{% \do\ \do\\\do\{\do\}\do\$\do\&% \do\#\do\^\do\^^K\do\_\do\^^A\do\%\do\~% \do\<\do\>\do\|\do\@\do+\do\"% } % % [Knuth] p. 380 \def\uncatcodespecials{% \def\do##1{\catcode`##1=\other}\dospecials} % % [Knuth] pp. 
380,381,391 % Disable Spanish ligatures ?` and !` of \tt font \begingroup \catcode`\`=\active\gdef`{\relax\lq} \endgroup % % Setup for the @verb command. % % Eight spaces for a tab \begingroup \catcode`\^^I=\active \gdef\tabeightspaces{\catcode`\^^I=\active\def^^I{\ \ \ \ \ \ \ \ }} \endgroup % \def\setupverb{% \tt % easiest (and conventionally used) font for verbatim \def\par{\leavevmode\endgraf}% \catcode`\`=\active \tabeightspaces % Respect line breaks, % print special symbols as themselves, and % make each space count % must do in this order: \obeylines \uncatcodespecials \sepspaces } % Setup for the @verbatim environment % % Real tab expansion \newdimen\tabw \setbox0=\hbox{\tt\space} \tabw=8\wd0 % tab amount % \def\starttabbox{\setbox0=\hbox\bgroup} \begingroup \catcode`\^^I=\active \gdef\tabexpand{% \catcode`\^^I=\active \def^^I{\leavevmode\egroup \dimen0=\wd0 % the width so far, or since the previous tab \divide\dimen0 by\tabw \multiply\dimen0 by\tabw % compute previous multiple of \tabw \advance\dimen0 by\tabw % advance to next multiple of \tabw \wd0=\dimen0 \box0 \starttabbox }% } \endgroup \def\setupverbatim{% \let\nonarrowing = t% \nonfillstart % Easiest (and conventionally used) font for verbatim \tt \def\par{\leavevmode\egroup\box0\endgraf}% \catcode`\`=\active \tabexpand % Respect line breaks, % print special symbols as themselves, and % make each space count % must do in this order: \obeylines \uncatcodespecials \sepspaces \everypar{\starttabbox}% } % Do the @verb magic: verbatim text is quoted by unique % delimiter characters. Before first delimiter expect a % right brace, after last delimiter expect closing brace: % % \def\doverb'{'#1'}'{#1} % % [Knuth] p. 382; only eat outer {} \begingroup \catcode`[=1\catcode`]=2\catcode`\{=\other\catcode`\}=\other \gdef\doverb{#1[\def\next##1#1}[##1\endgroup]\next] \endgroup % \def\verb{\begingroup\setupverb\doverb} % % % Do the @verbatim magic: define the macro \doverbatim so that % the (first) argument ends when '@end verbatim' is reached, ie: % % \def\doverbatim#1@end verbatim{#1} % % For Texinfo it's a lot easier than for LaTeX, % because texinfo's \verbatim doesn't stop at '\end{verbatim}': % we need not redefine '\', '{' and '}'. % % Inspired by LaTeX's verbatim command set [latex.ltx] % \begingroup \catcode`\ =\active \obeylines % % ignore everything up to the first ^^M, that's the newline at the end % of the @verbatim input line itself. Otherwise we get an extra blank % line in the output. \xdef\doverbatim#1^^M#2@end verbatim{#2\noexpand\end\gobble verbatim}% % We really want {...\end verbatim} in the body of the macro, but % without the active space; thus we have to use \xdef and \gobble. \endgroup % \envdef\verbatim{% \setupverbatim\doverbatim } \let\Everbatim = \afterenvbreak % @verbatiminclude FILE - insert text of file in verbatim environment. % \def\verbatiminclude{\parseargusing\filenamecatcodes\doverbatiminclude} % \def\doverbatiminclude#1{% {% \makevalueexpandable \setupverbatim \input #1 \afterenvbreak }% } % @copying ... @end copying. % Save the text away for @insertcopying later. % % We save the uninterpreted tokens, rather than creating a box. % Saving the text in a box would be much easier, but then all the % typesetting commands (@smallbook, font changes, etc.) have to be done % beforehand -- and a) we want @copying to be done first in the source % file; b) letting users define the frontmatter in as flexible order as % possible is very desirable. 
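%
% Typical use in a manual (illustrative comment only; the sample text is
% made up):
%   @copying
%   This manual documents Frobnicate, version 1.0.
%   Copyright @copyright{} YEAR Some Author.
%   @end copying
% and later, wherever `@insertcopying' appears (commonly on the title
% and copyright pages), the saved tokens are rescanned and typeset.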
% \def\copying{\checkenv{}\begingroup\scanargctxt\docopying} \def\docopying#1@end copying{\endgroup\def\copyingtext{#1}} % \def\insertcopying{% \begingroup \parindent = 0pt % paragraph indentation looks wrong on title page \scanexp\copyingtext \endgroup } \message{defuns,} % @defun etc. \newskip\defbodyindent \defbodyindent=.4in \newskip\defargsindent \defargsindent=50pt \newskip\deflastargmargin \deflastargmargin=18pt % Start the processing of @deffn: \def\startdefun{% \ifnum\lastpenalty<10000 \medbreak \else % If there are two @def commands in a row, we'll have a \nobreak, % which is there to keep the function description together with its % header. But if there's nothing but headers, we need to allow a % break somewhere. Check specifically for penalty 10002, inserted % by \defargscommonending, instead of 10000, since the sectioning % commands also insert a nobreak penalty, and we don't want to allow % a break between a section heading and a defun. % \ifnum\lastpenalty=10002 \penalty2000 \fi % % Similarly, after a section heading, do not allow a break. % But do insert the glue. \medskip % preceded by discardable penalty, so not a breakpoint \fi % \parindent=0in \advance\leftskip by \defbodyindent \exdentamount=\defbodyindent } \def\dodefunx#1{% % First, check whether we are in the right environment: \checkenv#1% % % As above, allow line break if we have multiple x headers in a row. % It's not a great place, though. \ifnum\lastpenalty=10002 \penalty3000 \fi % % And now, it's time to reuse the body of the original defun: \expandafter\gobbledefun#1% } \def\gobbledefun#1\startdefun{} % \printdefunline \deffnheader{text} % \def\printdefunline#1#2{% \begingroup % call \deffnheader: #1#2 \endheader % common ending: \interlinepenalty = 10000 \advance\rightskip by 0pt plus 1fil \endgraf \nobreak\vskip -\parskip \penalty 10002 % signal to \startdefun and \dodefunx % Some of the @defun-type tags do not enable magic parentheses, % rendering the following check redundant. But we don't optimize. \checkparencounts \endgroup } \def\Edefun{\endgraf\medbreak} % \makedefun{deffn} creates \deffn, \deffnx and \Edeffn; % the only thing remainnig is to define \deffnheader. % \def\makedefun#1{% \expandafter\let\csname E#1\endcsname = \Edefun \edef\temp{\noexpand\domakedefun \makecsname{#1}\makecsname{#1x}\makecsname{#1header}}% \temp } % \domakedefun \deffn \deffnx \deffnheader % % Define \deffn and \deffnx, without parameters. % \deffnheader has to be defined explicitly. % \def\domakedefun#1#2#3{% \envdef#1{% \startdefun \parseargusing\activeparens{\printdefunline#3}% }% \def#2{\dodefunx#1}% \def#3% } %%% Untyped functions: % @deffn category name args \makedefun{deffn}{\deffngeneral{}} % @deffn category class name args \makedefun{defop}#1 {\defopon{#1\ \putwordon}} % \defopon {category on}class name args \def\defopon#1#2 {\deffngeneral{\putwordon\ \code{#2}}{#1\ \code{#2}} } % \deffngeneral {subind}category name args % \def\deffngeneral#1#2 #3 #4\endheader{% % Remember that \dosubind{fn}{foo}{} is equivalent to \doind{fn}{foo}. 
\dosubind{fn}{\code{#3}}{#1}% \defname{#2}{}{#3}\magicamp\defunargs{#4\unskip}% } %%% Typed functions: % @deftypefn category type name args \makedefun{deftypefn}{\deftypefngeneral{}} % @deftypeop category class type name args \makedefun{deftypeop}#1 {\deftypeopon{#1\ \putwordon}} % \deftypeopon {category on}class type name args \def\deftypeopon#1#2 {\deftypefngeneral{\putwordon\ \code{#2}}{#1\ \code{#2}} } % \deftypefngeneral {subind}category type name args % \def\deftypefngeneral#1#2 #3 #4 #5\endheader{% \dosubind{fn}{\code{#4}}{#1}% \defname{#2}{#3}{#4}\defunargs{#5\unskip}% } %%% Typed variables: % @deftypevr category type var args \makedefun{deftypevr}{\deftypecvgeneral{}} % @deftypecv category class type var args \makedefun{deftypecv}#1 {\deftypecvof{#1\ \putwordof}} % \deftypecvof {category of}class type var args \def\deftypecvof#1#2 {\deftypecvgeneral{\putwordof\ \code{#2}}{#1\ \code{#2}} } % \deftypecvgeneral {subind}category type var args % \def\deftypecvgeneral#1#2 #3 #4 #5\endheader{% \dosubind{vr}{\code{#4}}{#1}% \defname{#2}{#3}{#4}\defunargs{#5\unskip}% } %%% Untyped variables: % @defvr category var args \makedefun{defvr}#1 {\deftypevrheader{#1} {} } % @defcv category class var args \makedefun{defcv}#1 {\defcvof{#1\ \putwordof}} % \defcvof {category of}class var args \def\defcvof#1#2 {\deftypecvof{#1}#2 {} } %%% Type: % @deftp category name args \makedefun{deftp}#1 #2 #3\endheader{% \doind{tp}{\code{#2}}% \defname{#1}{}{#2}\defunargs{#3\unskip}% } % Remaining @defun-like shortcuts: \makedefun{defun}{\deffnheader{\putwordDeffunc} } \makedefun{defmac}{\deffnheader{\putwordDefmac} } \makedefun{defspec}{\deffnheader{\putwordDefspec} } \makedefun{deftypefun}{\deftypefnheader{\putwordDeffunc} } \makedefun{defvar}{\defvrheader{\putwordDefvar} } \makedefun{defopt}{\defvrheader{\putwordDefopt} } \makedefun{deftypevar}{\deftypevrheader{\putwordDefvar} } \makedefun{defmethod}{\defopon\putwordMethodon} \makedefun{deftypemethod}{\deftypeopon\putwordMethodon} \makedefun{defivar}{\defcvof\putwordInstanceVariableof} \makedefun{deftypeivar}{\deftypecvof\putwordInstanceVariableof} % \defname, which formats the name of the @def (not the args). % #1 is the category, such as "Function". % #2 is the return type, if any. % #3 is the function name. % % We are followed by (but not passed) the arguments, if any. % \def\defname#1#2#3{% % Get the values of \leftskip and \rightskip as they were outside the @def... \advance\leftskip by -\defbodyindent % % How we'll format the type name. Putting it in brackets helps % distinguish it from the body text that may end up on the next line % just below it. \def\temp{#1}% \setbox0=\hbox{\kern\deflastargmargin \ifx\temp\empty\else [\rm\temp]\fi} % % Figure out line sizes for the paragraph shape. % The first line needs space for \box0; but if \rightskip is nonzero, % we need only space for the part of \box0 which exceeds it: \dimen0=\hsize \advance\dimen0 by -\wd0 \advance\dimen0 by \rightskip % The continuations: \dimen2=\hsize \advance\dimen2 by -\defargsindent % (plain.tex says that \dimen1 should be used only as global.) \parshape 2 0in \dimen0 \defargsindent \dimen2 % % Put the type name to the right margin. \noindent \hbox to 0pt{% \hfil\box0 \kern-\hsize % \hsize has to be shortened this way: \kern\leftskip % Intentionally do not respect \rightskip, since we need the space. }% % % Allow all lines to be underfull without complaint: \tolerance=10000 \hbadness=10000 \exdentamount=\defbodyindent {% % defun fonts. 
We use typewriter by default (used to be bold) because: % . we're printing identifiers, they should be in tt in principle. % . in languages with many accents, such as Czech or French, it's % common to leave accents off identifiers. The result looks ok in % tt, but exceedingly strange in rm. % . we don't want -- and --- to be treated as ligatures. % . this still does not fix the ?` and !` ligatures, but so far no % one has made identifiers using them :). \df \tt \def\temp{#2}% return value type \ifx\temp\empty\else \tclose{\temp} \fi #3% output function name }% {\rm\enskip}% hskip 0.5 em of \tenrm % \boldbrax % arguments will be output next, if any. } % Print arguments in slanted roman (not ttsl), inconsistently with using % tt for the name. This is because literal text is sometimes needed in % the argument list (groff manual), and ttsl and tt are not very % distinguishable. Prevent hyphenation at `-' chars. % \def\defunargs#1{% % use sl by default (not ttsl), % tt for the names. \df \sl \hyphenchar\font=0 % % On the other hand, if an argument has two dashes (for instance), we % want a way to get ttsl. Let's try @var for that. \let\var=\ttslanted #1% \sl\hyphenchar\font=45 } % We want ()&[] to print specially on the defun line. % \def\activeparens{% \catcode`\(=\active \catcode`\)=\active \catcode`\[=\active \catcode`\]=\active \catcode`\&=\active } % Make control sequences which act like normal parenthesis chars. \let\lparen = ( \let\rparen = ) % Be sure that we always have a definition for `(', etc. For example, % if the fn name has parens in it, \boldbrax will not be in effect yet, % so TeX would otherwise complain about undefined control sequence. { \activeparens \global\let(=\lparen \global\let)=\rparen \global\let[=\lbrack \global\let]=\rbrack \global\let& = \& \gdef\boldbrax{\let(=\opnr\let)=\clnr\let[=\lbrb\let]=\rbrb} \gdef\magicamp{\let&=\amprm} } \newcount\parencount % If we encounter &foo, then turn on ()-hacking afterwards \newif\ifampseen \def\amprm#1 {\ampseentrue{\bf\ }} \def\parenfont{% \ifampseen % At the first level, print parens in roman, % otherwise use the default font. \ifnum \parencount=1 \rm \fi \else % The \sf parens (in \boldbrax) actually are a little bolder than % the contained text. This is especially needed for [ and ] . \sf \fi } \def\infirstlevel#1{% \ifampseen \ifnum\parencount=1 #1% \fi \fi } \def\bfafterword#1 {#1 \bf} \def\opnr{% \global\advance\parencount by 1 {\parenfont(}% \infirstlevel \bfafterword } \def\clnr{% {\parenfont)}% \infirstlevel \sl \global\advance\parencount by -1 } \newcount\brackcount \def\lbrb{% \global\advance\brackcount by 1 {\bf[}% } \def\rbrb{% {\bf]}% \global\advance\brackcount by -1 } \def\checkparencounts{% \ifnum\parencount=0 \else \badparencount \fi \ifnum\brackcount=0 \else \badbrackcount \fi } \def\badparencount{% \errmessage{Unbalanced parentheses in @def}% \global\parencount=0 } \def\badbrackcount{% \errmessage{Unbalanced square braces in @def}% \global\brackcount=0 } \message{macros,} % @macro. % To do this right we need a feature of e-TeX, \scantokens, % which we arrange to emulate with a temporary file in ordinary TeX. 
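%
% Sketch of the effect (comment only, not from the upstream file): with
% the emulation below,
%   \scantokens{@emph{hi}}
% writes `@emph{hi}' to \jobname.tmp and \input's it back, so the text
% is retokenized under the current catcodes, much as e-TeX's native
% \scantokens would do.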
\ifx\eTeXversion\undefined \newwrite\macscribble \def\scantokens#1{% \toks0={#1}% \immediate\openout\macscribble=\jobname.tmp \immediate\write\macscribble{\the\toks0}% \immediate\closeout\macscribble \input \jobname.tmp } \fi \def\scanmacro#1{% \begingroup \newlinechar`\^^M \let\xeatspaces\eatspaces % Undo catcode changes of \startcontents and \doprintindex % When called from @insertcopying or (short)caption, we need active % backslash to get it printed correctly. Previously, we had % \catcode`\\=\other instead. We'll see whether a problem appears % with macro expansion. --kasal, 19aug04 \catcode`\@=0 \catcode`\\=\active \escapechar=`\@ % ... and \example \spaceisspace % % Append \endinput to make sure that TeX does not see the ending newline. % % I've verified that it is necessary both for e-TeX and for ordinary TeX % --kasal, 29nov03 \scantokens{#1\endinput}% \endgroup } \def\scanexp#1{% \edef\temp{\noexpand\scanmacro{#1}}% \temp } \newcount\paramno % Count of parameters \newtoks\macname % Macro name \newif\ifrecursive % Is it recursive? % List of all defined macros in the form % \definedummyword\macro1\definedummyword\macro2... % Currently is also contains all @aliases; the list can be split % if there is a need. \def\macrolist{} % Add the macro to \macrolist \def\addtomacrolist#1{\expandafter \addtomacrolistxxx \csname#1\endcsname} \def\addtomacrolistxxx#1{% \toks0 = \expandafter{\macrolist\definedummyword#1}% \xdef\macrolist{\the\toks0}% } % Utility routines. % This does \let #1 = #2, with \csnames; that is, % \let \csname#1\endcsname = \csname#2\endcsname % (except of course we have to play expansion games). % \def\cslet#1#2{% \expandafter\let \csname#1\expandafter\endcsname \csname#2\endcsname } % Trim leading and trailing spaces off a string. % Concepts from aro-bend problem 15 (see CTAN). {\catcode`\@=11 \gdef\eatspaces #1{\expandafter\trim@\expandafter{#1 }} \gdef\trim@ #1{\trim@@ @#1 @ #1 @ @@} \gdef\trim@@ #1@ #2@ #3@@{\trim@@@\empty #2 @} \def\unbrace#1{#1} \unbrace{\gdef\trim@@@ #1 } #2@{#1} } % Trim a single trailing ^^M off a string. {\catcode`\^^M=\other \catcode`\Q=3% \gdef\eatcr #1{\eatcra #1Q^^MQ}% \gdef\eatcra#1^^MQ{\eatcrb#1Q}% \gdef\eatcrb#1Q#2Q{#1}% } % Macro bodies are absorbed as an argument in a context where % all characters are catcode 10, 11 or 12, except \ which is active % (as in normal texinfo). It is necessary to change the definition of \. % It's necessary to have hard CRs when the macro is executed. This is % done by making ^^M (\endlinechar) catcode 12 when reading the macro % body, and then making it the \newlinechar in \scanmacro. \def\scanctxt{% \catcode`\"=\other \catcode`\+=\other \catcode`\<=\other \catcode`\>=\other \catcode`\@=\other \catcode`\^=\other \catcode`\_=\other \catcode`\|=\other \catcode`\~=\other } \def\scanargctxt{% \scanctxt \catcode`\\=\other \catcode`\^^M=\other } \def\macrobodyctxt{% \scanctxt \catcode`\{=\other \catcode`\}=\other \catcode`\^^M=\other \usembodybackslash } \def\macroargctxt{% \scanctxt \catcode`\\=\other } % \mbodybackslash is the definition of \ in @macro bodies. % It maps \foo\ => \csname macarg.foo\endcsname => #N % where N is the macro parameter number. % We define \csname macarg.\endcsname to be \realbackslash, so % \\ in macro replacement text gets you a backslash. 
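%
% For illustration (comment only; the macro name `greet' and its
% argument are made up): in
%   @macro greet{name}
%   Hello \name\!
%   @end macro
% the body's `\name\' becomes \csname macarg.name\endcsname, which
% \parsemargdef defines (via \hash) as ##1, so `@greet{World}' expands
% to `Hello World!'.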
{\catcode`@=0 @catcode`@\=@active
 @gdef@usembodybackslash{@let\=@mbodybackslash}
 @gdef@mbodybackslash#1\{@csname macarg.#1@endcsname}
}
\expandafter\def\csname macarg.\endcsname{\realbackslash}
\def\macro{\recursivefalse\parsearg\macroxxx}
\def\rmacro{\recursivetrue\parsearg\macroxxx}
\def\macroxxx#1{%
  \getargs{#1}% now \macname is the macname and \argl the arglist
  \ifx\argl\empty % no arguments
     \paramno=0%
  \else
     \expandafter\parsemargdef \argl;%
  \fi
  \if1\csname ismacro.\the\macname\endcsname
     \message{Warning: redefining \the\macname}%
  \else
     \expandafter\ifx\csname \the\macname\endcsname \relax
     \else \errmessage{Macro name \the\macname\space already defined}\fi
     \global\cslet{macsave.\the\macname}{\the\macname}%
     \global\expandafter\let\csname ismacro.\the\macname\endcsname=1%
     \addtomacrolist{\the\macname}%
  \fi
  \begingroup \macrobodyctxt
  \ifrecursive \expandafter\parsermacbody
  \else \expandafter\parsemacbody
  \fi}
\parseargdef\unmacro{%
  \if1\csname ismacro.#1\endcsname
    \global\cslet{#1}{macsave.#1}%
    \global\expandafter\let \csname ismacro.#1\endcsname=0%
    % Remove the macro name from \macrolist:
    \begingroup
      \expandafter\let\csname#1\endcsname \relax
      \let\definedummyword\unmacrodo
      \xdef\macrolist{\macrolist}%
    \endgroup
  \else
    \errmessage{Macro #1 not defined}%
  \fi
}
% Called by \do from \dounmacro on each macro. The idea is to omit any
% macro definitions that have been changed to \relax.
%
\def\unmacrodo#1{%
  \ifx #1\relax
    % remove this
  \else
    \noexpand\definedummyword \noexpand#1%
  \fi
}
% This makes use of the obscure feature that if the last token of a
% <parameter list> is #, then the preceding argument is delimited by
% an opening brace, and that opening brace is not consumed.
\def\getargs#1{\getargsxxx#1{}}
\def\getargsxxx#1#{\getmacname #1 \relax\getmacargs}
\def\getmacname #1 #2\relax{\macname={#1}}
\def\getmacargs#1{\def\argl{#1}}
% Parse the optional {params} list. Set up \paramno and \paramlist
% so \defmacro knows what to do. Define \macarg.blah for each blah
% in the params list, to be ##N where N is the position in that list.
% That gets used by \mbodybackslash (above).
% We need to get `macro parameter char #' into several definitions.
% The technique used is stolen from LaTeX: let \hash be something
% unexpandable, insert that wherever you need a #, and then redefine
% it to # just before using the token list produced.
%
% The same technique is used to protect \eatspaces till just before
% the macro is used.
\def\parsemargdef#1;{\paramno=0\def\paramlist{}%
        \let\hash\relax\let\xeatspaces\relax\parsemargdefxxx#1,;,}
\def\parsemargdefxxx#1,{%
  \if#1;\let\next=\relax
  \else \let\next=\parsemargdefxxx
    \advance\paramno by 1%
    \expandafter\edef\csname macarg.\eatspaces{#1}\endcsname
        {\xeatspaces{\hash\the\paramno}}%
    \edef\paramlist{\paramlist\hash\the\paramno,}%
  \fi\next}
% These two commands read recursive and nonrecursive macro bodies.
% (They're different since rec and nonrec macros end differently.)
\long\def\parsemacbody#1@end macro%
{\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%
\long\def\parsermacbody#1@end rmacro%
{\xdef\temp{\eatcr{#1}}\endgroup\defmacro}%
% This defines the macro itself. There are six cases: recursive and
% nonrecursive macros of zero, one, and many arguments.
% Much magic with \expandafter here.
% \xdef is used so that macro definitions will survive the file
% they're defined in; @include reads the file inside a group.
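%
% Rough guide to the six cases below (illustrative comment only; the
% macro names are made up):
%   @macro noargs           -- zero args: \noargs just rescans the body.
%   @macro one{arg}         -- one arg: \one accepts either {...} or the
%                              rest of the line, via \braceorline.
%   @macro pair{a, b}       -- many args: \pair collects the braced text
%                              and splits it at commas (the xx/xxx
%                              helpers below).
% The nonrecursive variants additionally restore the pre-@macro meaning
% of the name inside the expansion (\norecurse), so a macro cannot
% accidentally call itself.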
\def\defmacro{% \let\hash=##% convert placeholders to macro parameter chars \ifrecursive \ifcase\paramno % 0 \expandafter\xdef\csname\the\macname\endcsname{% \noexpand\scanmacro{\temp}}% \or % 1 \expandafter\xdef\csname\the\macname\endcsname{% \bgroup\noexpand\macroargctxt \noexpand\braceorline \expandafter\noexpand\csname\the\macname xxx\endcsname}% \expandafter\xdef\csname\the\macname xxx\endcsname##1{% \egroup\noexpand\scanmacro{\temp}}% \else % many \expandafter\xdef\csname\the\macname\endcsname{% \bgroup\noexpand\macroargctxt \noexpand\csname\the\macname xx\endcsname}% \expandafter\xdef\csname\the\macname xx\endcsname##1{% \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}% \expandafter\expandafter \expandafter\xdef \expandafter\expandafter \csname\the\macname xxx\endcsname \paramlist{\egroup\noexpand\scanmacro{\temp}}% \fi \else \ifcase\paramno % 0 \expandafter\xdef\csname\the\macname\endcsname{% \noexpand\norecurse{\the\macname}% \noexpand\scanmacro{\temp}\egroup}% \or % 1 \expandafter\xdef\csname\the\macname\endcsname{% \bgroup\noexpand\macroargctxt \noexpand\braceorline \expandafter\noexpand\csname\the\macname xxx\endcsname}% \expandafter\xdef\csname\the\macname xxx\endcsname##1{% \egroup \noexpand\norecurse{\the\macname}% \noexpand\scanmacro{\temp}\egroup}% \else % many \expandafter\xdef\csname\the\macname\endcsname{% \bgroup\noexpand\macroargctxt \expandafter\noexpand\csname\the\macname xx\endcsname}% \expandafter\xdef\csname\the\macname xx\endcsname##1{% \expandafter\noexpand\csname\the\macname xxx\endcsname ##1,}% \expandafter\expandafter \expandafter\xdef \expandafter\expandafter \csname\the\macname xxx\endcsname \paramlist{% \egroup \noexpand\norecurse{\the\macname}% \noexpand\scanmacro{\temp}\egroup}% \fi \fi} \def\norecurse#1{\bgroup\cslet{#1}{macsave.#1}} % \braceorline decides whether the next nonwhitespace character is a % {. If so it reads up to the closing }, if not, it reads the whole % line. Whatever was read is then fed to the next control sequence % as an argument (by \parsebrace or \parsearg) \def\braceorline#1{\let\next=#1\futurelet\nchar\braceorlinexxx} \def\braceorlinexxx{% \ifx\nchar\bgroup\else \expandafter\parsearg \fi \next} % @alias. % We need some trickery to remove the optional spaces around the equal % sign. Just make them active and then expand them all to nothing. \def\alias{\parseargusing\obeyspaces\aliasxxx} \def\aliasxxx #1{\aliasyyy#1\relax} \def\aliasyyy #1=#2\relax{% {% \expandafter\let\obeyedspace=\empty \addtomacrolist{#1}% \xdef\next{\global\let\makecsname{#1}=\makecsname{#2}}% }% \next } \message{cross references,} \newwrite\auxfile \newif\ifhavexrefs % True if xref values are known. \newif\ifwarnedxrefs % True if we warned once that they aren't known. % @inforef is relatively simple. \def\inforef #1{\inforefzzz #1,,,,**} \def\inforefzzz #1,#2,#3,#4**{\putwordSee{} \putwordInfo{} \putwordfile{} \file{\ignorespaces #3{}}, node \samp{\ignorespaces#1{}}} % @node's only job in TeX is to define \lastnode, which is used in % cross-references. The @node line might or might not have commas, and % might or might not have spaces before the first comma, like: % @node foo , bar , ... % We don't want such trailing spaces in the node name. 
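%
% For example (illustrative comment; node names made up), each of
%   @node Overview
%   @node Overview, Next, Previous, Up
%   @node Overview , Next , Previous , Up
% leaves \lastnode defined as `Overview', with no trailing space or
% comma: \donode and \dodonode below discard everything from the first
% comma (or space-comma) onward.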
% \parseargdef\node{\checkenv{}\donode #1 ,\finishnodeparse} % % also remove a trailing comma, in case of something like this: % @node Help-Cross, , , Cross-refs \def\donode#1 ,#2\finishnodeparse{\dodonode #1,\finishnodeparse} \def\dodonode#1,#2\finishnodeparse{\gdef\lastnode{#1}} \let\nwnode=\node \let\lastnode=\empty % Write a cross-reference definition for the current node. #1 is the % type (Ynumbered, Yappendix, Ynothing). % \def\donoderef#1{% \ifx\lastnode\empty\else \setref{\lastnode}{#1}% \global\let\lastnode=\empty \fi } % @anchor{NAME} -- define xref target at arbitrary point. % \newcount\savesfregister % \def\savesf{\relax \ifhmode \savesfregister=\spacefactor \fi} \def\restoresf{\relax \ifhmode \spacefactor=\savesfregister \fi} \def\anchor#1{\savesf \setref{#1}{Ynothing}\restoresf \ignorespaces} % \setref{NAME}{SNT} defines a cross-reference point NAME (a node or an % anchor), which consists of three parts: % 1) NAME-title - the current sectioning name taken from \thissection, % or the anchor name. % 2) NAME-snt - section number and type, passed as the SNT arg, or % empty for anchors. % 3) NAME-pg - the page number. % % This is called from \donoderef, \anchor, and \dofloat. In the case of % floats, there is an additional part, which is not written here: % 4) NAME-lof - the text as it should appear in a @listoffloats. % \def\setref#1#2{% \pdfmkdest{#1}% \iflinks {% \atdummies % preserve commands, but don't expand them \edef\writexrdef##1##2{% \write\auxfile{@xrdef{#1-% #1 of \setref, expanded by the \edef ##1}{##2}}% these are parameters of \writexrdef }% \toks0 = \expandafter{\thissection}% \immediate \writexrdef{title}{\the\toks0 }% \immediate \writexrdef{snt}{\csname #2\endcsname}% \Ynumbered etc. \writexrdef{pg}{\folio}% will be written later, during \shipout }% \fi } % @xref, @pxref, and @ref generate cross-references. For \xrefX, #1 is % the node name, #2 the name of the Info cross-reference, #3 the printed % node name, #4 the name of the Info file, #5 the name of the printed % manual. All but the node name can be omitted. % \def\pxref#1{\putwordsee{} \xrefX[#1,,,,,,,]} \def\xref#1{\putwordSee{} \xrefX[#1,,,,,,,]} \def\ref#1{\xrefX[#1,,,,,,,]} \def\xrefX[#1,#2,#3,#4,#5,#6]{\begingroup \unsepspaces \def\printedmanual{\ignorespaces #5}% \def\printedrefname{\ignorespaces #3}% \setbox1=\hbox{\printedmanual\unskip}% \setbox0=\hbox{\printedrefname\unskip}% \ifdim \wd0 = 0pt % No printed node name was explicitly given. \expandafter\ifx\csname SETxref-automatic-section-title\endcsname\relax % Use the node name inside the square brackets. \def\printedrefname{\ignorespaces #1}% \else % Use the actual chapter/section title appear inside % the square brackets. Use the real section title if we have it. \ifdim \wd1 > 0pt % It is in another manual, so we don't have it. \def\printedrefname{\ignorespaces #1}% \else \ifhavexrefs % We know the real title if we have the xref values. \def\printedrefname{\refx{#1-title}{}}% \else % Otherwise just copy the Info node name. \def\printedrefname{\ignorespaces #1}% \fi% \fi \fi \fi % % Make link in pdf output. \ifpdf \leavevmode \getfilename{#4}% {\turnoffactive % See comments at \activebackslashdouble. 
{\activebackslashdouble \xdef\pdfxrefdest{#1}% \backslashparens\pdfxrefdest}% % \ifnum\filenamelength>0 \startlink attr{/Border [0 0 0]}% goto file{\the\filename.pdf} name{\pdfxrefdest}% \else \startlink attr{/Border [0 0 0]}% goto name{\pdfmkpgn{\pdfxrefdest}}% \fi }% \linkcolor \fi % % Float references are printed completely differently: "Figure 1.2" % instead of "[somenode], p.3". We distinguish them by the % LABEL-title being set to a magic string. {% % Have to otherify everything special to allow the \csname to % include an _ in the xref name, etc. \indexnofonts \turnoffactive \expandafter\global\expandafter\let\expandafter\Xthisreftitle \csname XR#1-title\endcsname }% \iffloat\Xthisreftitle % If the user specified the print name (third arg) to the ref, % print it instead of our usual "Figure 1.2". \ifdim\wd0 = 0pt \refx{#1-snt}% \else \printedrefname \fi % % if the user also gave the printed manual name (fifth arg), append % "in MANUALNAME". \ifdim \wd1 > 0pt \space \putwordin{} \cite{\printedmanual}% \fi \else % node/anchor (non-float) references. % % If we use \unhbox0 and \unhbox1 to print the node names, TeX does not % insert empty discretionaries after hyphens, which means that it will % not find a line break at a hyphen in a node names. Since some manuals % are best written with fairly long node names, containing hyphens, this % is a loss. Therefore, we give the text of the node name again, so it % is as if TeX is seeing it for the first time. \ifdim \wd1 > 0pt \putwordsection{} ``\printedrefname'' \putwordin{} \cite{\printedmanual}% \else % _ (for example) has to be the character _ for the purposes of the % control sequence corresponding to the node, but it has to expand % into the usual \leavevmode...\vrule stuff for purposes of % printing. So we \turnoffactive for the \refx-snt, back on for the % printing, back off for the \refx-pg. {\turnoffactive % Only output a following space if the -snt ref is nonempty; for % @unnumbered and @anchor, it won't be. \setbox2 = \hbox{\ignorespaces \refx{#1-snt}{}}% \ifdim \wd2 > 0pt \refx{#1-snt}\space\fi }% % output the `[mynode]' via a macro so it can be overridden. \xrefprintnodename\printedrefname % % But we always want a comma and a space: ,\space % % output the `page 3'. \turnoffactive \putwordpage\tie\refx{#1-pg}{}% \fi \fi \endlink \endgroup} % This macro is called from \xrefX for the `[nodename]' part of xref % output. It's a separate macro only so it can be changed more easily, % since square brackets don't work well in some documents. Particularly % one that Bob is working on :). % \def\xrefprintnodename#1{[#1]} % Things referred to by \setref. % \def\Ynothing{} \def\Yomitfromtoc{} \def\Ynumbered{% \ifnum\secno=0 \putwordChapter@tie \the\chapno \else \ifnum\subsecno=0 \putwordSection@tie \the\chapno.\the\secno \else \ifnum\subsubsecno=0 \putwordSection@tie \the\chapno.\the\secno.\the\subsecno \else \putwordSection@tie \the\chapno.\the\secno.\the\subsecno.\the\subsubsecno \fi\fi\fi } \def\Yappendix{% \ifnum\secno=0 \putwordAppendix@tie @char\the\appendixno{}% \else \ifnum\subsecno=0 \putwordSection@tie @char\the\appendixno.\the\secno \else \ifnum\subsubsecno=0 \putwordSection@tie @char\the\appendixno.\the\secno.\the\subsecno \else \putwordSection@tie @char\the\appendixno.\the\secno.\the\subsecno.\the\subsubsecno \fi\fi\fi } % Define \refx{NAME}{SUFFIX} to reference a cross-reference string named NAME. % If its value is nonempty, SUFFIX is output afterward. 
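%
% Illustrative example (comment only; the node name and page number are
% made up): once the .aux file has supplied @xrdef{Overview-pg}{7}, the
% call \refx{Overview-pg}{} in \xrefX above expands to `7', so the
% reference prints as `page 7'. Before the first successful TeX run it
% instead typesets <undefined> and warns that the values are unknown.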
% \def\refx#1#2{% {% \indexnofonts \otherbackslash \expandafter\global\expandafter\let\expandafter\thisrefX \csname XR#1\endcsname }% \ifx\thisrefX\relax % If not defined, say something at least. \angleleft un\-de\-fined\angleright \iflinks \ifhavexrefs \message{\linenumber Undefined cross reference `#1'.}% \else \ifwarnedxrefs\else \global\warnedxrefstrue \message{Cross reference values unknown; you must run TeX again.}% \fi \fi \fi \else % It's defined, so just use it. \thisrefX \fi #2% Output the suffix in any case. } % This is the macro invoked by entries in the aux file. Usually it's % just a \def (we prepend XR to the control sequence name to avoid % collisions). But if this is a float type, we have more work to do. % \def\xrdef#1#2{% \expandafter\gdef\csname XR#1\endcsname{#2}% remember this xref value. % % Was that xref control sequence that we just defined for a float? \expandafter\iffloat\csname XR#1\endcsname % it was a float, and we have the (safe) float type in \iffloattype. \expandafter\let\expandafter\floatlist \csname floatlist\iffloattype\endcsname % % Is this the first time we've seen this float type? \expandafter\ifx\floatlist\relax \toks0 = {\do}% yes, so just \do \else % had it before, so preserve previous elements in list. \toks0 = \expandafter{\floatlist\do}% \fi % % Remember this xref in the control sequence \floatlistFLOATTYPE, % for later use in \listoffloats. \expandafter\xdef\csname floatlist\iffloattype\endcsname{\the\toks0{#1}}% \fi } % Read the last existing aux file, if any. No error if none exists. % \def\tryauxfile{% \openin 1 \jobname.aux \ifeof 1 \else \readdatafile{aux}% \global\havexrefstrue \fi \closein 1 } \def\setupdatafile{% \catcode`\^^@=\other \catcode`\^^A=\other \catcode`\^^B=\other \catcode`\^^C=\other \catcode`\^^D=\other \catcode`\^^E=\other \catcode`\^^F=\other \catcode`\^^G=\other \catcode`\^^H=\other \catcode`\^^K=\other \catcode`\^^L=\other \catcode`\^^N=\other \catcode`\^^P=\other \catcode`\^^Q=\other \catcode`\^^R=\other \catcode`\^^S=\other \catcode`\^^T=\other \catcode`\^^U=\other \catcode`\^^V=\other \catcode`\^^W=\other \catcode`\^^X=\other \catcode`\^^Z=\other \catcode`\^^[=\other \catcode`\^^\=\other \catcode`\^^]=\other \catcode`\^^^=\other \catcode`\^^_=\other % It was suggested to set the catcode of ^ to 7, which would allow ^^e4 etc. % in xref tags, i.e., node names. But since ^^e4 notation isn't % supported in the main text, it doesn't seem desirable. Furthermore, % that is not enough: for node names that actually contain a ^ % character, we would end up writing a line like this: 'xrdef {'hat % b-title}{'hat b} and \xrdef does a \csname...\endcsname on the first % argument, and \hat is not an expandable control sequence. It could % all be worked out, but why? Either we support ^^ or we don't. % % The other change necessary for this was to define \auxhat: % \def\auxhat{\def^{'hat }}% extra space so ok if followed by letter % and then to call \auxhat in \setq. % \catcode`\^=\other % % Special characters. Should be turned off anyway, but... \catcode`\~=\other \catcode`\[=\other \catcode`\]=\other \catcode`\"=\other \catcode`\_=\other \catcode`\|=\other \catcode`\<=\other \catcode`\>=\other \catcode`\$=\other \catcode`\#=\other \catcode`\&=\other \catcode`\%=\other \catcode`+=\other % avoid \+ for paranoia even though we've turned it off % % This is to support \ in node names and titles, since the \ % characters end up in a \csname. It's easier than % leaving it active and making its active definition an actual \ % character. 
What I don't understand is why it works in the *value* % of the xrdef. Seems like it should be a catcode12 \, and that % should not typeset properly. But it works, so I'm moving on for % now. --karl, 15jan04. \catcode`\\=\other % % Make the characters 128-255 be printing characters. {% \count1=128 \def\loop{% \catcode\count1=\other \advance\count1 by 1 \ifnum \count1<256 \loop \fi }% }% % % @ is our escape character in .aux files, and we need braces. \catcode`\{=1 \catcode`\}=2 \catcode`\@=0 } \def\readdatafile#1{% \begingroup \setupdatafile \input\jobname.#1 \endgroup} \message{insertions,} % including footnotes. \newcount \footnoteno % The trailing space in the following definition for supereject is % vital for proper filling; pages come out unaligned when you do a % pagealignmacro call if that space before the closing brace is % removed. (Generally, numeric constants should always be followed by a % space to prevent strange expansion errors.) \def\supereject{\par\penalty -20000\footnoteno =0 } % @footnotestyle is meaningful for info output only. \let\footnotestyle=\comment {\catcode `\@=11 % % Auto-number footnotes. Otherwise like plain. \gdef\footnote{% \let\indent=\ptexindent \let\noindent=\ptexnoindent \global\advance\footnoteno by \@ne \edef\thisfootno{$^{\the\footnoteno}$}% % % In case the footnote comes at the end of a sentence, preserve the % extra spacing after we do the footnote number. \let\@sf\empty \ifhmode\edef\@sf{\spacefactor\the\spacefactor}\ptexslash\fi % % Remove inadvertent blank space before typesetting the footnote number. \unskip \thisfootno\@sf \dofootnote }% % Don't bother with the trickery in plain.tex to not require the % footnote text as a parameter. Our footnotes don't need to be so general. % % Oh yes, they do; otherwise, @ifset (and anything else that uses % \parseargline) fails inside footnotes because the tokens are fixed when % the footnote is read. --karl, 16nov96. % \gdef\dofootnote{% \insert\footins\bgroup % We want to typeset this text as a normal paragraph, even if the % footnote reference occurs in (for example) a display environment. % So reset some parameters. \hsize=\pagewidth \interlinepenalty\interfootnotelinepenalty \splittopskip\ht\strutbox % top baseline for broken footnotes \splitmaxdepth\dp\strutbox \floatingpenalty\@MM \leftskip\z@skip \rightskip\z@skip \spaceskip\z@skip \xspaceskip\z@skip \parindent\defaultparindent % \smallfonts \rm % % Because we use hanging indentation in footnotes, a @noindent appears % to exdent this text, so make it be a no-op. makeinfo does not use % hanging indentation so @noindent can still be needed within footnote % text after an @example or the like (not that this is good style). \let\noindent = \relax % % Hang the footnote text off the number. Use \everypar in case the % footnote extends for more than one paragraph. \everypar = {\hang}% \textindent{\thisfootno}% % % Don't crash into the line above the footnote text. Since this % expands into a box, it must come within the paragraph, lest it % provide a place where TeX can split the footnote. \footstrut \futurelet\next\fo@t } }%end \catcode `\@=11 % In case a @footnote appears in a vbox, save the footnote text and create % the real \insert just after the vbox finished. Otherwise, the insertion % would be lost. % Similarily, if a @footnote appears inside an alignment, save the footnote % text to a box and make the \insert when a row of the table is finished. % And the same can be done for other insert classes. --kasal, 16nov03. 
% Replace the \insert primitive by a cheating macro. % Deeper inside, just make sure that the saved insertions are not spilled % out prematurely. % \def\startsavinginserts{% \ifx \insert\ptexinsert \let\insert\saveinsert \else \let\checkinserts\relax \fi } % This \insert replacement works for both \insert\footins{foo} and % \insert\footins\bgroup foo\egroup, but it doesn't work for \insert27{foo}. % \def\saveinsert#1{% \edef\next{\noexpand\savetobox \makeSAVEname#1}% \afterassignment\next % swallow the left brace \let\temp = } \def\makeSAVEname#1{\makecsname{SAVE\expandafter\gobble\string#1}} \def\savetobox#1{\global\setbox#1 = \vbox\bgroup \unvbox#1} \def\checksaveins#1{\ifvoid#1\else \placesaveins#1\fi} \def\placesaveins#1{% \ptexinsert \csname\expandafter\gobblesave\string#1\endcsname {\box#1}% } % eat @SAVE -- beware, all of them have catcode \other: { \def\dospecials{\do S\do A\do V\do E} \uncatcodespecials % ;-) \gdef\gobblesave @SAVE{} } % initialization: \def\newsaveins #1{% \edef\next{\noexpand\newsaveinsX \makeSAVEname#1}% \next } \def\newsaveinsX #1{% \csname newbox\endcsname #1% \expandafter\def\expandafter\checkinserts\expandafter{\checkinserts \checksaveins #1}% } % initialize: \let\checkinserts\empty \newsaveins\footins \newsaveins\margin % @image. We use the macros from epsf.tex to support this. % If epsf.tex is not installed and @image is used, we complain. % % Check for and read epsf.tex up front. If we read it only at @image % time, we might be inside a group, and then its definitions would get % undone and the next image would fail. \openin 1 = epsf.tex \ifeof 1 \else % Do not bother showing banner with epsf.tex v2.7k (available in % doc/epsf.tex and on ctan). \def\epsfannounce{\toks0 = }% \input epsf.tex \fi \closein 1 % % We will only complain once about lack of epsf.tex. \newif\ifwarnednoepsf \newhelp\noepsfhelp{epsf.tex must be installed for images to work. It is also included in the Texinfo distribution, or you can get it from ftp://tug.org/tex/epsf.tex.} % \def\image#1{% \ifx\epsfbox\undefined \ifwarnednoepsf \else \errhelp = \noepsfhelp \errmessage{epsf.tex not found, images will be ignored}% \global\warnednoepsftrue \fi \else \imagexxx #1,,,,,\finish \fi } % % Arguments to @image: % #1 is (mandatory) image filename; we tack on .eps extension. % #2 is (optional) width, #3 is (optional) height. % #4 is (ignored optional) html alt text. % #5 is (ignored optional) extension. % #6 is just the usual extra ignored arg for parsing this stuff. \newif\ifimagevmode \def\imagexxx#1,#2,#3,#4,#5,#6\finish{\begingroup \catcode`\^^M = 5 % in case we're inside an example \normalturnoffactive % allow _ et al. in names % If the image is by itself, center it. \ifvmode \imagevmodetrue \nobreak\bigskip % Usually we'll have text after the image which will insert % \parskip glue, so insert it here too to equalize the space % above and below. \nobreak\vskip\parskip \nobreak \line\bgroup\hss \fi % % Output the image. \ifpdf \dopdfimage{#1}{#2}{#3}% \else % \epsfbox itself resets \epsf?size at each figure. \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \epsfxsize=#2\relax \fi \setbox0 = \hbox{\ignorespaces #3}\ifdim\wd0 > 0pt \epsfysize=#3\relax \fi \epsfbox{#1.eps}% \fi % \ifimagevmode \hss \egroup \bigbreak \fi % space after the image \endgroup} % @float FLOATTYPE,LABEL,LOC ... @end float for displayed figures, tables, % etc. We don't actually implement floating yet, we always include the % float "here". But it seemed the best name for the future. 
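%
% Typical input (illustrative comment only; the label and caption text
% are made up):
%   @float Figure,fig:sample
%   @center @image{sample}
%   @caption{A sample figure.}
%   @end float
% With both a float type and a label, the caption is typeset as, e.g.,
% `Figure 1.1: A sample figure.'; the full set of cases is listed before
% \Efloat below.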
% \envparseargdef\float{\eatcommaspace\eatcommaspace\dofloat#1, , ,\finish} % There may be a space before second and/or third parameter; delete it. \def\eatcommaspace#1, {#1,} % #1 is the optional FLOATTYPE, the text label for this float, typically % "Figure", "Table", "Example", etc. Can't contain commas. If omitted, % this float will not be numbered and cannot be referred to. % % #2 is the optional xref label. Also must be present for the float to % be referable. % % #3 is the optional positioning argument; for now, it is ignored. It % will somehow specify the positions allowed to float to (here, top, bottom). % % We keep a separate counter for each FLOATTYPE, which we reset at each % chapter-level command. \let\resetallfloatnos=\empty % \def\dofloat#1,#2,#3,#4\finish{% \let\thiscaption=\empty \let\thisshortcaption=\empty % % don't lose footnotes inside @float. % % BEWARE: when the floats start float, we have to issue warning whenever an % insert appears inside a float which could possibly float. --kasal, 26may04 % \startsavinginserts % % We can't be used inside a paragraph. \par % \vtop\bgroup \def\floattype{#1}% \def\floatlabel{#2}% \def\floatloc{#3}% we do nothing with this yet. % \ifx\floattype\empty \let\safefloattype=\empty \else {% % the floattype might have accents or other special characters, % but we need to use it in a control sequence name. \indexnofonts \turnoffactive \xdef\safefloattype{\floattype}% }% \fi % % If label is given but no type, we handle that as the empty type. \ifx\floatlabel\empty \else % We want each FLOATTYPE to be numbered separately (Figure 1, % Table 1, Figure 2, ...). (And if no label, no number.) % \expandafter\getfloatno\csname\safefloattype floatno\endcsname \global\advance\floatno by 1 % {% % This magic value for \thissection is output by \setref as the % XREFLABEL-title value. \xrefX uses it to distinguish float % labels (which have a completely different output format) from % node and anchor labels. And \xrdef uses it to construct the % lists of floats. % \edef\thissection{\floatmagic=\safefloattype}% \setref{\floatlabel}{Yfloat}% }% \fi % % start with \parskip glue, I guess. \vskip\parskip % % Don't suppress indentation if a float happens to start a section. \restorefirstparagraphindent } % we have these possibilities: % @float Foo,lbl & @caption{Cap}: Foo 1.1: Cap % @float Foo,lbl & no caption: Foo 1.1 % @float Foo & @caption{Cap}: Foo: Cap % @float Foo & no caption: Foo % @float ,lbl & Caption{Cap}: 1.1: Cap % @float ,lbl & no caption: 1.1 % @float & @caption{Cap}: Cap % @float & no caption: % \def\Efloat{% \let\floatident = \empty % % In all cases, if we have a float type, it comes first. \ifx\floattype\empty \else \def\floatident{\floattype}\fi % % If we have an xref label, the number comes next. \ifx\floatlabel\empty \else \ifx\floattype\empty \else % if also had float type, need tie first. \appendtomacro\floatident{\tie}% \fi % the number. \appendtomacro\floatident{\chaplevelprefix\the\floatno}% \fi % % Start the printed caption with what we've constructed in % \floatident, but keep it separate; we need \floatident again. \let\captionline = \floatident % \ifx\thiscaption\empty \else \ifx\floatident\empty \else \appendtomacro\captionline{: }% had ident, so need a colon between \fi % % caption text. \appendtomacro\captionline{\scanexp\thiscaption}% \fi % % If we have anything to print, print it, with space before. % Eventually this needs to become an \insert. \ifx\captionline\empty \else \vskip.5\parskip \captionline % % Space below caption. 
\vskip\parskip \fi % % If have an xref label, write the list of floats info. Do this % after the caption, to avoid chance of it being a breakpoint. \ifx\floatlabel\empty \else % Write the text that goes in the lof to the aux file as % \floatlabel-lof. Besides \floatident, we include the short % caption if specified, else the full caption if specified, else nothing. {% \atdummies % since we read the caption text in the macro world, where ^^M % is turned into a normal character, we have to scan it back, so % we don't write the literal three characters "^^M" into the aux file. \scanexp{% \xdef\noexpand\gtemp{% \ifx\thisshortcaption\empty \thiscaption \else \thisshortcaption \fi }% }% \immediate\write\auxfile{@xrdef{\floatlabel-lof}{\floatident \ifx\gtemp\empty \else : \gtemp \fi}}% }% \fi \egroup % end of \vtop % % place the captured inserts % % BEWARE: when the floats start float, we have to issue warning whenever an % insert appears inside a float which could possibly float. --kasal, 26may04 % \checkinserts } % Append the tokens #2 to the definition of macro #1, not expanding either. % \def\appendtomacro#1#2{% \expandafter\def\expandafter#1\expandafter{#1#2}% } % @caption, @shortcaption % \def\caption{\docaption\thiscaption} \def\shortcaption{\docaption\thisshortcaption} \def\docaption{\checkenv\float \bgroup\scanargctxt\defcaption} \def\defcaption#1#2{\egroup \def#1{#2}} % The parameter is the control sequence identifying the counter we are % going to use. Create it if it doesn't exist and assign it to \floatno. \def\getfloatno#1{% \ifx#1\relax % Haven't seen this figure type before. \csname newcount\endcsname #1% % % Remember to reset this floatno at the next chap. \expandafter\gdef\expandafter\resetallfloatnos \expandafter{\resetallfloatnos #1=0 }% \fi \let\floatno#1% } % \setref calls this to get the XREFLABEL-snt value. We want an @xref % to the FLOATLABEL to expand to "Figure 3.1". We call \setref when we % first read the @float command. % \def\Yfloat{\floattype@tie \chaplevelprefix\the\floatno}% % Magic string used for the XREFLABEL-title value, so \xrefX can % distinguish floats from other xref types. \def\floatmagic{!!float!!} % #1 is the control sequence we are passed; we expand into a conditional % which is true if #1 represents a float ref. That is, the magic % \thissection value which we \setref above. % \def\iffloat#1{\expandafter\doiffloat#1==\finish} % % #1 is (maybe) the \floatmagic string. If so, #2 will be the % (safe) float type for this float. We set \iffloattype to #2. % \def\doiffloat#1=#2=#3\finish{% \def\temp{#1}% \def\iffloattype{#2}% \ifx\temp\floatmagic } % @listoffloats FLOATTYPE - print a list of floats like a table of contents. % \parseargdef\listoffloats{% \def\floattype{#1}% floattype {% % the floattype might have accents or other special characters, % but we need to use it in a control sequence name. \indexnofonts \turnoffactive \xdef\safefloattype{\floattype}% }% % % \xrdef saves the floats as a \do-list in \floatlistSAFEFLOATTYPE. \expandafter\ifx\csname floatlist\safefloattype\endcsname \relax \ifhavexrefs % if the user said @listoffloats foo but never @float foo. \message{\linenumber No `\safefloattype' floats to list.}% \fi \else \begingroup \leftskip=\tocindent % indent these entries like a toc \let\do=\listoffloatsdo \csname floatlist\safefloattype\endcsname \endgroup \fi } % This is called on each entry in a list of floats. We're passed the % xref label, in the form LABEL-title, which is how we save it in the % aux file. 
We strip off the -title and look up \XRLABEL-lof, which % has the text we're supposed to typeset here. % % Figures without xref labels will not be included in the list (since % they won't appear in the aux file). % \def\listoffloatsdo#1{\listoffloatsdoentry#1\finish} \def\listoffloatsdoentry#1-title\finish{{% % Can't fully expand XR#1-lof because it can contain anything. Just % pass the control sequence. On the other hand, XR#1-pg is just the % page number, and we want to fully expand that so we can get a link % in pdf output. \toksA = \expandafter{\csname XR#1-lof\endcsname}% % % use the same \entry macro we use to generate the TOC and index. \edef\writeentry{\noexpand\entry{\the\toksA}{\csname XR#1-pg\endcsname}}% \writeentry }} \message{localization,} % and i18n. % @documentlanguage is usually given very early, just after % @setfilename. If done too late, it may not override everything % properly. Single argument is the language abbreviation. % It would be nice if we could set up a hyphenation file here. % \parseargdef\documentlanguage{% \tex % read txi-??.tex file in plain TeX. % Read the file if it exists. \openin 1 txi-#1.tex \ifeof 1 \errhelp = \nolanghelp \errmessage{Cannot read language file txi-#1.tex}% \else \input txi-#1.tex \fi \closein 1 \endgroup } \newhelp\nolanghelp{The given language definition file cannot be found or is empty. Maybe you need to install it? In the current directory should work if nowhere else does.} % @documentencoding should change something in TeX eventually, most % likely, but for now just recognize it. \let\documentencoding = \comment % Page size parameters. % \newdimen\defaultparindent \defaultparindent = 15pt \chapheadingskip = 15pt plus 4pt minus 2pt \secheadingskip = 12pt plus 3pt minus 2pt \subsecheadingskip = 9pt plus 2pt minus 2pt % Prevent underfull vbox error messages. \vbadness = 10000 % Don't be so finicky about underfull hboxes, either. \hbadness = 2000 % Following George Bush, just get rid of widows and orphans. \widowpenalty=10000 \clubpenalty=10000 % Use TeX 3.0's \emergencystretch to help line breaking, but if we're % using an old version of TeX, don't do anything. We want the amount of % stretch added to depend on the line length, hence the dependence on % \hsize. We call this whenever the paper size is set. % \def\setemergencystretch{% \ifx\emergencystretch\thisisundefined % Allow us to assign to \emergencystretch anyway. \def\emergencystretch{\dimen0}% \else \emergencystretch = .15\hsize \fi } % Parameters in order: 1) textheight; 2) textwidth; % 3) voffset; 4) hoffset; 5) binding offset; 6) topskip; % 7) physical page height; 8) physical page width. % % We also call \setleading{\textleading}, so the caller should define % \textleading. The caller should also set \parskip. % \def\internalpagesizes#1#2#3#4#5#6#7#8{% \voffset = #3\relax \topskip = #6\relax \splittopskip = \topskip % \vsize = #1\relax \advance\vsize by \topskip \outervsize = \vsize \advance\outervsize by 2\topandbottommargin \pageheight = \vsize % \hsize = #2\relax \outerhsize = \hsize \advance\outerhsize by 0.5in \pagewidth = \hsize % \normaloffset = #4\relax \bindingoffset = #5\relax % \ifpdf \pdfpageheight #7\relax \pdfpagewidth #8\relax \fi % \setleading{\textleading} % \parindent = \defaultparindent \setemergencystretch } % @letterpaper (the default). \def\letterpaper{{\globaldefs = 1 \parskip = 3pt plus 2pt minus 1pt \textleading = 13.2pt % % If page is nothing but text, make it come out even. 
\internalpagesizes{46\baselineskip}{6in}% {\voffset}{.25in}% {\bindingoffset}{36pt}% {11in}{8.5in}% }} % Use @smallbook to reset parameters for 7x9.25 trim size. \def\smallbook{{\globaldefs = 1 \parskip = 2pt plus 1pt \textleading = 12pt % \internalpagesizes{7.5in}{5in}% {\voffset}{.25in}% {\bindingoffset}{16pt}% {9.25in}{7in}% % \lispnarrowing = 0.3in \tolerance = 700 \hfuzz = 1pt \contentsrightmargin = 0pt \defbodyindent = .5cm }} % Use @smallerbook to reset parameters for 6x9 trim size. % (Just testing, parameters still in flux.) \def\smallerbook{{\globaldefs = 1 \parskip = 1.5pt plus 1pt \textleading = 12pt % \internalpagesizes{7.4in}{4.8in}% {-.2in}{-.4in}% {0pt}{14pt}% {9in}{6in}% % \lispnarrowing = 0.25in \tolerance = 700 \hfuzz = 1pt \contentsrightmargin = 0pt \defbodyindent = .4cm }} % Use @afourpaper to print on European A4 paper. \def\afourpaper{{\globaldefs = 1 \parskip = 3pt plus 2pt minus 1pt \textleading = 13.2pt % % Double-side printing via postscript on Laserjet 4050 % prints double-sided nicely when \bindingoffset=10mm and \hoffset=-6mm. % To change the settings for a different printer or situation, adjust % \normaloffset until the front-side and back-side texts align. Then % do the same for \bindingoffset. You can set these for testing in % your texinfo source file like this: % @tex % \global\normaloffset = -6mm % \global\bindingoffset = 10mm % @end tex \internalpagesizes{51\baselineskip}{160mm} {\voffset}{\hoffset}% {\bindingoffset}{44pt}% {297mm}{210mm}% % \tolerance = 700 \hfuzz = 1pt \contentsrightmargin = 0pt \defbodyindent = 5mm }} % Use @afivepaper to print on European A5 paper. % From romildo@urano.iceb.ufop.br, 2 July 2000. % He also recommends making @example and @lisp be small. \def\afivepaper{{\globaldefs = 1 \parskip = 2pt plus 1pt minus 0.1pt \textleading = 12.5pt % \internalpagesizes{160mm}{120mm}% {\voffset}{\hoffset}% {\bindingoffset}{8pt}% {210mm}{148mm}% % \lispnarrowing = 0.2in \tolerance = 800 \hfuzz = 1.2pt \contentsrightmargin = 0pt \defbodyindent = 2mm \tableindent = 12mm }} % A specific text layout, 24x15cm overall, intended for A4 paper. \def\afourlatex{{\globaldefs = 1 \afourpaper \internalpagesizes{237mm}{150mm}% {\voffset}{4.6mm}% {\bindingoffset}{7mm}% {297mm}{210mm}% % % Must explicitly reset to 0 because we call \afourpaper. \globaldefs = 0 }} % Use @afourwide to print on A4 paper in landscape format. \def\afourwide{{\globaldefs = 1 \afourpaper \internalpagesizes{241mm}{165mm}% {\voffset}{-2.95mm}% {\bindingoffset}{7mm}% {297mm}{210mm}% \globaldefs = 0 }} % @pagesizes TEXTHEIGHT[,TEXTWIDTH] % Perhaps we should allow setting the margins, \topskip, \parskip, % and/or leading, also. Or perhaps we should compute them somehow. % \parseargdef\pagesizes{\pagesizesyyy #1,,\finish} \def\pagesizesyyy#1,#2,#3\finish{{% \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \hsize=#2\relax \fi \globaldefs = 1 % \parskip = 3pt plus 2pt minus 1pt \setleading{\textleading}% % \dimen0 = #1 \advance\dimen0 by \voffset % \dimen2 = \hsize \advance\dimen2 by \normaloffset % \internalpagesizes{#1}{\hsize}% {\voffset}{\normaloffset}% {\bindingoffset}{44pt}% {\dimen0}{\dimen2}% }} % Set default to letter. % \letterpaper \message{and turning on texinfo input format.} % Define macros to output various characters with catcode for normal text. 
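% A hedged sketch of how the language and page-size commands above are
% usually given near the top of a Texinfo source file.  The file name,
% language, and dimensions are hypothetical; @documentlanguage de needs a
% txi-de.tex file installed or present in the current directory, and
% @documentencoding is currently only recognized, not acted on.
%
%   \input texinfo
%   @setfilename sample.info
%   @documentlanguage de
%   @documentencoding ISO-8859-1
%   @smallbook
%   @c or give the text height (and optionally width) directly:
%   @c @pagesizes 9.5in,6.5in
%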
\catcode`\"=\other \catcode`\~=\other \catcode`\^=\other \catcode`\_=\other \catcode`\|=\other \catcode`\<=\other \catcode`\>=\other \catcode`\+=\other \catcode`\$=\other \def\normaldoublequote{"} \def\normaltilde{~} \def\normalcaret{^} \def\normalunderscore{_} \def\normalverticalbar{|} \def\normalless{<} \def\normalgreater{>} \def\normalplus{+} \def\normaldollar{$}%$ font-lock fix % This macro is used to make a character print one way in \tt % (where it can probably be output as-is), and another way in other fonts, % where something hairier probably needs to be done. % % #1 is what to print if we are indeed using \tt; #2 is what to print % otherwise. Since all the Computer Modern typewriter fonts have zero % interword stretch (and shrink), and it is reasonable to expect all % typewriter fonts to have this, we can check that font parameter. % \def\ifusingtt#1#2{\ifdim \fontdimen3\font=0pt #1\else #2\fi} % Same as above, but check for italic font. Actually this also catches % non-italic slanted fonts since it is impossible to distinguish them from % italic fonts. But since this is only used by $ and it uses \sl anyway % this is not a problem. \def\ifusingit#1#2{\ifdim \fontdimen1\font>0pt #1\else #2\fi} % Turn off all special characters except @ % (and those which the user can use as if they were ordinary). % Most of these we simply print from the \tt font, but for some, we can % use math or other variants that look better in normal text. \catcode`\"=\active \def\activedoublequote{{\tt\char34}} \let"=\activedoublequote \catcode`\~=\active \def~{{\tt\char126}} \chardef\hat=`\^ \catcode`\^=\active \def^{{\tt \hat}} \catcode`\_=\active \def_{\ifusingtt\normalunderscore\_} \let\realunder=_ % Subroutine for the previous macro. \def\_{\leavevmode \kern.07em \vbox{\hrule width.3em height.1ex}\kern .07em } \catcode`\|=\active \def|{{\tt\char124}} \chardef \less=`\< \catcode`\<=\active \def<{{\tt \less}} \chardef \gtr=`\> \catcode`\>=\active \def>{{\tt \gtr}} \catcode`\+=\active \def+{{\tt \char 43}} \catcode`\$=\active \def${\ifusingit{{\sl\$}}\normaldollar}%$ font-lock fix % If a .fmt file is being used, characters that might appear in a file % name cannot be active until we have parsed the command line. % So turn them off again, and have \everyjob (or @setfilename) turn them on. % \otherifyactive is called near the end of this file. \def\otherifyactive{\catcode`+=\other \catcode`\_=\other} \catcode`\@=0 % \backslashcurfont outputs one backslash character in current font, % as in \char`\\. \global\chardef\backslashcurfont=`\\ \global\let\rawbackslashxx=\backslashcurfont % let existing .??s files work % \rawbackslash defines an active \ to do \backslashcurfont. % \otherbackslash defines an active \ to be a literal `\' character with % catcode other. {\catcode`\\=\active @gdef@rawbackslash{@let\=@backslashcurfont} @gdef@otherbackslash{@let\=@realbackslash} } % \realbackslash is an actual character `\' with catcode other, and % \doublebackslash is two of them (for the pdf outlines). {\catcode`\\=\other @gdef@realbackslash{\} @gdef@doublebackslash{\\}} % \normalbackslash outputs one backslash in fixed width font. \def\normalbackslash{{\tt\backslashcurfont}} \catcode`\\=\active % Used sometimes to turn off (effectively) the active characters % even after parsing them. 
@def@turnoffactive{% @let"=@normaldoublequote @let\=@realbackslash @let~=@normaltilde @let^=@normalcaret @let_=@normalunderscore @let|=@normalverticalbar @let<=@normalless @let>=@normalgreater @let+=@normalplus @let$=@normaldollar %$ font-lock fix @unsepspaces } % Same as @turnoffactive except outputs \ as {\tt\char`\\} instead of % the literal character `\'. (Thus, \ is not expandable when this is in % effect.) % @def@normalturnoffactive{@turnoffactive @let\=@normalbackslash} % Make _ and + \other characters, temporarily. % This is canceled by @fixbackslash. @otherifyactive % If a .fmt file is being used, we don't want the `\input texinfo' to show up. % That is what \eatinput is for; after that, the `\' should revert to printing % a backslash. % @gdef@eatinput input texinfo{@fixbackslash} @global@let\ = @eatinput % On the other hand, perhaps the file did not have a `\input texinfo'. Then % the first `\{ in the file would cause an error. This macro tries to fix % that, assuming it is called before the first `\' could plausibly occur. % Also turn back on active characters that might appear in the input % file name, in case not using a pre-dumped format. % @gdef@fixbackslash{% @ifx\@eatinput @let\ = @normalbackslash @fi @catcode`+=@active @catcode`@_=@active } % Say @foo, not \foo, in error messages. @escapechar = `@@ % These look ok in all fonts, so just make them not special. @catcode`@& = @other @catcode`@# = @other @catcode`@% = @other @c Local variables: @c eval: (add-hook 'write-file-hooks 'time-stamp) @c page-delimiter: "^\\\\message" @c time-stamp-start: "def\\\\texinfoversion{" @c time-stamp-format: "%:y-%02m-%02d.%02H" @c time-stamp-end: "}" @c End: @c vim:sw=2: @ignore arch-tag: e1b36e32-c96e-4135-a41a-0b2efa2ea115 @end ignore make-doc-non-dfsg-3.81.orig/doc/version.texi0000644000175000017500000000013410416557457021210 0ustar srivastasrivasta@set UPDATED 1 April 2006 @set UPDATED-MONTH April 2006 @set EDITION 3.81 @set VERSION 3.81 make-doc-non-dfsg-3.81.orig/install-sh0000755000175000017500000002202110416557457020066 0ustar srivastasrivasta#!/bin/sh # install - install a program, script, or datafile scriptversion=2005-05-14.22 # This originates from X11R5 (mit/util/scripts/install.sh), which was # later released in X11R6 (xc/config/util/install.sh) with the # following copyright and license. # # Copyright (C) 1994 X Consortium # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN # AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC- # TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
# # Except as contained in this notice, the name of the X Consortium shall not # be used in advertising or otherwise to promote the sale, use or other deal- # ings in this Software without prior written authorization from the X Consor- # tium. # # # FSF changes to this file are in the public domain. # # Calling this script install-sh is preferred over install.sh, to prevent # `make' implicit rules from creating a file called install from it # when there is no Makefile. # # This script is compatible with the BSD install script, but was written # from scratch. It can only install one file at a time, a restriction # shared with many OS's install programs. # set DOITPROG to echo to test this script # Don't use :- since 4.3BSD and earlier shells don't like it. doit="${DOITPROG-}" # put in absolute paths if you don't have them in your path; or use env. vars. mvprog="${MVPROG-mv}" cpprog="${CPPROG-cp}" chmodprog="${CHMODPROG-chmod}" chownprog="${CHOWNPROG-chown}" chgrpprog="${CHGRPPROG-chgrp}" stripprog="${STRIPPROG-strip}" rmprog="${RMPROG-rm}" mkdirprog="${MKDIRPROG-mkdir}" chmodcmd="$chmodprog 0755" chowncmd= chgrpcmd= stripcmd= rmcmd="$rmprog -f" mvcmd="$mvprog" src= dst= dir_arg= dstarg= no_target_directory= usage="Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE or: $0 [OPTION]... SRCFILES... DIRECTORY or: $0 [OPTION]... -t DIRECTORY SRCFILES... or: $0 [OPTION]... -d DIRECTORIES... In the 1st form, copy SRCFILE to DSTFILE. In the 2nd and 3rd, copy all SRCFILES to DIRECTORY. In the 4th, create DIRECTORIES. Options: -c (ignored) -d create directories instead of installing files. -g GROUP $chgrpprog installed files to GROUP. -m MODE $chmodprog installed files to MODE. -o USER $chownprog installed files to USER. -s $stripprog installed files. -t DIRECTORY install into DIRECTORY. -T report an error if DSTFILE is a directory. --help display this help and exit. --version display version info and exit. Environment variables override the default commands: CHGRPPROG CHMODPROG CHOWNPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG " while test -n "$1"; do case $1 in -c) shift continue;; -d) dir_arg=true shift continue;; -g) chgrpcmd="$chgrpprog $2" shift shift continue;; --help) echo "$usage"; exit $?;; -m) chmodcmd="$chmodprog $2" shift shift continue;; -o) chowncmd="$chownprog $2" shift shift continue;; -s) stripcmd=$stripprog shift continue;; -t) dstarg=$2 shift shift continue;; -T) no_target_directory=true shift continue;; --version) echo "$0 $scriptversion"; exit $?;; *) # When -d is used, all remaining arguments are directories to create. # When -t is used, the destination is already specified. test -n "$dir_arg$dstarg" && break # Otherwise, the last argument is the destination. Remove it from $@. for arg do if test -n "$dstarg"; then # $@ is not empty: it contains at least $arg. set fnord "$@" "$dstarg" shift # fnord fi shift # arg dstarg=$arg done break;; esac done if test -z "$1"; then if test -z "$dir_arg"; then echo "$0: no input file specified." >&2 exit 1 fi # It's OK to call `install-sh -d' without argument. # This can happen when creating conditional directories. exit 0 fi for src do # Protect names starting with `-'. case $src in -*) src=./$src ;; esac if test -n "$dir_arg"; then dst=$src src= if test -d "$dst"; then mkdircmd=: chmodcmd= else mkdircmd=$mkdirprog fi else # Waiting for this to be detected by the "$cpprog $src $dsttmp" command # might cause directories to be created, which would be especially bad # if $src (and thus $dsttmp) contains '*'. if test ! -f "$src" && test ! 
-d "$src"; then echo "$0: $src does not exist." >&2 exit 1 fi if test -z "$dstarg"; then echo "$0: no destination specified." >&2 exit 1 fi dst=$dstarg # Protect names starting with `-'. case $dst in -*) dst=./$dst ;; esac # If destination is a directory, append the input filename; won't work # if double slashes aren't ignored. if test -d "$dst"; then if test -n "$no_target_directory"; then echo "$0: $dstarg: Is a directory" >&2 exit 1 fi dst=$dst/`basename "$src"` fi fi # This sed command emulates the dirname command. dstdir=`echo "$dst" | sed -e 's,/*$,,;s,[^/]*$,,;s,/*$,,;s,^$,.,'` # Make sure that the destination directory exists. # Skip lots of stat calls in the usual case. if test ! -d "$dstdir"; then defaultIFS=' ' IFS="${IFS-$defaultIFS}" oIFS=$IFS # Some sh's can't handle IFS=/ for some reason. IFS='%' set x `echo "$dstdir" | sed -e 's@/@%@g' -e 's@^%@/@'` shift IFS=$oIFS pathcomp= while test $# -ne 0 ; do pathcomp=$pathcomp$1 shift if test ! -d "$pathcomp"; then $mkdirprog "$pathcomp" # mkdir can fail with a `File exist' error in case several # install-sh are creating the directory concurrently. This # is OK. test -d "$pathcomp" || exit fi pathcomp=$pathcomp/ done fi if test -n "$dir_arg"; then $doit $mkdircmd "$dst" \ && { test -z "$chowncmd" || $doit $chowncmd "$dst"; } \ && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } \ && { test -z "$stripcmd" || $doit $stripcmd "$dst"; } \ && { test -z "$chmodcmd" || $doit $chmodcmd "$dst"; } else dstfile=`basename "$dst"` # Make a couple of temp file names in the proper directory. dsttmp=$dstdir/_inst.$$_ rmtmp=$dstdir/_rm.$$_ # Trap to clean up those temp files at exit. trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0 trap '(exit $?); exit' 1 2 13 15 # Copy the file name to the temp name. $doit $cpprog "$src" "$dsttmp" && # and set any options; do chmod last to preserve setuid bits. # # If any of these fail, we abort the whole thing. If we want to # ignore errors from any of these, just make sure not to ignore # errors from the above "$doit $cpprog $src $dsttmp" command. # { test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } \ && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } \ && { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } \ && { test -z "$chmodcmd" || $doit $chmodcmd "$dsttmp"; } && # Now rename the file to the real destination. { $doit $mvcmd -f "$dsttmp" "$dstdir/$dstfile" 2>/dev/null \ || { # The rename failed, perhaps because mv can't rename something else # to itself, or perhaps because mv is so ancient that it does not # support -f. # Now remove or move aside any old file at destination location. # We try this two ways since rm can't unlink itself on some # systems and the destination file might be busy for other # reasons. In this case, the final cleanup might fail but the new # file should still install successfully. { if test -f "$dstdir/$dstfile"; then $doit $rmcmd -f "$dstdir/$dstfile" 2>/dev/null \ || $doit $mvcmd -f "$dstdir/$dstfile" "$rmtmp" 2>/dev/null \ || { echo "$0: cannot unlink or rename $dstdir/$dstfile" >&2 (exit 1); exit 1 } else : fi } && # Now rename the file to the real destination. $doit $mvcmd "$dsttmp" "$dstdir/$dstfile" } } fi || { (exit 1); exit 1; } done # The final little trick to "correctly" pass the exit status to the exit trap. 
{ (exit 0); exit 0 } # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-end: "$" # End: make-doc-non-dfsg-3.81.orig/missing0000755000175000017500000002540610416557457017473 0ustar srivastasrivasta#! /bin/sh # Common stub for a few missing GNU programs while installing. scriptversion=2005-06-08.21 # Copyright (C) 1996, 1997, 1999, 2000, 2002, 2003, 2004, 2005 # Free Software Foundation, Inc. # Originally by Fran,cois Pinard , 1996. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA # 02110-1301, USA. # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. if test $# -eq 0; then echo 1>&2 "Try \`$0 --help' for more information" exit 1 fi run=: # In the cases where this matters, `missing' is being run in the # srcdir already. if test -f configure.ac; then configure_ac=configure.ac else configure_ac=configure.in fi msg="missing on your system" case "$1" in --run) # Try to run requested program, and just exit if it succeeds. run= shift "$@" && exit 0 # Exit code 63 means version mismatch. This often happens # when the user try to use an ancient version of a tool on # a file that requires a minimum version. In this case we # we should proceed has if the program had been absent, or # if --run hadn't been passed. if test $? = 63; then run=: msg="probably too old" fi ;; -h|--h|--he|--hel|--help) echo "\ $0 [OPTION]... PROGRAM [ARGUMENT]... Handle \`PROGRAM [ARGUMENT]...' for when PROGRAM is missing, or return an error status if there is no known handling for PROGRAM. Options: -h, --help display this help and exit -v, --version output version information and exit --run try to run the given command, and emulate it if it fails Supported PROGRAM values: aclocal touch file \`aclocal.m4' autoconf touch file \`configure' autoheader touch file \`config.h.in' automake touch all \`Makefile.in' files bison create \`y.tab.[ch]', if possible, from existing .[ch] flex create \`lex.yy.c', if possible, from existing .c help2man touch the output file lex create \`lex.yy.c', if possible, from existing .c makeinfo touch the output file tar try tar, gnutar, gtar, then tar without non-portable flags yacc create \`y.tab.[ch]', if possible, from existing .[ch] Send bug reports to ." exit $? ;; -v|--v|--ve|--ver|--vers|--versi|--versio|--version) echo "missing $scriptversion (GNU Automake)" exit $? ;; -*) echo 1>&2 "$0: Unknown \`$1' option" echo 1>&2 "Try \`$0 --help' for more information" exit 1 ;; esac # Now exit if we have it, but it failed. Also exit now if we # don't have it and --version was passed (most likely to detect # the program). 
case "$1" in lex|yacc) # Not GNU programs, they don't have --version. ;; tar) if test -n "$run"; then echo 1>&2 "ERROR: \`tar' requires --run" exit 1 elif test "x$2" = "x--version" || test "x$2" = "x--help"; then exit 1 fi ;; *) if test -z "$run" && ($1 --version) > /dev/null 2>&1; then # We have it, but it failed. exit 1 elif test "x$2" = "x--version" || test "x$2" = "x--help"; then # Could not run --version or --help. This is probably someone # running `$TOOL --version' or `$TOOL --help' to check whether # $TOOL exists and not knowing $TOOL uses missing. exit 1 fi ;; esac # If it does not exist, or fails to run (possibly an outdated version), # try to emulate it. case "$1" in aclocal*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`acinclude.m4' or \`${configure_ac}'. You might want to install the \`Automake' and \`Perl' packages. Grab them from any GNU archive site." touch aclocal.m4 ;; autoconf) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`${configure_ac}'. You might want to install the \`Autoconf' and \`GNU m4' packages. Grab them from any GNU archive site." touch configure ;; autoheader) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`acconfig.h' or \`${configure_ac}'. You might want to install the \`Autoconf' and \`GNU m4' packages. Grab them from any GNU archive site." files=`sed -n 's/^[ ]*A[CM]_CONFIG_HEADER(\([^)]*\)).*/\1/p' ${configure_ac}` test -z "$files" && files="config.h" touch_files= for f in $files; do case "$f" in *:*) touch_files="$touch_files "`echo "$f" | sed -e 's/^[^:]*://' -e 's/:.*//'`;; *) touch_files="$touch_files $f.in";; esac done touch $touch_files ;; automake*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`Makefile.am', \`acinclude.m4' or \`${configure_ac}'. You might want to install the \`Automake' and \`Perl' packages. Grab them from any GNU archive site." find . -type f -name Makefile.am -print | sed 's/\.am$/.in/' | while read f; do touch "$f"; done ;; autom4te) echo 1>&2 "\ WARNING: \`$1' is needed, but is $msg. You might have modified some files without having the proper tools for further handling them. You can get \`$1' as part of \`Autoconf' from any GNU archive site." file=`echo "$*" | sed -n 's/.*--output[ =]*\([^ ]*\).*/\1/p'` test -z "$file" && file=`echo "$*" | sed -n 's/.*-o[ ]*\([^ ]*\).*/\1/p'` if test -f "$file"; then touch $file else test -z "$file" || exec >$file echo "#! /bin/sh" echo "# Created by GNU Automake missing as a replacement of" echo "# $ $@" echo "exit 0" chmod +x $file exit 1 fi ;; bison|yacc) echo 1>&2 "\ WARNING: \`$1' $msg. You should only need it if you modified a \`.y' file. You may need the \`Bison' package in order for those modifications to take effect. You can get \`Bison' from any GNU archive site." rm -f y.tab.c y.tab.h if [ $# -ne 1 ]; then eval LASTARG="\${$#}" case "$LASTARG" in *.y) SRCFILE=`echo "$LASTARG" | sed 's/y$/c/'` if [ -f "$SRCFILE" ]; then cp "$SRCFILE" y.tab.c fi SRCFILE=`echo "$LASTARG" | sed 's/y$/h/'` if [ -f "$SRCFILE" ]; then cp "$SRCFILE" y.tab.h fi ;; esac fi if [ ! -f y.tab.h ]; then echo >y.tab.h fi if [ ! -f y.tab.c ]; then echo 'main() { return 0; }' >y.tab.c fi ;; lex|flex) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified a \`.l' file. You may need the \`Flex' package in order for those modifications to take effect. You can get \`Flex' from any GNU archive site." 
rm -f lex.yy.c if [ $# -ne 1 ]; then eval LASTARG="\${$#}" case "$LASTARG" in *.l) SRCFILE=`echo "$LASTARG" | sed 's/l$/c/'` if [ -f "$SRCFILE" ]; then cp "$SRCFILE" lex.yy.c fi ;; esac fi if [ ! -f lex.yy.c ]; then echo 'main() { return 0; }' >lex.yy.c fi ;; help2man) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified a dependency of a manual page. You may need the \`Help2man' package in order for those modifications to take effect. You can get \`Help2man' from any GNU archive site." file=`echo "$*" | sed -n 's/.*-o \([^ ]*\).*/\1/p'` if test -z "$file"; then file=`echo "$*" | sed -n 's/.*--output=\([^ ]*\).*/\1/p'` fi if [ -f "$file" ]; then touch $file else test -z "$file" || exec >$file echo ".ab help2man is required to generate this page" exit 1 fi ;; makeinfo) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified a \`.texi' or \`.texinfo' file, or any other file indirectly affecting the aspect of the manual. The spurious call might also be the consequence of using a buggy \`make' (AIX, DU, IRIX). You might want to install the \`Texinfo' package or the \`GNU make' package. Grab either from any GNU archive site." # The file to touch is that specified with -o ... file=`echo "$*" | sed -n 's/.*-o \([^ ]*\).*/\1/p'` if test -z "$file"; then # ... or it is the one specified with @setfilename ... infile=`echo "$*" | sed 's/.* \([^ ]*\) *$/\1/'` file=`sed -n '/^@setfilename/ { s/.* \([^ ]*\) *$/\1/; p; q; }' $infile` # ... or it is derived from the source name (dir/f.texi becomes f.info) test -z "$file" && file=`echo "$infile" | sed 's,.*/,,;s,.[^.]*$,,'`.info fi # If the file does not exist, the user really needs makeinfo; # let's fail without touching anything. test -f $file || exit 1 touch $file ;; tar) shift # We have already tried tar in the generic part. # Look for gnutar/gtar before invocation to avoid ugly error # messages. if (gnutar --version > /dev/null 2>&1); then gnutar "$@" && exit 0 fi if (gtar --version > /dev/null 2>&1); then gtar "$@" && exit 0 fi firstarg="$1" if shift; then case "$firstarg" in *o*) firstarg=`echo "$firstarg" | sed s/o//` tar "$firstarg" "$@" && exit 0 ;; esac case "$firstarg" in *h*) firstarg=`echo "$firstarg" | sed s/h//` tar "$firstarg" "$@" && exit 0 ;; esac fi echo 1>&2 "\ WARNING: I can't seem to be able to run \`tar' with the given arguments. You may want to install GNU tar or Free paxutils, or check the command line arguments." exit 1 ;; *) echo 1>&2 "\ WARNING: \`$1' is needed, and is $msg. You might have modified some files without having the proper tools for further handling them. Check the \`README' file, it often tells you about the needed prerequisites for installing this package. You may also peek at any GNU archive site, in case some other package would contain this missing \`$1' program." exit 1 ;; esac exit 0 # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-end: "$" # End: