pax_global_header00006660000000000000000000000064132656535040014523gustar00rootroot0000000000000052 comment=572ae5d67d7ff665a5a24f31980c43fdd9c7599a curtin-18.1-5-g572ae5d6/000077500000000000000000000000001326565350400144315ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/.gitignore000066400000000000000000000000621326565350400164170ustar00rootroot00000000000000*.pyc __pycache__ .tox .coverage curtin.egg-info/ curtin-18.1-5-g572ae5d6/HACKING.rst000066400000000000000000000100011326565350400162170ustar00rootroot00000000000000***************** Hacking on curtin ***************** This document describes how to contribute changes to curtin. It assumes you have a `Launchpad`_ account, and refers to your launchpad user as ``LP_USER`` throughout. Do these things once ==================== * To contribute, you must sign the Canonical `contributor license agreement`_ If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Scott Moser `_ and `Ryan Harper `_ or ping smoser or rharper in ``#curtin`` channel via Freenode IRC. When prompted for 'Project contact' or 'Canonical Project Manager' enter 'David Britton'. * Configure git with your email and name for commit messages. Your name will appear in commit messages and will also be used in changelogs or release notes. Give yourself credit! Please provide a valid email address:: git config user.name "Your Name" git config user.email "Your Email" * Clone the upstream `repository`_ on Launchpad:: git clone https://git.launchpad.net/curtin cd curtin There is more information on Launchpad as a git hosting site in `Launchpad git documentation`_. * Create a new remote pointing to your personal Launchpad repository. This is equivalent to 'fork' on github. .. code:: sh git remote add LP_USER ssh://LP_USER@git.launchpad.net/~LP_USER/curtin git push LP_USER master .. _repository: https://git.launchpad.net/curtin .. _contributor license agreement: http://www.canonical.com/contributors .. _contributor-agreement-canonical: https://launchpad.net/%7Econtributor-agreement-canonical/+members .. _Launchpad git documentation: https://help.launchpad.net/Code/Git Do these things for each feature or bug ======================================= * Create a new topic branch for your work:: git checkout -b my-topic-branch * Make and commit your changes (note, you can make multiple commits, fixes, more commits.):: git commit * Run unit tests and lint/formatting checks with `tox`_:: tox * Push your changes to your personal Launchpad repository:: git push -u LP_USER my-topic-branch * Use your browser to create a merge request: - Open the branch on Launchpad. - You can see a web view of your repository and navigate to the branch at: ``https://code.launchpad.net/~LP_USER/curtin/`` - It will typically be at: ``https://code.launchpad.net/~LP_USER/curtin/+git/curtin/+ref/BRANCHNAME`` Here is an example link: https://code.launchpad.net/~raharper/curtin/+git/curtin/+ref/feature/zfs-root - Click 'Propose for merging' - Select 'lp:curtin' as the target repository - Type '``master``' as the Target reference path - Click 'Propose Merge' - On the next page, hit 'Set commit message' and type a combined git style commit message. 
The commit message should be one summary line of less than 74 characters followed by a blank line, and then one or more paragraphs describing the change and why it was needed. If you have fixed a bug in your commit, reference it at the end of the message with ``LP: #XXXXXXX``. This is the message that will be used on the commit when it is squashed and merged into trunk. Here is an example: :: Activate the frobnicator. The frobnicator was previously inactive and now runs by default. This may save the world some day. Then, list the bugs you fixed as footers with syntax as shown here. LP: #1 Then, someone in the `curtin-dev` group will review your changes and follow up in the merge request. Feel free to ping and/or join ``#curtin`` on Freenode IRC if you have any questions. .. _tox: https://tox.readthedocs.io/en/latest/ .. _Launchpad: https://launchpad.net .. _curtin-dev: https://launchpad.net/~curtin-dev/+members#active curtin-18.1-5-g572ae5d6/LICENSE000066400000000000000000000012021326565350400154310ustar00rootroot00000000000000Copyright 2013 Canonical Ltd and contributors. SPDX-License-Identifier: AGPL-3.0-only Curtin is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, version 3. Curtin is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with Curtin. If not, see . curtin-18.1-5-g572ae5d6/LICENSE-AGPLv3000066400000000000000000001033301326565350400164300ustar00rootroot00000000000000 GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software. A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public. The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version. An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU Affero General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. 
The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. 
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. 
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. 
Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Remote Network Interaction; Use with the GNU General Public License. 
Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements. You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see . 
curtin-18.1-5-g572ae5d6/Makefile000066400000000000000000000026271326565350400161000ustar00rootroot00000000000000TOP := $(abspath $(dir $(lastword $(MAKEFILE_LIST)))) CWD := $(shell pwd) PYTHON ?= python3 COVERAGE ?= 1 DEFAULT_COVERAGEOPTS = --with-coverage --cover-erase --cover-branches --cover-package=curtin --cover-inclusive ifeq ($(COVERAGE), 1) coverageopts ?= $(DEFAULT_COVERAGEOPTS) endif CURTIN_VMTEST_IMAGE_SYNC ?= False export CURTIN_VMTEST_IMAGE_SYNC noseopts ?= -vv --nologcapture build: bin/curtin: curtin/pack.py tools/write-curtin $(PYTHON) tools/write-curtin bin/curtin check: unittest style-check: pep8 pyflakes pyflakes3 coverage: coverageopts ?= $(DEFAULT_COVERAGEOPTS) coverage: unittest pep8: @$(CWD)/tools/run-pep8 pyflakes: @$(CWD)/tools/run-pyflakes pyflakes3: @$(CWD)/tools/run-pyflakes3 unittest: nosetests $(coverageopts) $(noseopts) tests/unittests nosetests3 $(coverageopts) $(noseopts) tests/unittests docs: check-doc-deps make -C doc html check-doc-deps: @which sphinx-build && $(PYTHON) -c 'import sphinx_rtd_theme' || \ { echo "Missing doc dependencies. Install with:"; \ pkgs="python3-sphinx-rtd-theme python3-sphinx"; \ echo sudo apt-get install -qy $$pkgs ; exit 1; } # By default don't sync images when running all tests. vmtest: nosetests3 $(noseopts) tests/vmtests vmtest-deps: @$(CWD)/tools/vmtest-system-setup sync-images: @$(CWD)/tools/vmtest-sync-images clean: rm -rf doc/_build .PHONY: all clean test pyflakes pyflakes3 pep8 build style-check check-doc-deps curtin-18.1-5-g572ae5d6/README000066400000000000000000000002431326565350400153100ustar00rootroot00000000000000This is 'curtin', the curt installer. It is blunt, brief, snappish, snippety and unceremonious. Its goal is to install an operating system as quick as possible. curtin-18.1-5-g572ae5d6/bin/000077500000000000000000000000001326565350400152015ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/bin/curtin000077500000000000000000000036711326565350400164420ustar00rootroot00000000000000#!/bin/sh # This file is part of curtin. See LICENSE file for copyright and license info. PY3OR2_MAIN="curtin.commands.main" PY3OR2_MCHECK="curtin.deps.check" PY3OR2_PYTHONS=${PY3OR2_PYTHONS:-"python3:python"} PYTHON=${PY3OR2_PYTHON} PY3OR2_DEBUG=${PY3OR2_DEBUG:-0} debug() { [ "${PY3OR2_DEBUG}" != "0" ] || return 0 echo "$@" 1>&2 } fail() { echo "$@" 1>&2; exit 1; } # if $0 is is bin/ and dirname($0)/../module exists, then prepend PYTHONPATH mydir=${0%/*} updir=${mydir%/*} if [ "${mydir#${updir}/}" = "bin" -a -d "$updir/${PY3OR2_MCHECK%%.*}" ]; then updir=$(cd "$mydir/.." && pwd) case "$PYTHONPATH" in *:$updir:*|$updir:*|*:$updir) :;; *) export PYTHONPATH="$updir${PYTHONPATH:+:$PYTHONPATH}" debug "adding '$updir' to PYTHONPATH" ;; esac fi if [ ! -n "$PYTHON" ]; then first_exe="" oifs="$IFS"; IFS=":" best=0 best_exe="" [ "${PY3OR2_DEBUG}" = "0" ] && _v="" || _v="-v" for p in $PY3OR2_PYTHONS; do command -v "$p" >/dev/null 2>&1 || { debug "$p: not in path"; continue; } [ -z "$PY3OR2_MCHECK" ] && PYTHON=$p && break out=$($p -m "$PY3OR2_MCHECK" $_v -- "$@" 2>&1) && PYTHON="$p" && { debug "$p is good [$p -m $PY3OR2_MCHECK $_v -- $*]"; break; } ret=$? 
debug "$p [$ret]: $out" # exit code of 1 is unuseable [ $ret -eq 1 ] && continue [ -n "$first_exe" ] || first_exe="$p" # higher non-zero exit values indicate more plausible usability [ $best -lt $ret ] && best_exe="$p" && best=$ret && debug "current best: $best_exe" done IFS="$oifs" [ -z "$best_exe" -a -n "$first_exe" ] && best_exe="$first_exe" [ -n "$PYTHON" ] || PYTHON="$best_exe" [ -n "$PYTHON" ] || fail "no availble python? [PY3OR2_DEBUG=1 for more info]" fi debug "executing: $PYTHON -m \"$PY3OR2_MAIN\" $*" exec $PYTHON -m "$PY3OR2_MAIN" "$@" # vi: ts=4 expandtab syntax=sh curtin-18.1-5-g572ae5d6/curtin/000077500000000000000000000000001326565350400157355ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/curtin/__init__.py000066400000000000000000000022571326565350400200540ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. # This constant is made available so a caller can read it # it must be kept the same as that used in helpers/common:get_carryover_params KERNEL_CMDLINE_COPY_TO_INSTALL_SEP = "---" # The 'FEATURES' variable is provided so that users of curtin # can determine which features are supported. Each entry should have # a consistent meaning. FEATURES = [ # curtin can apply centos networking via centos_apply_network_config 'CENTOS_APPLY_NETWORK_CONFIG', # install supports the 'network' config version 1 'NETWORK_CONFIG_V1', # reporter supports 'webhook' type 'REPORTING_EVENTS_WEBHOOK', # install supports the 'storage' config version 1 'STORAGE_CONFIG_V1', # install supports the 'storage' config version 1 for DD images 'STORAGE_CONFIG_V1_DD', # subcommand 'system-install' is present 'SUBCOMMAND_SYSTEM_INSTALL', # subcommand 'system-upgrade' is present 'SUBCOMMAND_SYSTEM_UPGRADE', # supports new format of apt configuration 'APT_CONFIG_V1', # has version module 'HAS_VERSION_MODULE', ] __version__ = "18.1" # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/block/000077500000000000000000000000001326565350400170275ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/curtin/block/__init__.py000066400000000000000000001106241326565350400211440ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
from contextlib import contextmanager import errno import itertools import os import stat import sys import tempfile from curtin import util from curtin.block import lvm from curtin.log import LOG from curtin.udev import udevadm_settle def get_dev_name_entry(devname): """ convert device name to path in /dev """ bname = devname.split('/dev/')[-1] return (bname, "/dev/" + bname) def is_valid_device(devname): """ check if device is a valid device """ devent = get_dev_name_entry(devname)[1] return is_block_device(devent) def is_block_device(path): """ check if path is a block device """ try: return stat.S_ISBLK(os.stat(path).st_mode) except OSError as e: if not util.is_file_not_found_exc(e): raise return False def dev_short(devname): """ get short form of device name """ devname = os.path.normpath(devname) if os.path.sep in devname: return os.path.basename(devname) return devname def dev_path(devname): """ convert device name to path in /dev """ if devname.startswith('/dev/'): return devname else: return '/dev/' + devname def path_to_kname(path): """ converts a path in /dev or a path in /sys/block to the device kname, taking special devices and unusual naming schemes into account """ # if path given is a link, get real path # only do this if given a path though, if kname is already specified then # this would cause a failure where the function should still be able to run if os.path.sep in path: path = os.path.realpath(path) # using basename here ensures that the function will work given a path in # /dev, a kname, or a path in /sys/block as an arg dev_kname = os.path.basename(path) # cciss devices need to have 'cciss!' prepended if path.startswith('/dev/cciss'): dev_kname = 'cciss!' + dev_kname return dev_kname def kname_to_path(kname): """ converts a kname to a path in /dev, taking special devices and unusual naming schemes into account """ # if given something that is already a dev path, return it if os.path.exists(kname) and is_valid_device(kname): path = kname return os.path.realpath(path) # adding '/dev' to path is not sufficient to handle cciss devices and # possibly other special devices which have not been encountered yet path = os.path.realpath(os.sep.join(['/dev'] + kname.split('!'))) # make sure path we get is correct if not (os.path.exists(path) and is_valid_device(path)): raise OSError('could not get path to dev from kname: {}'.format(kname)) return path def partition_kname(disk_kname, partition_number): """ Add number to disk_kname prepending a 'p' if needed """ for dev_type in ['nvme', 'mmcblk', 'cciss', 'mpath', 'dm', 'md']: if disk_kname.startswith(dev_type): partition_number = "p%s" % partition_number break return "%s%s" % (disk_kname, partition_number) def sysfs_to_devpath(sysfs_path): """ convert a path in /sys/class/block to a path in /dev """ path = kname_to_path(path_to_kname(sysfs_path)) if not is_block_device(path): raise ValueError('could not find blockdev for sys path: {}' .format(sysfs_path)) return path def sys_block_path(devname, add=None, strict=True): """ get path to device in /sys/class/block """ toks = ['/sys/class/block'] # insert parent dev if devname is partition devname = os.path.normpath(devname) (parent, partnum) = get_blockdev_for_partition(devname, strict=strict) if partnum: toks.append(path_to_kname(parent)) toks.append(path_to_kname(devname)) if add is not None: toks.append(add) path = os.sep.join(toks) if strict and not os.path.exists(path): err = OSError( "devname '{}' did not have existing syspath '{}'".format( devname, path)) err.errno = 
errno.ENOENT raise err return os.path.normpath(path) def get_holders(device): """ Look up any block device holders, return list of knames """ # block.sys_block_path works when given a /sys or /dev path sysfs_path = sys_block_path(device) # get holders holders = os.listdir(os.path.join(sysfs_path, 'holders')) LOG.debug("devname '%s' had holders: %s", device, holders) return holders def get_device_slave_knames(device): """ Find the underlying knames of a given device by walking sysfs recursively. Returns a list of knames """ slave_knames = [] slaves_dir_path = os.path.join(sys_block_path(device), 'slaves') # if we find a 'slaves' dir, recurse and check # the underlying devices if os.path.exists(slaves_dir_path): slaves = os.listdir(slaves_dir_path) if len(slaves) > 0: for slave_kname in slaves: slave_knames.extend(get_device_slave_knames(slave_kname)) else: slave_knames.append(path_to_kname(device)) return slave_knames else: # if a device has no 'slaves' attribute then # we've found the underlying device, return # the kname of the device return [path_to_kname(device)] def _lsblock_pairs_to_dict(lines): """ parse lsblock output and convert to dict """ ret = {} for line in lines.splitlines(): toks = util.shlex_split(line) cur = {} for tok in toks: k, v = tok.split("=", 1) cur[k] = v # use KNAME, as NAME may include spaces and other info, # for example, lvm decices may show 'dm0 lvm1' cur['device_path'] = get_dev_name_entry(cur['KNAME'])[1] ret[cur['KNAME']] = cur return ret def _lsblock(args=None): """ get lsblock data as dict """ # lsblk --help | sed -n '/Available/,/^$/p' | # sed -e 1d -e '$d' -e 's,^[ ]\+,,' -e 's, .*,,' | sort keys = ['ALIGNMENT', 'DISC-ALN', 'DISC-GRAN', 'DISC-MAX', 'DISC-ZERO', 'FSTYPE', 'GROUP', 'KNAME', 'LABEL', 'LOG-SEC', 'MAJ:MIN', 'MIN-IO', 'MODE', 'MODEL', 'MOUNTPOINT', 'NAME', 'OPT-IO', 'OWNER', 'PHY-SEC', 'RM', 'RO', 'ROTA', 'RQ-SIZE', 'SCHED', 'SIZE', 'STATE', 'TYPE', 'UUID'] if args is None: args = [] args = [x.replace('!', '/') for x in args] # in order to avoid a very odd error with '-o' and all output fields above # we just drop one. doesn't really matter which one. keys.remove('SCHED') basecmd = ['lsblk', '--noheadings', '--bytes', '--pairs', '--output=' + ','.join(keys)] (out, _err) = util.subp(basecmd + list(args), capture=True) out = out.replace('!', '/') return _lsblock_pairs_to_dict(out) def get_unused_blockdev_info(): """ return a list of unused block devices. These are devices that do not have anything mounted on them. """ # get a list of top level block devices, then iterate over it to get # devices dependent on those. If the lsblk call for that specific # call has nothing 'MOUNTED", then this is an unused block device bdinfo = _lsblock(['--nodeps']) unused = {} for devname, data in bdinfo.items(): cur = _lsblock([data['device_path']]) mountpoints = [x for x in cur if cur[x].get('MOUNTPOINT')] if len(mountpoints) == 0: unused[devname] = data return unused def get_devices_for_mp(mountpoint): """ return a list of devices (full paths) used by the provided mountpoint """ bdinfo = _lsblock() found = set() for devname, data in bdinfo.items(): if data['MOUNTPOINT'] == mountpoint: found.add(data['device_path']) if found: return list(found) # for some reason, on some systems, lsblk does not list mountpoint # for devices that are mounted. This happens on /dev/vdc1 during a run # using tools/launch. 
mountpoint = [os.path.realpath(dev) for (dev, mp, vfs, opts, freq, passno) in get_proc_mounts() if mp == mountpoint] return mountpoint def get_installable_blockdevs(include_removable=False, min_size=1024**3): """ find blockdevs suitable for installation """ good = [] unused = get_unused_blockdev_info() for devname, data in unused.items(): if not include_removable and data.get('RM') == "1": continue if data.get('RO') != "0" or data.get('TYPE') != "disk": continue if min_size is not None and int(data.get('SIZE', '0')) < min_size: continue good.append(devname) return good def get_blockdev_for_partition(devpath, strict=True): """ find the parent device for a partition. returns a tuple of the parent block device and the partition number if device is not a partition, None will be returned for partition number """ # normalize path rpath = os.path.realpath(devpath) # convert an entry in /dev/ to parent disk and partition number # if devpath is a block device and not a partition, return (devpath, None) base = '/sys/class/block' # input of /dev/vdb, /dev/disk/by-label/foo, /sys/block/foo, # /sys/block/class/foo, or just foo syspath = os.path.join(base, path_to_kname(devpath)) # don't need to try out multiple sysfs paths as path_to_kname handles cciss if strict and not os.path.exists(syspath): raise OSError("%s had no syspath (%s)" % (devpath, syspath)) ptpath = os.path.join(syspath, "partition") if not os.path.exists(ptpath): return (rpath, None) ptnum = util.load_file(ptpath).rstrip() # for a partition, real syspath is something like: # /sys/devices/pci0000:00/0000:00:04.0/virtio1/block/vda/vda1 rsyspath = os.path.realpath(syspath) disksyspath = os.path.dirname(rsyspath) diskmajmin = util.load_file(os.path.join(disksyspath, "dev")).rstrip() diskdevpath = os.path.realpath("/dev/block/%s" % diskmajmin) # diskdevpath has something like 253:0 # and udev has put links in /dev/block/253:0 to the device name in /dev/ return (diskdevpath, ptnum) def get_sysfs_partitions(device): """ get a list of sysfs paths for partitions under a block device accepts input as a device kname, sysfs path, or dev path returns empty list if no partitions available """ sysfs_path = sys_block_path(device) return [sys_block_path(kname) for kname in os.listdir(sysfs_path) if os.path.exists(os.path.join(sysfs_path, kname, 'partition'))] def get_pardevs_on_blockdevs(devs): """ return a dict of partitions with their info that are on provided devs """ if devs is None: devs = [] devs = [get_dev_name_entry(d)[1] for d in devs] found = _lsblock(devs) ret = {} for short in found: if found[short]['device_path'] not in devs: ret[short] = found[short] return ret def stop_all_unused_multipath_devices(): """ Stop all unused multipath devices. """ multipath = util.which('multipath') # Command multipath is not available only when multipath-tools package # is not installed. Nothing needs to be done in this case because system # doesn't create multipath devices without this package installed and we # have nothing to stop. 
if not multipath: return # Command multipath -F flushes all unused multipath device maps cmd = [multipath, '-F'] try: # unless multipath cleared *everything* it will exit with 1 util.subp(cmd, rcs=[0, 1]) except util.ProcessExecutionError as e: LOG.warn("Failed to stop multipath devices: %s", e) def rescan_block_devices(): """ run 'blockdev --rereadpt' for all block devices not currently mounted """ unused = get_unused_blockdev_info() devices = [] for devname, data in unused.items(): if data.get('RM') == "1": continue if data.get('RO') != "0" or data.get('TYPE') != "disk": continue devices.append(data['device_path']) if not devices: LOG.debug("no devices found to rescan") return cmd = ['blockdev', '--rereadpt'] + devices try: util.subp(cmd, capture=True) except util.ProcessExecutionError as e: # FIXME: its less than ideal to swallow this error, but until # we fix LP: #1489521 we kind of need to. LOG.warn("Error rescanning devices, possibly known issue LP: #1489521") # Reformatting the exception output so as to not trigger # vmtest scanning for Unexepected errors in install logfile LOG.warn("cmd: %s\nstdout:%s\nstderr:%s\nexit_code:%s", e.cmd, e.stdout, e.stderr, e.exit_code) udevadm_settle() return def blkid(devs=None, cache=True): """ get data about block devices from blkid and convert to dict """ if devs is None: devs = [] # 14.04 blkid reads undocumented /dev/.blkid.tab # man pages mention /run/blkid.tab and /etc/blkid.tab if not cache: cfiles = ("/run/blkid/blkid.tab", "/dev/.blkid.tab", "/etc/blkid.tab") for cachefile in cfiles: if os.path.exists(cachefile): os.unlink(cachefile) cmd = ['blkid', '-o', 'full'] cmd.extend(devs) # blkid output is : KEY=VALUE # where KEY is TYPE, UUID, PARTUUID, LABEL out, err = util.subp(cmd, capture=True) data = {} for line in out.splitlines(): curdev, curdata = line.split(":", 1) data[curdev] = dict(tok.split('=', 1) for tok in util.shlex_split(curdata)) return data def detect_multipath(target_mountpoint): """ Detect if the operating system has been installed to a multipath device. """ # The obvious way to detect multipath is to use multipath utility which is # provided by the multipath-tools package. Unfortunately, multipath-tools # package is not available in all ephemeral images hence we can't use it. # Another reasonable way to detect multipath is to look for two (or more) # devices with the same World Wide Name (WWN) which can be fetched using # scsi_id utility. This way doesn't work as well because WWNs are not # unique in some cases which leads to false positives which may prevent # system from booting (see LP: #1463046 for details). # Taking into account all the issues mentioned above, curent implementation # detects multipath by looking for a filesystem with the same UUID # as the target device. It relies on the fact that all alternative routes # to the same disk observe identical partition information including UUID. # There are some issues with this approach as well though. We won't detect # multipath disk if it doesn't any filesystems. Good news is that # target disk will always have a filesystem because curtin creates them # while installing the system. rescan_block_devices() binfo = blkid(cache=False) LOG.debug("detect_multipath found blkid info: %s", binfo) # get_devices_for_mp may return multiple devices by design. It is not yet # implemented but it should return multiple devices when installer creates # separate disk partitions for / and /boot. We need to do UUID-based # multipath detection against each of target devices. 
target_devs = get_devices_for_mp(target_mountpoint) LOG.debug("target_devs: %s" % target_devs) for devpath, data in binfo.items(): # We need to figure out UUID of the target device first if devpath not in target_devs: continue # This entry contains information about one of target devices target_uuid = data.get('UUID') # UUID-based multipath detection won't work if target partition # doesn't have UUID assigned if not target_uuid: LOG.warn("Target partition %s doesn't have UUID assigned", devpath) continue LOG.debug("%s: %s" % (devpath, data.get('UUID', ""))) # Iterating over available devices to see if any other device # has the same UUID as the target device. If such device exists # we probably installed the system to the multipath device. for other_devpath, other_data in binfo.items(): if ((other_data.get('UUID') == target_uuid) and (other_devpath != devpath)): return True # No other devices have the same UUID as the target devices. # We probably installed the system to the non-multipath device. return False def get_scsi_wwid(device, replace_whitespace=False): """ Issue a call to scsi_id utility to get WWID of the device. """ cmd = ['/lib/udev/scsi_id', '--whitelisted', '--device=%s' % device] if replace_whitespace: cmd.append('--replace-whitespace') try: (out, err) = util.subp(cmd, capture=True) LOG.debug("scsi_id output raw:\n%s\nerror:\n%s", out, err) scsi_wwid = out.rstrip('\n') return scsi_wwid except util.ProcessExecutionError as e: LOG.warn("Failed to get WWID: %s", e) return None def get_multipath_wwids(): """ Get WWIDs of all multipath devices available in the system. """ multipath_devices = set() multipath_wwids = set() devuuids = [(d, i['UUID']) for d, i in blkid().items() if 'UUID' in i] # Looking for two disks which contain filesystems with the same UUID. for (dev1, uuid1), (dev2, uuid2) in itertools.combinations(devuuids, 2): if uuid1 == uuid2: multipath_devices.add(get_blockdev_for_partition(dev1)[0]) for device in multipath_devices: wwid = get_scsi_wwid(device) # Function get_scsi_wwid() may return None in case of errors or # WWID field may be empty for some buggy disk. We don't want to # propagate both of these value further to avoid generation of # incorrect /etc/multipath/bindings file. if wwid: multipath_wwids.add(wwid) return multipath_wwids def get_root_device(dev, paths=None): """ Get root partition for specified device, based on presence of any paths in the provided paths list: """ if paths is None: paths = ["curtin"] LOG.debug('Searching for filesystem on %s containing one of: %s', dev, paths) partitions = get_pardevs_on_blockdevs(dev) target = None tmp_mount = tempfile.mkdtemp() for i in partitions: dev_path = partitions[i]['device_path'] mp = None try: util.do_mount(dev_path, tmp_mount) mp = tmp_mount for path in paths: fullpath = os.path.join(tmp_mount, path) if os.path.isdir(fullpath): target = dev_path LOG.debug("Found path '%s' on device '%s'", path, dev_path) break except Exception: pass finally: if mp: util.do_umount(mp) os.rmdir(tmp_mount) if target is None: raise ValueError( "Did not find any filesystem on %s that contained one of %s" % (dev, paths)) return target def get_blockdev_sector_size(devpath): """ Get the logical and physical sector size of device at devpath Returns a tuple of integer values (logical, physical). """ info = _lsblock([devpath]) LOG.debug('get_blockdev_sector_size: info:\n%s' % util.json_dumps(info)) # (LP: 1598310) The call to _lsblock() may return multiple results. 
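    # (added example) e.g. _lsblock(['/dev/vda']) can report vda together with
    # its partitions vda1, vda2, ...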
# If it does, then search for a result with the correct device path. # If no such device is found among the results, then fall back to previous # behavior, which was taking the first of the results assert len(info) > 0 for (k, v) in info.items(): if v.get('device_path') == devpath: parent = k break else: parent = list(info.keys())[0] return (int(info[parent]['LOG-SEC']), int(info[parent]['PHY-SEC'])) def get_volume_uuid(path): """ Get uuid of disk with given path. This address uniquely identifies the device and remains consistant across reboots """ (out, _err) = util.subp(["blkid", "-o", "export", path], capture=True) for line in out.splitlines(): if "UUID" in line: return line.split('=')[-1] return '' def get_mountpoints(): """ Returns a list of all mountpoints where filesystems are currently mounted. """ info = _lsblock() proc_mounts = [mp for (dev, mp, vfs, opts, freq, passno) in get_proc_mounts()] lsblock_mounts = list(i.get("MOUNTPOINT") for name, i in info.items() if i.get("MOUNTPOINT") is not None and i.get("MOUNTPOINT") != "") return list(set(proc_mounts + lsblock_mounts)) def get_proc_mounts(): """ Returns a list of tuples for each entry in /proc/mounts """ mounts = [] with open("/proc/mounts", "r") as fp: for line in fp: try: (dev, mp, vfs, opts, freq, passno) = \ line.strip().split(None, 5) mounts.append((dev, mp, vfs, opts, freq, passno)) except ValueError: continue return mounts def get_dev_disk_byid(): """ Construct a dictionary mapping devname to disk/by-id paths :returns: Dictionary populated by examining /dev/disk/by-id/* { '/dev/sda': '/dev/disk/by-id/virtio-aaaa', '/dev/sda1': '/dev/disk/by-id/virtio-aaaa-part1', } """ prefix = '/dev/disk/by-id' return { os.path.realpath(byid): byid for byid in [os.path.join(prefix, path) for path in os.listdir(prefix)] } def disk_to_byid_path(kname): """" Return a /dev/disk/by-id path to kname if present. """ mapping = get_dev_disk_byid() return mapping.get(dev_path(kname)) def lookup_disk(serial): """ Search for a disk by its serial number using /dev/disk/by-id/ """ # Get all volumes in /dev/disk/by-id/ containing the serial string. The # string specified can be either in the short or long serial format # hack, some serials have spaces, udev usually converts ' ' -> '_' serial_udev = serial.replace(' ', '_') LOG.info('Processing serial %s via udev to %s', serial, serial_udev) disks = list(filter(lambda x: serial_udev in x, os.listdir("/dev/disk/by-id/"))) if not disks or len(disks) < 1: raise ValueError("no disk with serial '%s' found" % serial_udev) # Sort by length and take the shortest path name, as the longer path names # will be the partitions on the disk. 
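    # (illustrative, added; names follow the get_dev_disk_byid() docstring
    # above) 'virtio-aaaa' is the whole disk while 'virtio-aaaa-part1' is its
    # first partition, so the shortest match is the disk itself.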
Then use os.path.realpath to # determine the path to the block device in /dev/ disks.sort(key=lambda x: len(x)) path = os.path.realpath("/dev/disk/by-id/%s" % disks[0]) if not os.path.exists(path): raise ValueError("path '%s' to block device for disk with serial '%s' \ does not exist" % (path, serial_udev)) return path def sysfs_partition_data(blockdev=None, sysfs_path=None): # given block device or sysfs_path, return a list of tuples # of (kernel_name, number, offset, size) if blockdev: blockdev = os.path.normpath(blockdev) sysfs_path = sys_block_path(blockdev) elif sysfs_path: # use normpath to ensure that paths with trailing slash work sysfs_path = os.path.normpath(sysfs_path) blockdev = os.path.join('/dev', os.path.basename(sysfs_path)) else: raise ValueError("Blockdev and sysfs_path cannot both be None") # queue property is only on parent devices, ie, we can't read # /sys/class/block/vda/vda1/queue/* as queue is only on the # parent device sysfs_prefix = sysfs_path (parent, partnum) = get_blockdev_for_partition(blockdev) if partnum: sysfs_prefix = sys_block_path(parent) partnum = int(partnum) block_size = int(util.load_file(os.path.join( sysfs_prefix, 'queue/logical_block_size'))) unit = block_size ptdata = [] for part_sysfs in get_sysfs_partitions(sysfs_prefix): data = {} for sfile in ('partition', 'start', 'size'): dfile = os.path.join(part_sysfs, sfile) if not os.path.isfile(dfile): continue data[sfile] = int(util.load_file(dfile)) if partnum is None or data['partition'] == partnum: ptdata.append((path_to_kname(part_sysfs), data['partition'], data['start'] * unit, data['size'] * unit,)) return ptdata def get_part_table_type(device): """ check the type of partition table present on the specified device returns None if no ptable was present or device could not be read """ # it is neccessary to look for the gpt signature first, then the dos # signature, because a gpt formatted disk usually has a valid mbr to # protect the disk from being modified by older partitioning tools return ('gpt' if check_efi_signature(device) else 'dos' if check_dos_signature(device) else None) def check_dos_signature(device): """ check if there is a dos partition table signature present on device """ # the last 2 bytes of a dos partition table have the signature with the # value 0xAA55. the dos partition table is always 0x200 bytes long, even if # the underlying disk uses a larger logical block size, so the start of # this signature must be at 0x1fe # https://en.wikipedia.org/wiki/Master_boot_record#Sector_layout return (is_block_device(device) and util.file_size(device) >= 0x200 and (util.load_file(device, decode=False, read_len=2, offset=0x1fe) == b'\x55\xAA')) def check_efi_signature(device): """ check if there is a gpt partition table signature present on device """ # the gpt partition table header is always on lba 1, regardless of the # logical block size used by the underlying disk. therefore, a static # offset cannot be used, the offset to the start of the table header is # always the sector size of the disk # the start of the gpt partition table header shoult have the signaure # 'EFI PART'. 
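    # (added example) with 512-byte logical sectors that puts the signature at
    # offset 0x200; on a 4K-native disk it sits at offset 0x1000.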
# https://en.wikipedia.org/wiki/GUID_Partition_Table sector_size = get_blockdev_sector_size(device)[0] return (is_block_device(device) and util.file_size(device) >= 2 * sector_size and (util.load_file(device, decode=False, read_len=8, offset=sector_size) == b'EFI PART')) def is_extended_partition(device): """ check if the specified device path is a dos extended partition """ # an extended partition must be on a dos disk, must be a partition, must be # within the first 4 partitions and will have a valid dos signature, # because the format of the extended partition matches that of a real mbr (parent_dev, part_number) = get_blockdev_for_partition(device) return (get_part_table_type(parent_dev) in ['dos', 'msdos'] and part_number is not None and int(part_number) <= 4 and check_dos_signature(device)) def is_zfs_member(device): """ check if the specified device path is a zfs member """ info = _lsblock() kname = path_to_kname(device) if kname in info and info[kname].get('FSTYPE') == 'zfs_member': return True return False @contextmanager def exclusive_open(path, exclusive=True): """ Obtain an exclusive file-handle to the file/device specified unless caller specifics exclusive=False. """ mode = 'rb+' fd = None if not os.path.exists(path): raise ValueError("No such file at path: %s" % path) flags = os.O_RDWR if exclusive: flags += os.O_EXCL try: fd = os.open(path, flags) try: fd_needs_closing = True with os.fdopen(fd, mode) as fo: yield fo fd_needs_closing = False except OSError: LOG.exception("Failed to create file-object from fd") raise finally: # python2 leaves fd open if there os.fdopen fails if fd_needs_closing and sys.version_info.major == 2: os.close(fd) except OSError: LOG.error("Failed to exclusively open path: %s", path) holders = get_holders(path) LOG.error('Device holders with exclusive access: %s', holders) mount_points = util.list_device_mounts(path) LOG.error('Device mounts: %s', mount_points) fusers = util.fuser_mount(path) LOG.error('Possible users of %s:\n%s', path, fusers) raise def wipe_file(path, reader=None, buflen=4 * 1024 * 1024, exclusive=True): """ wipe the existing file at path. if reader is provided, it will be called as a 'reader(buflen)' to provide data for each write. Otherwise, zeros are used. writes will be done in size of buflen. """ if reader: readfunc = reader else: buf = buflen * b'\0' def readfunc(size): return buf size = util.file_size(path) LOG.debug("%s is %s bytes. wiping with buflen=%s", path, size, buflen) with exclusive_open(path, exclusive=exclusive) as fp: while True: pbuf = readfunc(buflen) pos = fp.tell() if len(pbuf) != buflen and len(pbuf) + pos < size: raise ValueError( "short read on reader got %d expected %d after %d" % (len(pbuf), buflen, pos)) if pos + buflen >= size: fp.write(pbuf[0:size-pos]) break else: fp.write(pbuf) def quick_zero(path, partitions=True, exclusive=True): """ zero 1M at front, 1M at end, and 1M at front if this is a block device and partitions is true, then zero 1M at front and end of each partition. 
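    (illustrative, added) e.g. quick_zero('/dev/sdX') on a hypothetical whole
    disk zeroes the first and last MiB of the disk and, since partitions
    defaults to True, the first and last MiB of each of its partitions.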
""" buflen = 1024 count = 1024 zero_size = buflen * count offsets = [0, -zero_size] is_block = is_block_device(path) if not (is_block or os.path.isfile(path)): raise ValueError("%s: not an existing file or block device", path) pt_names = [] if partitions and is_block: ptdata = sysfs_partition_data(path) for kname, ptnum, start, size in ptdata: pt_names.append((dev_path(kname), kname, ptnum)) pt_names.reverse() for (pt, kname, ptnum) in pt_names: LOG.debug('Wiping path: dev:%s kname:%s partnum:%s', pt, kname, ptnum) quick_zero(pt, partitions=False) LOG.debug("wiping 1M on %s at offsets %s", path, offsets) return zero_file_at_offsets(path, offsets, buflen=buflen, count=count, exclusive=exclusive) def zero_file_at_offsets(path, offsets, buflen=1024, count=1024, strict=False, exclusive=True): """ write zeros to file at specified offsets """ bmsg = "{path} (size={size}): " m_short = bmsg + "{tot} bytes from {offset} > size." m_badoff = bmsg + "invalid offset {offset}." if not strict: m_short += " Shortened to {wsize} bytes." m_badoff += " Skipping." buf = b'\0' * buflen tot = buflen * count msg_vals = {'path': path, 'tot': buflen * count} # allow caller to control if we require exclusive open with exclusive_open(path, exclusive=exclusive) as fp: # get the size by seeking to end. fp.seek(0, 2) size = fp.tell() msg_vals['size'] = size for offset in offsets: if offset < 0: pos = size + offset else: pos = offset msg_vals['offset'] = offset msg_vals['pos'] = pos if pos > size or pos < 0: if strict: raise ValueError(m_badoff.format(**msg_vals)) else: LOG.debug(m_badoff.format(**msg_vals)) continue msg_vals['wsize'] = size - pos if pos + tot > size: if strict: raise ValueError(m_short.format(**msg_vals)) else: LOG.debug(m_short.format(**msg_vals)) fp.seek(pos) for i in range(count): pos = fp.tell() if pos + buflen > size: fp.write(buf[0:size-pos]) else: fp.write(buf) def wipe_volume(path, mode="superblock", exclusive=True): """wipe a volume/block device :param path: a path to a block device :param mode: how to wipe it. pvremove: wipe a lvm physical volume zero: write zeros to the entire volume random: write random data (/dev/urandom) to the entire volume superblock: zero the beginning and the end of the volume superblock-recursive: zero the beginning of the volume, the end of the volume and beginning and end of any partitions that are known to be on this device. :param exclusive: boolean to control how path is opened """ if mode == "pvremove": # We need to use --force --force in case it's already in a volgroup and # pvremove doesn't want to remove it # If pvremove is run and there is no label on the system, # then it exits with 5. That is also okay, because we might be # wiping something that is already blank util.subp(['pvremove', '--force', '--force', '--yes', path], rcs=[0, 5], capture=True) lvm.lvm_scan() elif mode == "zero": wipe_file(path, exclusive=exclusive) elif mode == "random": with open("/dev/urandom", "rb") as reader: wipe_file(path, reader=reader.read, exclusive=exclusive) elif mode == "superblock": quick_zero(path, partitions=False, exclusive=exclusive) elif mode == "superblock-recursive": quick_zero(path, partitions=True, exclusive=exclusive) else: raise ValueError("wipe mode %s not supported" % mode) def storage_config_required_packages(storage_config, mapping): """Read storage configuration dictionary and determine which packages are required for the supplied configuration to function. Return a list of packaged to install. 
""" if not storage_config or not isinstance(storage_config, dict): raise ValueError('Invalid storage configuration. ' 'Must be a dict:\n %s' % storage_config) if not mapping or not isinstance(mapping, dict): raise ValueError('Invalid storage mapping. Must be a dict') if 'storage' in storage_config: storage_config = storage_config.get('storage') needed_packages = [] # get reqs by device operation type dev_configs = set(operation['type'] for operation in storage_config['config']) for dev_type in dev_configs: if dev_type in mapping: needed_packages.extend(mapping[dev_type]) # for any format operations, check the fstype and # determine if we need any mkfs tools as well. format_configs = set([operation['fstype'] for operation in storage_config['config'] if operation['type'] == 'format']) for format_type in format_configs: if format_type in mapping: needed_packages.extend(mapping[format_type]) return needed_packages def detect_required_packages_mapping(): """Return a dictionary providing a versioned configuration which maps storage configuration elements to the packages which are required for functionality. The mapping key is either a config type value, or an fstype value. """ version = 1 mapping = { version: { 'handler': storage_config_required_packages, 'mapping': { 'bcache': ['bcache-tools'], 'btrfs': ['btrfs-tools'], 'ext2': ['e2fsprogs'], 'ext3': ['e2fsprogs'], 'ext4': ['e2fsprogs'], 'jfs': ['jfsutils'], 'lvm_partition': ['lvm2'], 'lvm_volgroup': ['lvm2'], 'ntfs': ['ntfs-3g'], 'raid': ['mdadm'], 'reiserfs': ['reiserfsprogs'], 'xfs': ['xfsprogs'], 'zfsroot': ['zfsutils-linux', 'zfs-initramfs'], 'zfs': ['zfsutils-linux', 'zfs-initramfs'], 'zpool': ['zfsutils-linux', 'zfs-initramfs'], }, }, } return mapping # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/block/clear_holders.py000066400000000000000000000550331326565350400222150ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
""" This module provides a mechanism for shutting down virtual storage layers on top of a block device, making it possible to reuse the block device without having to reboot the system """ import errno import os import time from curtin import (block, udev, util) from curtin.swap import is_swap_device from curtin.block import lvm from curtin.block import mdadm from curtin.block import zfs from curtin.log import LOG # poll frequenty, but wait up to 60 seconds total MDADM_RELEASE_RETRIES = [0.4] * 150 def _define_handlers_registry(): """ returns instantiated dev_types """ return { 'partition': {'shutdown': wipe_superblock, 'ident': identify_partition}, 'lvm': {'shutdown': shutdown_lvm, 'ident': identify_lvm}, 'crypt': {'shutdown': shutdown_crypt, 'ident': identify_crypt}, 'raid': {'shutdown': shutdown_mdadm, 'ident': identify_mdadm}, 'bcache': {'shutdown': shutdown_bcache, 'ident': identify_bcache}, 'disk': {'ident': lambda x: False, 'shutdown': wipe_superblock}, } def get_dmsetup_uuid(device): """ get the dm uuid for a specified dmsetup device """ blockdev = block.sysfs_to_devpath(device) (out, _) = util.subp(['dmsetup', 'info', blockdev, '-C', '-o', 'uuid', '--noheadings'], capture=True) return out.strip() def get_bcache_using_dev(device, strict=True): """ Get the /sys/fs/bcache/ path of the bcache cache device bound to specified device """ # FIXME: when block.bcache is written this should be moved there sysfs_path = block.sys_block_path(device) path = os.path.realpath(os.path.join(sysfs_path, 'bcache', 'cache')) if strict and not os.path.exists(path): err = OSError( "device '{}' did not have existing syspath '{}'".format( device, path)) err.errno = errno.ENOENT raise err return path def get_bcache_sys_path(device, strict=True): """ Get the /sys/class/block//bcache path """ sysfs_path = block.sys_block_path(device, strict=strict) path = os.path.join(sysfs_path, 'bcache') if strict and not os.path.exists(path): err = OSError( "device '{}' did not have existing syspath '{}'".format( device, path)) err.errno = errno.ENOENT raise err return path def maybe_stop_bcache_device(device): """Attempt to stop the provided device_path or raise unexpected errors.""" bcache_stop = os.path.join(device, 'stop') try: util.write_file(bcache_stop, '1', mode=None) except (IOError, OSError) as e: # Note: if we get any exceptions in the above exception classes # it is a result of attempting to write "1" into the sysfs path # The range of errors changes depending on when we race with # the kernel asynchronously removing the sysfs path. Therefore # we log the exception errno we got, but do not re-raise as # the calling process is watching whether the same sysfs path # is being removed; if it fails to go away then we'll have # a log of the exceptions to debug. LOG.debug('Error writing to bcache stop file %s, device removed: %s', bcache_stop, e) def shutdown_bcache(device): """ Shut down bcache for specified bcache device 1. Stop the cacheset that `device` is connected to 2. Stop the 'device' """ if not device.startswith('/sys/class/block'): raise ValueError('Invalid Device (%s): ' 'Device path must start with /sys/class/block/', device) # bcache device removal should be fast but in an extreme # case, might require the cache device to flush large # amounts of data to a backing device. The strategy here # is to wait for approximately 30 seconds but to check # frequently since curtin cannot proceed until devices # cleared. 
removal_retries = [0.2] * 150 # 30 seconds total bcache_shutdown_message = ('shutdown_bcache running on {} has determined ' 'that the device has already been shut down ' 'during handling of another bcache dev. ' 'skipping'.format(device)) if not os.path.exists(device): LOG.info(bcache_shutdown_message) return # get slaves [vdb1, vdc], allow for slaves to not have bcache dir slave_paths = [get_bcache_sys_path(k, strict=False) for k in os.listdir(os.path.join(device, 'slaves'))] # stop cacheset if it exists bcache_cache_sysfs = get_bcache_using_dev(device, strict=False) if not os.path.exists(bcache_cache_sysfs): LOG.info('bcache cacheset already removed: %s', os.path.basename(bcache_cache_sysfs)) else: LOG.info('stopping bcache cacheset at: %s', bcache_cache_sysfs) maybe_stop_bcache_device(bcache_cache_sysfs) try: util.wait_for_removal(bcache_cache_sysfs, retries=removal_retries) except OSError: LOG.info('Failed to stop bcache cacheset %s', bcache_cache_sysfs) raise # let kernel settle before the next remove udev.udevadm_settle() # after stopping cache set, we may need to stop the device # both the dev and sysfs entry should be gone. # we know the bcacheN device is really gone when we've removed: # /sys/class/block/{bcacheN} # /sys/class/block/slaveN1/bcache # /sys/class/block/slaveN2/bcache bcache_block_sysfs = get_bcache_sys_path(device, strict=False) to_check = [device] + slave_paths found_devs = [os.path.exists(p) for p in to_check] LOG.debug('os.path.exists on blockdevs:\n%s', list(zip(to_check, found_devs))) if not any(found_devs): LOG.info('bcache backing device already removed: %s (%s)', bcache_block_sysfs, device) LOG.debug('bcache slave paths checked: %s', slave_paths) return else: LOG.info('stopping bcache backing device at: %s', bcache_block_sysfs) maybe_stop_bcache_device(bcache_block_sysfs) try: # wait for them all to go away for dev in [device, bcache_block_sysfs] + slave_paths: util.wait_for_removal(dev, retries=removal_retries) except OSError: LOG.info('Failed to stop bcache backing device %s', bcache_block_sysfs) raise return def shutdown_lvm(device): """ Shutdown specified lvm device. """ device = block.sys_block_path(device) # lvm devices have a dm directory that containes a file 'name' containing # '{volume group}-{logical volume}'. The volume can be freed using lvremove name_file = os.path.join(device, 'dm', 'name') lvm_name = util.load_file(name_file).strip() (vg_name, lv_name) = lvm.split_lvm_name(lvm_name) # use dmsetup as lvm commands require valid /etc/lvm/* metadata LOG.debug('using "dmsetup remove" on %s', lvm_name) util.subp(['dmsetup', 'remove', lvm_name]) # if that was the last lvol in the volgroup, get rid of volgroup if len(lvm.get_lvols_in_volgroup(vg_name)) == 0: util.subp(['vgremove', '--force', '--force', vg_name], rcs=[0, 5]) # refresh lvmetad lvm.lvm_scan() def shutdown_crypt(device): """ Shutdown specified cryptsetup device """ blockdev = block.sysfs_to_devpath(device) util.subp(['cryptsetup', 'remove', blockdev], capture=True) def shutdown_mdadm(device): """ Shutdown specified mdadm device. """ blockdev = block.sysfs_to_devpath(device) LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev) mdadm.mdadm_stop(blockdev) # mdadm stop operation is asynchronous so we must wait for the kernel to # release resources. 
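    # (added note) the retry loop below polls md_present() against
    # /proc/mdstat, sleeping per MDADM_RELEASE_RETRIES for up to roughly 60
    # seconds in total.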
For more details see LP: #1682456 try: for wait in MDADM_RELEASE_RETRIES: if mdadm.md_present(block.path_to_kname(blockdev)): time.sleep(wait) else: LOG.debug('%s has been removed', blockdev) break if mdadm.md_present(block.path_to_kname(blockdev)): raise OSError('Timeout exceeded for removal of %s', blockdev) except OSError: LOG.critical('Failed to stop mdadm device %s', device) if os.path.exists('/proc/mdstat'): LOG.critical("/proc/mdstat:\n%s", util.load_file('/proc/mdstat')) raise def wipe_superblock(device): """ Wrapper for block.wipe_volume compatible with shutdown function interface """ blockdev = block.sysfs_to_devpath(device) # when operating on a disk that used to have a dos part table with an # extended partition, attempting to wipe the extended partition will fail if block.is_extended_partition(blockdev): LOG.info("extended partitions do not need wiping, so skipping: '%s'", blockdev) else: # release zfs member by exporting the pool if block.is_zfs_member(blockdev): poolname = zfs.device_to_poolname(blockdev) zfs.zpool_export(poolname) if is_swap_device(blockdev): shutdown_swap(blockdev) # some volumes will be claimed by the bcache layer but do not surface # an actual /dev/bcacheN device which owns the parts (backing, cache) # The result is that some volumes cannot be wiped while bcache claims # the device. Resolve this by stopping bcache layer on those volumes # if present. for bcache_path in ['bcache', 'bcache/set']: stop_path = os.path.join(device, bcache_path) if os.path.exists(stop_path): LOG.debug('Attempting to release bcache layer from device: %s', device) maybe_stop_bcache_device(stop_path) continue _wipe_superblock(blockdev) def _wipe_superblock(blockdev, exclusive=True): """ No checks, just call wipe_volume """ retries = [1, 3, 5, 7] LOG.info('wiping superblock on %s', blockdev) for attempt, wait in enumerate(retries): LOG.debug('wiping %s attempt %s/%s', blockdev, attempt + 1, len(retries)) try: block.wipe_volume(blockdev, mode='superblock', exclusive=exclusive) LOG.debug('successfully wiped device %s on attempt %s/%s', blockdev, attempt + 1, len(retries)) return except OSError: if attempt + 1 >= len(retries): raise else: LOG.debug("wiping device '%s' failed on attempt" " %s/%s. 
sleeping %ss before retry", blockdev, attempt + 1, len(retries), wait) time.sleep(wait) def identify_lvm(device): """ determine if specified device is a lvm device """ return (block.path_to_kname(device).startswith('dm') and get_dmsetup_uuid(device).startswith('LVM')) def identify_crypt(device): """ determine if specified device is dm-crypt device """ return (block.path_to_kname(device).startswith('dm') and get_dmsetup_uuid(device).startswith('CRYPT')) def identify_mdadm(device): """ determine if specified device is a mdadm device """ # RAID0 and 1 devices can be partitioned and the partitions are *not* # raid devices with a sysfs 'md' subdirectory partition = identify_partition(device) return block.path_to_kname(device).startswith('md') and not partition def identify_bcache(device): """ determine if specified device is a bcache device """ return block.path_to_kname(device).startswith('bcache') def identify_partition(device): """ determine if specified device is a partition """ path = os.path.join(block.sys_block_path(device), 'partition') return os.path.exists(path) def shutdown_swap(path): """release swap device from kernel swap pool if present""" procswaps = util.load_file('/proc/swaps') for swapline in procswaps.splitlines(): if swapline.startswith(path): msg = ('Removing %s from active use as swap device, ' 'needed for storage config' % path) LOG.warning(msg) util.subp(['swapoff', path]) return def get_holders(device): """ Look up any block device holders, return list of knames """ # block.sys_block_path works when given a /sys or /dev path sysfs_path = block.sys_block_path(device) # get holders holders = os.listdir(os.path.join(sysfs_path, 'holders')) LOG.debug("devname '%s' had holders: %s", device, holders) return holders def gen_holders_tree(device): """ generate a tree representing the current storage hirearchy above 'device' """ device = block.sys_block_path(device) dev_name = block.path_to_kname(device) # the holders for a device should consist of the devices in the holders/ # dir in sysfs and any partitions on the device. this ensures that a # storage tree starting from a disk will include all devices holding the # disk's partitions holder_paths = ([block.sys_block_path(h) for h in get_holders(device)] + block.get_sysfs_partitions(device)) # the DEV_TYPE registry contains a function under the key 'ident' for each # device type entry that returns true if the device passed to it is of the # correct type. there should never be a situation in which multiple # identify functions return true. therefore, it will always work to take # the device type with the first identify function that returns true as the # device type for the current device. in the event that no identify # functions return true, the device will be treated as a disk # (DEFAULT_DEV_TYPE). the identify function for disk never returns true. 
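    # (illustrative, added) e.g. for a dm-crypt mapping, identify_crypt() is
    # the ident callable that returns true, so dev_type becomes 'crypt'.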
# the next() builtin in python will not raise a StopIteration exception if # there is a default value defined dev_type = next((k for k, v in DEV_TYPES.items() if v['ident'](device)), DEFAULT_DEV_TYPE) return { 'device': device, 'dev_type': dev_type, 'name': dev_name, 'holders': [gen_holders_tree(h) for h in holder_paths], } def plan_shutdown_holder_trees(holders_trees): """ plan best order to shut down holders in, taking into account high level storage layers that may have many devices below them returns a sorted list of descriptions of storage config entries including their path in /sys/block and their dev type can accept either a single storage tree or a list of storage trees assumed to start at an equal place in storage hirearchy (i.e. a list of trees starting from disk) """ # holds a temporary registry of holders to allow cross references # key = device sysfs path, value = {} of priority level, shutdown function reg = {} # normalize to list of trees if not isinstance(holders_trees, (list, tuple)): holders_trees = [holders_trees] def flatten_holders_tree(tree, level=0): """ add entries from holders tree to registry with level key corresponding to how many layers from raw disks the current device is at """ device = tree['device'] # always go with highest level if current device has been # encountered already. since the device and everything above it is # re-added to the registry it ensures that any increase of level # required here will propagate down the tree # this handles a scenario like mdadm + bcache, where the backing # device for bcache is a 3nd level item like mdadm, but the cache # device is 1st level (disk) or second level (partition), ensuring # that the bcache item is always considered higher level than # anything else regardless of whether it was added to the tree via # the cache device or backing device first if device in reg: level = max(reg[device]['level'], level) reg[device] = {'level': level, 'device': device, 'dev_type': tree['dev_type']} # handle holders above this level for holder in tree['holders']: flatten_holders_tree(holder, level=level + 1) # flatten the holders tree into the registry for holders_tree in holders_trees: flatten_holders_tree(holders_tree) # return list of entry dicts with highest level first return [reg[k] for k in sorted(reg, key=lambda x: reg[x]['level'] * -1)] def format_holders_tree(holders_tree): """ draw a nice dirgram of the holders tree """ # spacer styles based on output of 'tree --charset=ascii' spacers = (('`-- ', ' ' * 4), ('|-- ', '|' + ' ' * 3)) def format_tree(tree): """ format entry and any subentries """ result = [tree['name']] holders = tree['holders'] for (holder_no, holder) in enumerate(holders): spacer_style = spacers[min(len(holders) - (holder_no + 1), 1)] subtree_lines = format_tree(holder) for (line_no, line) in enumerate(subtree_lines): result.append(spacer_style[min(line_no, 1)] + line) return result return '\n'.join(format_tree(holders_tree)) def get_holder_types(tree): """ get flattened list of types of holders in holders tree and the devices they correspond to """ types = {(tree['dev_type'], tree['device'])} for holder in tree['holders']: types.update(get_holder_types(holder)) return types def assert_clear(base_paths): """ Check if all paths in base_paths are clear to use """ valid = ('disk', 'partition') if not isinstance(base_paths, (list, tuple)): base_paths = [base_paths] base_paths = [block.sys_block_path(path) for path in base_paths] for holders_tree in [gen_holders_tree(p) for p in base_paths]: if 
any(holder_type not in valid and path not in base_paths for (holder_type, path) in get_holder_types(holders_tree)): raise OSError('Storage not clear, remaining:\n{}' .format(format_holders_tree(holders_tree))) def clear_holders(base_paths, try_preserve=False): """ Clear all storage layers depending on the devices specified in 'base_paths' A single device or list of devices can be specified. Device paths can be specified either as paths in /dev or /sys/block Will throw OSError if any holders could not be shut down """ # handle single path if not isinstance(base_paths, (list, tuple)): base_paths = [base_paths] # get current holders and plan how to shut them down holder_trees = [gen_holders_tree(path) for path in base_paths] LOG.info('Current device storage tree:\n%s', '\n'.join(format_holders_tree(tree) for tree in holder_trees)) ordered_devs = plan_shutdown_holder_trees(holder_trees) # run wipe-superblock on layered devices for dev_info in ordered_devs: dev_type = DEV_TYPES.get(dev_info['dev_type']) shutdown_function = dev_type.get('shutdown') if not shutdown_function: continue if try_preserve and shutdown_function in DATA_DESTROYING_HANDLERS: LOG.info('shutdown function for holder type: %s is destructive. ' 'attempting to preserve data, so skipping' % dev_info['dev_type']) continue # for layered block devices, wipe first, then shutdown if dev_info['dev_type'] in ['bcache', 'raid']: LOG.info("Wiping superblock on layered device type: " "'%s' syspath: '%s'", dev_info['dev_type'], dev_info['device']) # we just want to wipe data, we don't care about exclusive _wipe_superblock(block.sysfs_to_devpath(dev_info['device']), exclusive=False) # run shutdown functions for dev_info in ordered_devs: dev_type = DEV_TYPES.get(dev_info['dev_type']) shutdown_function = dev_type.get('shutdown') if not shutdown_function: continue if try_preserve and shutdown_function in DATA_DESTROYING_HANDLERS: LOG.info('shutdown function for holder type: %s is destructive. ' 'attempting to preserve data, so skipping' % dev_info['dev_type']) continue if os.path.exists(dev_info['device']): LOG.info("shutdown running on holder type: '%s' syspath: '%s'", dev_info['dev_type'], dev_info['device']) shutdown_function(dev_info['device']) udev.udevadm_settle() def start_clear_holders_deps(): """ prepare system for clear holders to be able to scan old devices """ # a mdadm scan has to be started in case there is a md device that needs to # be detected. if the scan fails, it is either because there are no mdadm # devices on the system, or because there is a mdadm device in a damaged # state that could not be started. due to the nature of mdadm tools, it is # difficult to know which is the case. if any errors did occur, then ignore # them, since no action needs to be taken if there were no mdadm devices on # the system, and in the case where there is some mdadm metadata on a disk, # but there was not enough to start the array, the call to wipe_volume on # all disks and partitions should be sufficient to remove the mdadm # metadata mdadm.mdadm_assemble(scan=True, ignore_errors=True) # the bcache module needs to be present to properly detect bcache devs # on some systems (precise without hwe kernel) it may not be possible to # lad the bcache module bcause it is not present in the kernel. 
if this # happens then there is no need to halt installation, as the bcache devices # will never appear and will never prevent the disk from being reformatted util.load_kernel_module('bcache') # the zfs module is needed to find and export devices which may be in-use # and need to be cleared, only on xenial+. if not util.lsb_release()['codename'] in ['precise', 'trusty']: util.load_kernel_module('zfs') # anything that is not identified can assumed to be a 'disk' or similar DEFAULT_DEV_TYPE = 'disk' # handlers that should not be run if an attempt is being made to preserve data DATA_DESTROYING_HANDLERS = [wipe_superblock] # types of devices that could be encountered by clear holders and functions to # identify them and shut them down DEV_TYPES = _define_handlers_registry() # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/block/iscsi.py000066400000000000000000000415071326565350400205220ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. # This module wraps calls to the iscsiadm utility for examining iSCSI # devices. Functions prefixed with 'iscsiadm_' involve executing # the 'iscsiadm' command in a subprocess. The remaining functions handle # manipulation of the iscsiadm output. import os import re import shutil from curtin import (util, udev) from curtin.block import (get_device_slave_knames, path_to_kname) from curtin.log import LOG _ISCSI_DISKS = {} RFC4173_AUTH_REGEX = re.compile(r'''^ (?P[^:]*?):(?P[^:]*?) (?::(?P[^:]*?):(?P[^:]*?))? $ ''', re.VERBOSE) RFC4173_TARGET_REGEX = re.compile(r'''^ (?P[^@:\[\]]*|\[[^@]*\]): # greedy so ipv6 IPs are matched (?P[^:]*): (?P[^:]*): (?P[^:]*): (?P\S*) # greedy so entire suffix is matched $''', re.VERBOSE) ISCSI_PORTAL_REGEX = re.compile(r'^(?P\S*):(?P\d+)$') # @portal is of the form: HOST:PORT def assert_valid_iscsi_portal(portal): if not isinstance(portal, util.string_types): raise ValueError("iSCSI portal (%s) is not a string" % portal) m = re.match(ISCSI_PORTAL_REGEX, portal) if m is None: raise ValueError("iSCSI portal (%s) is not in the format " "(HOST:PORT)" % portal) host = m.group('host') if host.startswith('[') and host.endswith(']'): host = host[1:-1] if not util.is_valid_ipv6_address(host): raise ValueError("Invalid IPv6 address (%s) in iSCSI portal (%s)" % (host, portal)) try: port = int(m.group('port')) except ValueError: raise ValueError("iSCSI portal (%s) port (%s) is not an integer" % (portal, m.group('port'))) return host, port def iscsiadm_sessions(): cmd = ["iscsiadm", "--mode=session", "--op=show"] # rc 21 indicates no sessions currently exist, which is not # inherently incorrect (if not logged in yet) out, _ = util.subp(cmd, rcs=[0, 21], capture=True, log_captured=True) return out def iscsiadm_discovery(portal): # only supported type for now type = 'sendtargets' if not portal: raise ValueError("Portal must be specified for discovery") cmd = ["iscsiadm", "--mode=discovery", "--type=%s" % type, "--portal=%s" % portal] try: util.subp(cmd, capture=True, log_captured=True) except util.ProcessExecutionError as e: LOG.warning("iscsiadm_discovery to %s failed with exit code %d", portal, e.exit_code) raise def iscsiadm_login(target, portal): LOG.debug('iscsiadm_login: target=%s portal=%s', target, portal) cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--login'] util.subp(cmd, capture=True, log_captured=True) def iscsiadm_set_automatic(target, portal): LOG.debug('iscsiadm_set_automatic: target=%s portal=%s', target, portal) cmd 
= ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.startup', '--value=automatic'] util.subp(cmd, capture=True, log_captured=True) def iscsiadm_authenticate(target, portal, user=None, password=None, iuser=None, ipassword=None): LOG.debug('iscsiadm_authenticate: target=%s portal=%s ' 'user=%s password=%s iuser=%s ipassword=%s', target, portal, user, "HIDDEN" if password else None, iuser, "HIDDEN" if ipassword else None) if iuser or ipassword: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.authmethod', '--value=CHAP'] util.subp(cmd, capture=True, log_captured=True) if iuser: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.username_in', '--value=%s' % iuser] util.subp(cmd, capture=True, log_captured=True) if ipassword: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.password_in', '--value=%s' % ipassword] util.subp(cmd, capture=True, log_captured=True, logstring='iscsiadm --mode=node --targetname=%s ' '--portal=%s --op=update ' '--name=node.session.auth.password_in ' '--value=HIDDEN' % (target, portal)) if user or password: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.authmethod', '--value=CHAP'] util.subp(cmd, capture=True, log_captured=True) if user: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.username', '--value=%s' % user] util.subp(cmd, capture=True, log_captured=True) if password: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.password', '--value=%s' % password] util.subp(cmd, capture=True, log_captured=True, logstring='iscsiadm --mode=node --targetname=%s ' '--portal=%s --op=update ' '--name=node.session.auth.password ' '--value=HIDDEN' % (target, portal)) def iscsiadm_logout(target, portal): LOG.debug('iscsiadm_logout: target=%s portal=%s', target, portal) cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--logout'] util.subp(cmd, capture=True, log_captured=True) udev.udevadm_settle() def target_nodes_directory(state, iscsi_disk): # we just want to copy in the nodes portion target_nodes_location = os.path.dirname( os.path.join(os.path.split(state['fstab'])[0], iscsi_disk.etciscsi_nodefile[len('/etc/iscsi/'):])) os.makedirs(target_nodes_location) return target_nodes_location def restart_iscsi_service(): LOG.info('restarting iscsi service') if util.uses_systemd(): cmd = ['systemctl', 'reload-or-restart', 'open-iscsi'] else: cmd = ['service', 'open-iscsi', 'restart'] util.subp(cmd, capture=True) def save_iscsi_config(iscsi_disk): state = util.load_command_environment() # A nodes directory will be created in the same directory as the # fstab in the configuration. 
This will then be copied onto the # system later if state['fstab']: target_nodes_location = target_nodes_directory(state, iscsi_disk) shutil.copy(iscsi_disk.etciscsi_nodefile, target_nodes_location) else: LOG.info("fstab configuration is not present in environment, " "so cannot locate an appropriate directory to write " "iSCSI node file in so not writing iSCSI node file") def ensure_disk_connected(rfc4173, write_config=True): global _ISCSI_DISKS iscsi_disk = _ISCSI_DISKS.get(rfc4173) if not iscsi_disk: iscsi_disk = IscsiDisk(rfc4173) try: iscsi_disk.connect() except util.ProcessExecutionError: LOG.error('Unable to connect to iSCSI disk (%s)' % rfc4173) # what should we do in this case? raise if write_config: save_iscsi_config(iscsi_disk) _ISCSI_DISKS.update({rfc4173: iscsi_disk}) # this is just a sanity check that the disk is actually present and # the above did what we expected if not os.path.exists(iscsi_disk.devdisk_path): LOG.warn('Unable to find iSCSI disk for target (%s) by path (%s)', iscsi_disk.target, iscsi_disk.devdisk_path) return iscsi_disk def connected_disks(): global _ISCSI_DISKS return _ISCSI_DISKS def get_iscsi_disks_from_config(cfg): """Parse a curtin storage config and return a list of iscsi disk objects for each configuration present """ if not cfg: cfg = {} sconfig = cfg.get('storage', {}).get('config', {}) if not sconfig: LOG.warning('Configuration dictionary did not contain' ' a storage configuration') return [] # Construct IscsiDisk objects for each iscsi volume present iscsi_disks = [IscsiDisk(disk['path']) for disk in sconfig if disk['type'] == 'disk' and disk.get('path', "").startswith('iscsi:')] LOG.debug('Found %s iscsi disks in storage config', len(iscsi_disks)) return iscsi_disks def disconnect_target_disks(target_root_path=None): target_nodes_path = util.target_path(target_root_path, '/etc/iscsi/nodes') fails = [] if os.path.isdir(target_nodes_path): for target in os.listdir(target_nodes_path): if target not in iscsiadm_sessions(): LOG.debug('iscsi target %s not active, skipping', target) continue # conn is "host,port,lun" for conn in os.listdir( os.path.sep.join([target_nodes_path, target])): host, port, _ = conn.split(',') try: util.subp(['sync']) iscsiadm_logout(target, '%s:%s' % (host, port)) except util.ProcessExecutionError as e: fails.append(target) LOG.warn("Unable to logout of iSCSI target %s: %s", target, e) else: LOG.warning('Skipping disconnect: failed to find iscsi nodes path: %s', target_nodes_path) if fails: raise RuntimeError( "Unable to logout of iSCSI targets: %s" % ', '.join(fails)) # Determines if a /dev/disk/by-path symlink matching the udev pattern # for iSCSI disks is pointing at @kname def kname_is_iscsi(kname): by_path = "/dev/disk/by-path" if os.path.isdir(by_path): for path in os.listdir(by_path): path_target = os.path.realpath(os.path.sep.join([by_path, path])) if kname in path_target and 'iscsi' in path: LOG.debug('kname_is_iscsi: ' 'found by-path link %s for kname %s', path, kname) return True LOG.debug('kname_is_iscsi: no iscsi disk found for kname %s' % kname) return False def volpath_is_iscsi(volume_path): """ Determine if the volume_path's kname is backed by iSCSI. 
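    (illustrative, added) e.g. an LVM logical volume whose physical volume
    lives on an iSCSI LUN is reported as iSCSI-backed here.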
Recursively check volume_path's slave devices as well in case volume_path is a stacked block device (like LVM/MD) returns a boolean """ if not volume_path: raise ValueError("Invalid input for volume_path: '%s'", volume_path) volume_path_slaves = get_device_slave_knames(volume_path) LOG.debug('volume_path=%s found slaves: %s', volume_path, volume_path_slaves) knames = [path_to_kname(volume_path)] + volume_path_slaves return any([kname_is_iscsi(kname) for kname in knames]) class IscsiDisk(object): # Per Debian bug 804162, the iscsi specifier looks like # TARGETSPEC=host:proto:port:lun:targetname # root=iscsi:$TARGETSPEC # root=iscsi:user:password@$TARGETSPEC # root=iscsi:user:password:initiatoruser:initiatorpassword@$TARGETSPEC def __init__(self, rfc4173): auth_m = None _rfc4173 = rfc4173 if not rfc4173.startswith('iscsi:'): raise ValueError('iSCSI specification (%s) did not start with ' 'iscsi:. iSCSI disks must be specified as ' 'iscsi:[user:password[:initiatoruser:' 'initiatorpassword]@]' 'host:proto:port:lun:targetname' % _rfc4173) rfc4173 = rfc4173[6:] if '@' in rfc4173: if rfc4173.count('@') != 1: raise ValueError('Only one @ symbol allowed in iSCSI disk ' 'specification (%s). iSCSI disks must be ' 'specified as' 'iscsi:[user:password[:initiatoruser:' 'initiatorpassword]@]' 'host:proto:port:lun:targetname' % _rfc4173) auth, target = rfc4173.split('@') auth_m = RFC4173_AUTH_REGEX.match(auth) if auth_m is None: raise ValueError('Invalid authentication specified for iSCSI ' 'disk (%s). iSCSI disks must be specified as ' 'iscsi:[user:password[:initiatoruser:' 'initiatorpassword]@]' 'host:proto:port:lun:targetname' % _rfc4173) else: target = rfc4173 target_m = RFC4173_TARGET_REGEX.match(target) if target_m is None: raise ValueError('Invalid target specified for iSCSI disk (%s). 
' 'iSCSI disks must be specified as ' 'iscsi:[user:password[:initiatoruser:' 'initiatorpassword]@]' 'host:proto:port:lun:targetname' % _rfc4173) if target_m.group('proto') and target_m.group('proto') != '6': LOG.warn('Specified protocol for iSCSI (%s) is unsupported, ' 'assuming 6 (TCP)', target_m.group('proto')) if not target_m.group('host') or not target_m.group('targetname'): raise ValueError('Both host and targetname must be specified for ' 'iSCSI disks') if auth_m: self.user = auth_m.group('user') self.password = auth_m.group('password') self.iuser = auth_m.group('initiatoruser') self.ipassword = auth_m.group('initiatorpassword') else: self.user = None self.password = None self.iuser = None self.ipassword = None self.host = target_m.group('host') self.proto = '6' self.lun = int(target_m.group('lun')) if target_m.group('lun') else 0 self.target = target_m.group('targetname') try: self.port = int(target_m.group('port')) if target_m.group('port') \ else 3260 except ValueError: raise ValueError('Specified iSCSI port (%s) is not an integer' % target_m.group('port')) portal = '%s:%s' % (self.host, self.port) if self.host.startswith('[') and self.host.endswith(']'): self.host = self.host[1:-1] if not util.is_valid_ipv6_address(self.host): raise ValueError('Specified iSCSI IPv6 address (%s) is not ' 'valid' % self.host) portal = '[%s]:%s' % (self.host, self.port) assert_valid_iscsi_portal(portal) self.portal = portal def __str__(self): rep = 'iscsi' if self.user: rep += ':%s:PASSWORD' % self.user if self.iuser: rep += ':%s:IPASSWORD' % self.iuser rep += ':%s:%s:%s:%s:%s' % (self.host, self.proto, self.port, self.lun, self.target) return rep @property def etciscsi_nodefile(self): return '/etc/iscsi/nodes/%s/%s,%s,%s/default' % ( self.target, self.host, self.port, self.lun) @property def devdisk_path(self): return '/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s' % ( self.portal, self.target, self.lun) def connect(self): if self.target in iscsiadm_sessions(): return iscsiadm_discovery(self.portal) iscsiadm_authenticate(self.target, self.portal, self.user, self.password, self.iuser, self.ipassword) iscsiadm_login(self.target, self.portal) udev.udevadm_settle(self.devdisk_path) iscsiadm_set_automatic(self.target, self.portal) def disconnect(self): if self.target not in iscsiadm_sessions(): LOG.warning('Iscsi target %s not in active iscsi sessions', self.target) return try: util.subp(['sync']) iscsiadm_logout(self.target, self.portal) except util.ProcessExecutionError as e: LOG.warn("Unable to logout of iSCSI target %s from portal %s: %s", self.target, self.portal, e) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/block/lvm.py000066400000000000000000000051441326565350400202030ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
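# (illustrative sketch, added; not part of the original module) the query
# helpers below all funnel through _filter_lvm_info(); for example
# get_lvols_in_volgroup('vg0') runs
#     lvdisplay -C --separator = --noheadings -o vg_name,lv_name
# and keeps the lv_name column of rows whose vg_name equals 'vg0'.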
""" This module provides some helper functions for manipulating lvm devices """ from curtin import util from curtin.log import LOG import os # separator to use for lvm/dm tools _SEP = '=' def _filter_lvm_info(lvtool, match_field, query_field, match_key): """ filter output of pv/vg/lvdisplay tools """ (out, _) = util.subp([lvtool, '-C', '--separator', _SEP, '--noheadings', '-o', ','.join([match_field, query_field])], capture=True) return [qf for (mf, qf) in [l.strip().split(_SEP) for l in out.strip().splitlines()] if mf == match_key] def get_pvols_in_volgroup(vg_name): """ get physical volumes used by volgroup """ return _filter_lvm_info('pvdisplay', 'vg_name', 'pv_name', vg_name) def get_lvols_in_volgroup(vg_name): """ get logical volumes in volgroup """ return _filter_lvm_info('lvdisplay', 'vg_name', 'lv_name', vg_name) def split_lvm_name(full): """ split full lvm name into tuple of (volgroup, lv_name) """ # 'dmsetup splitname' is the authoratative source for lvm name parsing (out, _) = util.subp(['dmsetup', 'splitname', full, '-c', '--noheadings', '--separator', _SEP, '-o', 'vg_name,lv_name'], capture=True) return out.strip().split(_SEP) def lvmetad_running(): """ check if lvmetad is running """ return os.path.exists(os.environ.get('LVM_LVMETAD_PIDFILE', '/run/lvmetad.pid')) def lvm_scan(): """ run full scan for volgroups, logical volumes and physical volumes """ # the lvm tools lvscan, vgscan and pvscan on ubuntu precise do not # support the flag --cache. the flag is present for the tools in ubuntu # trusty and later. since lvmetad is used in current releases of # ubuntu, the --cache flag is needed to ensure that the data cached by # lvmetad is updated. # before appending the cache flag though, check if lvmetad is running. this # ensures that we do the right thing even if lvmetad is supported but is # not running release = util.lsb_release().get('codename') if release in [None, 'UNAVAILABLE']: LOG.warning('unable to find release number, assuming xenial or later') release = 'xenial' for cmd in [['pvscan'], ['vgscan', '--mknodes']]: if release != 'precise' and lvmetad_running(): cmd.append('--cache') util.subp(cmd, capture=True) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/block/mdadm.py000066400000000000000000000612221326565350400204660ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. # This module wraps calls to the mdadm utility for examing Linux SoftRAID # virtual devices. Functions prefixed with 'mdadm_' involve executing # the 'mdadm' command in a subprocess. The remaining functions handle # manipulation of the mdadm output. import os import re import shlex from subprocess import CalledProcessError import time from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path) from curtin.block import get_holders from curtin import (util, udev) from curtin.log import LOG NOSPARE_RAID_LEVELS = [ 'linear', 'raid0', '0', 0, ] SPARE_RAID_LEVELS = [ 'raid1', 'stripe', 'mirror', '1', 1, 'raid4', '4', 4, 'raid5', '5', 5, 'raid6', '6', 6, 'raid10', '10', 10, ] VALID_RAID_LEVELS = NOSPARE_RAID_LEVELS + SPARE_RAID_LEVELS # https://www.kernel.org/doc/Documentation/md.txt ''' clear No devices, no size, no level Writing is equivalent to STOP_ARRAY ioctl inactive May have some settings, but array is not active all IO results in error When written, doesn't tear down array, but just stops it suspended (not supported yet) All IO requests will block. The array can be reconfigured. 
Writing this, if accepted, will block until array is quiessent readonly no resync can happen. no superblocks get written. write requests fail read-auto like readonly, but behaves like 'clean' on a write request. clean - no pending writes, but otherwise active. When written to inactive array, starts without resync If a write request arrives then if metadata is known, mark 'dirty' and switch to 'active'. if not known, block and switch to write-pending If written to an active array that has pending writes, then fails. active fully active: IO and resync can be happening. When written to inactive array, starts with resync write-pending clean, but writes are blocked waiting for 'active' to be written. active-idle like active, but no writes have been seen for a while (safe_mode_delay). ''' ERROR_RAID_STATES = [ 'clear', 'inactive', 'suspended', ] READONLY_RAID_STATES = [ 'readonly', ] READWRITE_RAID_STATES = [ 'read-auto', 'clean', 'active', 'active-idle', 'write-pending', ] VALID_RAID_ARRAY_STATES = ( ERROR_RAID_STATES + READONLY_RAID_STATES + READWRITE_RAID_STATES ) # need a on-import check of version and set the value for later reference ''' mdadm version < 3.3 doesn't include enough info when using --export and we must use --detail and parse out information. This method checks the mdadm version and will return True if we can use --export for key=value list with enough info, false if version is less than ''' MDADM_USE_EXPORT = util.lsb_release()['codename'] not in ['precise', 'trusty'] # # mdadm executors # def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False, ignore_errors=False): # md_devname is a /dev/XXXX # devices is non-empty list of /dev/xxx # if spares is non-empt list append of /dev/xxx cmd = ["mdadm", "--assemble"] if scan: cmd += ['--scan', '-v'] else: valid_mdname(md_devname) cmd += [md_devname, "--run"] + devices if spares: cmd += spares try: # mdadm assemble returns 1 when no arrays are found. this might not be # an error depending on the situation this function was called in, so # accept a return code of 1 # mdadm assemble returns 2 when called on an array that is already # assembled. 
this is not an error, so accept return code of 2 # all other return codes can be accepted with ignore_error set to true scan, err = util.subp(cmd, capture=True, rcs=[0, 1, 2]) LOG.debug('mdadm assemble scan results:\n%s\n%s', scan, err) scan, err = util.subp(['mdadm', '--detail', '--scan', '-v'], capture=True, rcs=[0, 1]) LOG.debug('mdadm detail scan after assemble:\n%s\n%s', scan, err) except util.ProcessExecutionError: LOG.warning("mdadm_assemble had unexpected return code") if not ignore_errors: raise udev.udevadm_settle() def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name=""): LOG.debug('mdadm_create: ' + 'md_name=%s raidlevel=%s ' % (md_devname, raidlevel) + ' devices=%s spares=%s name=%s' % (devices, spares, md_name)) assert_valid_devpath(md_devname) if raidlevel not in VALID_RAID_LEVELS: raise ValueError('Invalid raidlevel: [{}]'.format(raidlevel)) min_devices = md_minimum_devices(raidlevel) if len(devices) < min_devices: err = 'Not enough devices for raidlevel: ' + str(raidlevel) err += ' minimum devices needed: ' + str(min_devices) raise ValueError(err) if spares and raidlevel not in SPARE_RAID_LEVELS: err = ('Raidlevel does not support spare devices: ' + str(raidlevel)) raise ValueError(err) (hostname, _err) = util.subp(["hostname", "-s"], rcs=[0], capture=True) cmd = ["mdadm", "--create", md_devname, "--run", "--homehost=%s" % hostname.strip(), "--level=%s" % raidlevel, "--raid-devices=%s" % len(devices)] if md_name: cmd.append("--name=%s" % md_name) for device in devices: # mdadm can be sticky, double check holders = get_holders(device) if len(holders) > 0: LOG.warning('Detected holders during mdadm creation: %s', holders) raise OSError('Failed to remove holders from %s', device) # Zero out device superblock just in case device has been used for raid # before, as this will cause many issues util.subp(["mdadm", "--zero-superblock", device], capture=True) cmd.append(device) if spares: cmd.append("--spare-devices=%s" % len(spares)) for device in spares: util.subp(["mdadm", "--zero-superblock", device], capture=True) cmd.append(device) # Create the raid device util.subp(["udevadm", "settle"]) util.subp(["udevadm", "control", "--stop-exec-queue"]) try: util.subp(cmd, capture=True) except util.ProcessExecutionError: # frequent issues by modules being missing (LP: #1519470) - add debug LOG.debug('mdadm_create failed - extra debug regarding md modules') (out, _err) = util.subp(["lsmod"], capture=True) if not _err: LOG.debug('modules loaded: \n%s' % out) raidmodpath = '/lib/modules/%s/kernel/drivers/md' % os.uname()[2] (out, _err) = util.subp(["find", raidmodpath], rcs=[0, 1], capture=True) if out: LOG.debug('available md modules: \n%s' % out) else: LOG.debug('no available md modules found') for dev in devices + spares: h = get_holders(dev) LOG.debug('Device %s has holders: %s', dev, h) raise util.subp(["udevadm", "control", "--start-exec-queue"]) util.subp(["udevadm", "settle", "--exit-if-exists=%s" % md_devname]) def mdadm_examine(devpath, export=MDADM_USE_EXPORT): ''' exectute mdadm --examine, and optionally append --export. 
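    (illustrative, added) with --export a device carrying md metadata yields
    keys such as MD_LEVEL and MD_UUID, while a device without metadata yields
    an empty dict.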
Parse and return dict of key=val from output''' assert_valid_devpath(devpath) cmd = ["mdadm", "--examine"] if export: cmd.extend(["--export"]) cmd.extend([devpath]) try: (out, _err) = util.subp(cmd, capture=True) except CalledProcessError: LOG.exception('Error: not a valid md device: ' + devpath) return {} if export: data = __mdadm_export_to_dict(out) else: data = __upgrade_detail_dict(__mdadm_detail_to_dict(out)) return data def mdadm_stop(devpath, retries=None): assert_valid_devpath(devpath) if not retries: retries = [0.2] * 60 sync_action = md_sysfs_attr_path(devpath, 'sync_action') sync_max = md_sysfs_attr_path(devpath, 'sync_max') sync_min = md_sysfs_attr_path(devpath, 'sync_min') LOG.info("mdadm stopping: %s" % devpath) for (attempt, wait) in enumerate(retries): try: LOG.debug('mdadm: stop on %s attempt %s', devpath, attempt) # An array in 'resync' state may not be stoppable, attempt to # cancel an ongoing resync val = md_sysfs_attr(devpath, 'sync_action') LOG.debug('%s/sync_max = %s', sync_action, val) if val != "idle": LOG.debug("mdadm: setting array sync_action=idle") try: util.write_file(sync_action, content="idle") except (IOError, OSError) as e: LOG.debug("mdadm: (non-fatal) write to %s failed %s", sync_action, e) # Setting the sync_{max,min} may can help prevent the array from # changing back to 'resync' which may prevent the array from being # stopped val = md_sysfs_attr(devpath, 'sync_max') LOG.debug('%s/sync_max = %s', sync_max, val) if val != "0": LOG.debug("mdadm: setting array sync_{min,max}=0") try: for sync_file in [sync_max, sync_min]: util.write_file(sync_file, content="0") except (IOError, OSError) as e: LOG.debug('mdadm: (non-fatal) write to %s failed %s', sync_file, e) # one wonders why this command doesn't do any of the above itself? 
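            # By this point the loop has (best effort) quiesced the array:
            # sync_action was written to 'idle' and sync_min/sync_max were
            # clamped to 0 so the kernel does not restart a resync while we
            # try to stop the device.  Now ask mdadm itself to stop the
            # array; failures fall through to the retry/wait logic below.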
out, err = util.subp(["mdadm", "--manage", "--stop", devpath], capture=True) LOG.debug("mdadm stop command output:\n%s\n%s", out, err) LOG.info("mdadm: successfully stopped %s after %s attempt(s)", devpath, attempt+1) return except util.ProcessExecutionError: LOG.warning("mdadm stop failed, retrying ") if os.path.isfile('/proc/mdstat'): LOG.critical("/proc/mdstat:\n%s", util.load_file('/proc/mdstat')) LOG.debug("mdadm: stop failed, retrying in %s seconds", wait) time.sleep(wait) pass raise OSError('Failed to stop mdadm device %s', devpath) def mdadm_remove(devpath): assert_valid_devpath(devpath) LOG.info("mdadm removing: %s" % devpath) out, err = util.subp(["mdadm", "--remove", devpath], rcs=[0], capture=True) LOG.debug("mdadm remove:\n%s\n%s", out, err) def mdadm_query_detail(md_devname, export=MDADM_USE_EXPORT): valid_mdname(md_devname) cmd = ["mdadm", "--query", "--detail"] if export: cmd.extend(["--export"]) cmd.extend([md_devname]) (out, _err) = util.subp(cmd, capture=True) if export: data = __mdadm_export_to_dict(out) else: data = __upgrade_detail_dict(__mdadm_detail_to_dict(out)) return data def mdadm_detail_scan(): (out, _err) = util.subp(["mdadm", "--detail", "--scan"], capture=True) if not _err: return out def md_present(mdname): """Check if mdname is present in /proc/mdstat""" if not mdname: raise ValueError('md_present requires a valid md name') try: mdstat = util.load_file('/proc/mdstat') except IOError as e: if util.is_file_not_found_exc(e): LOG.warning('Failed to read /proc/mdstat; ' 'md modules might not be loaded') return False else: raise e md_kname = dev_short(mdname) # Find lines like: # md10 : active raid1 vdc1[1] vda2[0] present = [line for line in mdstat.splitlines() if line.split(":")[0].rstrip() == md_kname] if len(present) > 0: return True return False # ------------------------------ # def valid_mdname(md_devname): assert_valid_devpath(md_devname) if not is_valid_device(md_devname): raise ValueError('Specified md device does not exist: ' + md_devname) return False return True def valid_devpath(devpath): if devpath: return devpath.startswith('/dev') return False def assert_valid_devpath(devpath): if not valid_devpath(devpath): raise ValueError("Invalid devpath: '%s'" % devpath) def md_sysfs_attr_path(md_devname, attrname): """ Return the path to a md device attribute under the 'md' dir """ # build /sys/class/block//md sysmd = sys_block_path(md_devname, "md") # append attrname return os.path.join(sysmd, attrname) def md_sysfs_attr(md_devname, attrname): """ Return the attribute str of an md device found under the 'md' dir """ attrdata = '' if not valid_mdname(md_devname): raise ValueError('Invalid md devicename: [{}]'.format(md_devname)) sysfs_attr_path = md_sysfs_attr_path(md_devname, attrname) if os.path.isfile(sysfs_attr_path): attrdata = util.load_file(sysfs_attr_path).strip() return attrdata def md_raidlevel_short(raidlevel): if isinstance(raidlevel, int) or raidlevel in ['linear', 'stripe']: return raidlevel return int(raidlevel.replace('raid', '')) def md_minimum_devices(raidlevel): ''' return the minimum number of devices for a given raid level ''' rl = md_raidlevel_short(raidlevel) if rl in [0, 1, 'linear', 'stripe']: return 2 if rl in [5]: return 3 if rl in [6, 10]: return 4 return -1 def __md_check_array_state(md_devname, mode='READWRITE'): modes = { 'READWRITE': READWRITE_RAID_STATES, 'READONLY': READONLY_RAID_STATES, 'ERROR': ERROR_RAID_STATES, } if mode not in modes: raise ValueError('Invalid Array State mode: ' + mode) array_state = 
md_sysfs_attr(md_devname, 'array_state') if array_state in modes[mode]: return True return False def md_check_array_state_rw(md_devname): return __md_check_array_state(md_devname, mode='READWRITE') def md_check_array_state_ro(md_devname): return __md_check_array_state(md_devname, mode='READONLY') def md_check_array_state_error(md_devname): return __md_check_array_state(md_devname, mode='ERROR') def __mdadm_export_to_dict(output): ''' convert Key=Value text output into dictionary ''' return dict(tok.split('=', 1) for tok in shlex.split(output)) def __mdadm_detail_to_dict(input): ''' Convert mdadm --detail output to dictionary /dev/vde: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 93a73e10:427f280b:b7076c02:204b8f7a Name : wily-foobar:0 (local to host wily-foobar) Creation Time : Sat Dec 12 16:06:05 2015 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 20955136 (9.99 GiB 10.73 GB) Used Dev Size : 20955136 (9.99 GiB 10.73 GB) Array Size : 10477568 (9.99 GiB 10.73 GB) Data Offset : 16384 sectors Super Offset : 8 sectors Unused Space : before=16296 sectors, after=0 sectors State : clean Device UUID : 8fcd62e6:991acc6e:6cb71ee3:7c956919 Update Time : Sat Dec 12 16:09:09 2015 Bad Block Log : 512 entries available at offset 72 sectors Checksum : 65b57c2e - correct Events : 17 Device Role : spare Array State : AA ('A' == active, '.' == missing, 'R' == replacing) ''' data = {} device = re.findall(r'^(\/dev\/[a-zA-Z0-9-\._]+)', input) if len(device) == 1: data.update({'device': device[0]}) else: raise ValueError('Failed to determine device in input') # FIXME: probably could do a better regex to match the LHS which # has one, two or three words rem = r'(\w+|\w+\ \w+|\w+\ \w+\ \w+)\ \:\ ([a-zA-Z0-9\-\.,: \(\)=\']+)' for f in re.findall(rem, input, re.MULTILINE): key = f[0].replace(' ', '_').lower() val = f[1] if key in data: raise ValueError('Duplicate key in mdadm regex parsing: ' + key) data.update({key: val}) return data def md_device_key_role(devname): if not devname: raise ValueError('Missing parameter devname') return 'MD_DEVICE_' + dev_short(devname) + '_ROLE' def md_device_key_dev(devname): if not devname: raise ValueError('Missing parameter devname') return 'MD_DEVICE_' + dev_short(devname) + '_DEV' def __upgrade_detail_dict(detail): ''' This method attempts to convert mdadm --detail output into a KEY=VALUE output the same as mdadm --detail --export from mdadm v3.3 ''' # if the input already has MD_UUID, it's already been converted if 'MD_UUID' in detail: return detail md_detail = { 'MD_LEVEL': detail['raid_level'], 'MD_DEVICES': detail['raid_devices'], 'MD_METADATA': detail['version'], 'MD_NAME': detail['name'].split()[0], } # exmaine has ARRAY UUID if 'array_uuid' in detail: md_detail.update({'MD_UUID': detail['array_uuid']}) # query,detail has UUID elif 'uuid' in detail: md_detail.update({'MD_UUID': detail['uuid']}) device = detail['device'] # MD_DEVICE_vdc1_DEV=/dev/vdc1 md_detail.update({md_device_key_dev(device): device}) if 'device_role' in detail: role = detail['device_role'] if role != 'spare': # device_role = Active device 1 role = role.split()[-1] # MD_DEVICE_vdc1_ROLE=spare md_detail.update({md_device_key_role(device): role}) return md_detail def md_read_run_mdadm_map(): ''' md1 1.2 59beb40f:4c202f67:088e702b:efdf577a /dev/md1 md0 0.90 077e6a9e:edf92012:e2a6e712:b193f786 /dev/md0 return # md_shortname = (metaversion, md_uuid, md_devpath) data = { 'md1': (1.2, 59beb40f:4c202f67:088e702b:efdf577a, /dev/md1) 'md0': (0.90, 077e6a9e:edf92012:e2a6e712:b193f786, 
/dev/md0) ''' mdadm_map = {} run_mdadm_map = '/run/mdadm/map' if os.path.exists(run_mdadm_map): with open(run_mdadm_map, 'r') as fp: data = fp.read().strip() for entry in data.split('\n'): (key, meta, md_uuid, dev) = entry.split() mdadm_map.update({key: (meta, md_uuid, dev)}) return mdadm_map def md_get_spares_list(devpath): sysfs_md = sys_block_path(devpath, "md") spares = [dev_path(dev[4:]) for dev in os.listdir(sysfs_md) if (dev.startswith('dev-') and util.load_file(os.path.join(sysfs_md, dev, 'state')).strip() == 'spare')] return spares def md_get_devices_list(devpath): sysfs_md = sys_block_path(devpath, "md") devices = [dev_path(dev[4:]) for dev in os.listdir(sysfs_md) if (dev.startswith('dev-') and util.load_file(os.path.join(sysfs_md, dev, 'state')).strip() != 'spare')] return devices def md_check_array_uuid(md_devname, md_uuid): valid_mdname(md_devname) # confirm we have /dev/{mdname} by following the udev symlink mduuid_path = ('/dev/disk/by-id/md-uuid-' + md_uuid) mdlink_devname = dev_path(os.path.realpath(mduuid_path)) if md_devname != mdlink_devname: err = ('Mismatch between devname and md-uuid symlink: ' + '%s -> %s != %s' % (mduuid_path, mdlink_devname, md_devname)) raise ValueError(err) return True def md_get_uuid(md_devname): valid_mdname(md_devname) md_query = mdadm_query_detail(md_devname) return md_query.get('MD_UUID', None) def _compare_devlist(expected, found): LOG.debug('comparing device lists: ' 'expected: {} found: {}'.format(expected, found)) expected = set(expected) found = set(found) if expected != found: missing = expected.difference(found) extra = found.difference(expected) raise ValueError("RAID array device list does not match." " Missing: {} Extra: {}".format(missing, extra)) def md_check_raidlevel(raidlevel): # Validate raidlevel against what curtin supports configuring if raidlevel not in VALID_RAID_LEVELS: err = ('Invalid raidlevel: ' + raidlevel + ' Must be one of: ' + str(VALID_RAID_LEVELS)) raise ValueError(err) return True def md_block_until_in_sync(md_devname): ''' sync_completed This shows the number of sectors that have been completed of whatever the current sync_action is, followed by the number of sectors in total that could need to be processed. The two numbers are separated by a '/' thus effectively showing one value, a fraction of the process that is complete. A 'select' on this attribute will return when resync completes, when it reaches the current sync_max (below) and possibly at other times. ''' # FIXME: use selectors to block on: /sys/class/block/mdX/md/sync_completed pass def md_check_array_state(md_devname): # check array state writable = md_check_array_state_rw(md_devname) degraded = md_sysfs_attr(md_devname, 'degraded') sync_action = md_sysfs_attr(md_devname, 'sync_action') if not writable: raise ValueError('Array not in writable state: ' + md_devname) if degraded != "0": raise ValueError('Array in degraded state: ' + md_devname) if sync_action != "idle": raise ValueError('Array syncing, not idle state: ' + md_devname) return True def md_check_uuid(md_devname): md_uuid = md_get_uuid(md_devname) if not md_uuid: raise ValueError('Failed to get md UUID from device: ' + md_devname) return md_check_array_uuid(md_devname, md_uuid) def md_check_devices(md_devname, devices): if not devices or len(devices) == 0: raise ValueError('Cannot verify raid array with empty device list') # collect and compare raid devices based on md name versus # expected device list. 
# # NB: In some cases, a device might report as a spare until # md has finished syncing it into the array. Currently # we fail the check since the specified raid device is not # yet in its proper role. Callers can check mdadm_sync_action # state to see if the array is currently recovering, which would # explain the failure. Also mdadm_degraded will indicate if the # raid is currently degraded or not, which would also explain the # failure. md_raid_devices = md_get_devices_list(md_devname) LOG.debug('md_check_devices: md_raid_devs: ' + str(md_raid_devices)) _compare_devlist(devices, md_raid_devices) def md_check_spares(md_devname, spares): # collect and compare spare devices based on md name versus # expected device list. md_raid_spares = md_get_spares_list(md_devname) _compare_devlist(spares, md_raid_spares) def md_check_array_membership(md_devname, devices): # validate that all devices are members of the correct array md_uuid = md_get_uuid(md_devname) for device in devices: dev_examine = mdadm_examine(device, export=False) if 'MD_UUID' not in dev_examine: raise ValueError('Device is not part of an array: ' + device) dev_uuid = dev_examine['MD_UUID'] if dev_uuid != md_uuid: err = "Device {} is not part of {} array. ".format(device, md_devname) err += "MD_UUID mismatch: device:{} != array:{}".format(dev_uuid, md_uuid) raise ValueError(err) def md_check(md_devname, raidlevel, devices=[], spares=[]): ''' Check passed in variables from storage configuration against the system we're running upon. ''' LOG.debug('RAID validation: ' + 'name={} raidlevel={} devices={} spares={}'.format(md_devname, raidlevel, devices, spares)) assert_valid_devpath(md_devname) md_check_array_state(md_devname) md_check_raidlevel(raidlevel) md_check_uuid(md_devname) md_check_devices(md_devname, devices) md_check_spares(md_devname, spares) md_check_array_membership(md_devname, devices + spares) LOG.debug('RAID array OK: ' + md_devname) return True # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/block/mkfs.py000066400000000000000000000171331326565350400203460ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. # This module wraps calls to mkfs. 
and determines the appropriate flags # for each filesystem type from curtin import util from curtin import block import string import os from uuid import uuid1 mkfs_commands = { "btrfs": "mkfs.btrfs", "ext2": "mkfs.ext2", "ext3": "mkfs.ext3", "ext4": "mkfs.ext4", "fat": "mkfs.vfat", "fat12": "mkfs.vfat", "fat16": "mkfs.vfat", "fat32": "mkfs.vfat", "vfat": "mkfs.vfat", "jfs": "jfs_mkfs", "ntfs": "mkntfs", "reiserfs": "mkfs.reiserfs", "swap": "mkswap", "xfs": "mkfs.xfs" } specific_to_family = { "ext2": "ext", "ext3": "ext", "ext4": "ext", "fat12": "fat", "fat16": "fat", "fat32": "fat", "vfat": "fat", } label_length_limits = { "btrfs": 256, "ext": 16, "fat": 11, "jfs": 16, # see jfs_tune manpage "ntfs": 32, "reiserfs": 16, "swap": 15, # not in manpages, found experimentally "xfs": 12 } family_flag_mappings = { "fatsize": {"fat": ("-F", "{fatsize}")}, # flag with no parameter "force": {"btrfs": "--force", "ext": "-F", "fat": "-I", "jfs": "-q", "ntfs": "--force", "reiserfs": "-f", "swap": "--force", "xfs": "-f"}, "label": {"btrfs": ("--label", "{label}"), "ext": ("-L", "{label}"), "fat": ("-n", "{label}"), "jfs": ("-L", "{label}"), "ntfs": ("--label", "{label}"), "reiserfs": ("--label", "{label}"), "swap": ("--label", "{label}"), "xfs": ("-L", "{label}")}, # flag with no parameter, N.B: this isn't used/exposed "quiet": {"ext": "-q", "ntfs": "-q", "reiserfs": "-q", "xfs": "--quiet"}, "sectorsize": { "btrfs": ("--sectorsize", "{sectorsize}",), "ext": ("-b", "{sectorsize}"), "fat": ("-S", "{sectorsize}"), "ntfs": ("--sector-size", "{sectorsize}"), "reiserfs": ("--block-size", "{sectorsize}"), "xfs": ("-s", "{sectorsize}")}, "uuid": {"btrfs": ("--uuid", "{uuid}"), "ext": ("-U", "{uuid}"), "reiserfs": ("--uuid", "{uuid}"), "swap": ("--uuid", "{uuid}"), "xfs": ("-m", "uuid={uuid}")}, } release_flag_mapping_overrides = { "precise": { "force": {"btrfs": None}, "uuid": {"btrfs": None}}, "trusty": { "uuid": {"btrfs": None, "xfs": None}}, } def valid_fstypes(): return list(mkfs_commands.keys()) def get_flag_mapping(flag_name, fs_family, param=None, strict=False): ret = [] release = util.lsb_release()['codename'] overrides = release_flag_mapping_overrides.get(release, {}) if flag_name in overrides and fs_family in overrides[flag_name]: flag_sym = overrides[flag_name][fs_family] else: flag_sym_families = family_flag_mappings.get(flag_name) if flag_sym_families is None: raise ValueError("unsupported flag '%s'" % flag_name) flag_sym = flag_sym_families.get(fs_family) if flag_sym is None: if strict: raise ValueError("flag '%s' not supported by fs family '%s'" % flag_name, fs_family) else: return ret if param is None: ret.append(flag_sym) else: params = [k.format(**{flag_name: param}) for k in flag_sym] if list(params) == list(flag_sym): raise ValueError("Param %s not used for flag_name=%s and " "fs_family=%s." % (param, flag_name, fs_family)) ret.extend(params) return ret def mkfs(path, fstype, strict=False, label=None, uuid=None, force=False): """Make filesystem on block device with given path using given fstype and appropriate flags for filesystem family. Filesystem uuid and label can be passed in as kwargs. By default no label or uuid will be used. If a filesystem label is too long curtin will raise a ValueError if the strict flag is true or will truncate it to the maximum possible length. If a flag is not supported by a filesystem family mkfs will raise a ValueError if the strict flag is true or silently ignore it otherwise. 
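    A minimal usage sketch (hypothetical device path and values):

        mkfs('/dev/vdb1', 'ext4', label='root', force=True)

    would run a command of the form
    'mkfs.ext4 -F -L root -U <generated-uuid> /dev/vdb1', with a
    sector-size flag added only when the device reports a logical block
    size larger than 512 bytes.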
Force can be specified to force the mkfs command to continue even if it finds old data or filesystems on the partition. """ if path is None: raise ValueError("invalid block dev path '%s'" % path) if not os.path.exists(path): raise ValueError("'%s': no such file or directory" % path) fs_family = specific_to_family.get(fstype, fstype) mkfs_cmd = mkfs_commands.get(fstype) if not mkfs_cmd: raise ValueError("unsupported fs type '%s'" % fstype) if util.which(mkfs_cmd) is None: raise ValueError("need '%s' but it could not be found" % mkfs_cmd) cmd = [mkfs_cmd] # use device logical block size to ensure properly formated filesystems (logical_bsize, physical_bsize) = block.get_blockdev_sector_size(path) if logical_bsize > 512: lbs_str = ('size={}'.format(logical_bsize) if fs_family == "xfs" else str(logical_bsize)) cmd.extend(get_flag_mapping("sectorsize", fs_family, param=lbs_str, strict=strict)) if fs_family == 'fat': # mkfs.vfat doesn't calculate this right for non-512b sector size # lp:1569576 , d-i uses the same setting. cmd.extend(["-s", "1"]) if force: cmd.extend(get_flag_mapping("force", fs_family, strict=strict)) if label is not None: limit = label_length_limits.get(fs_family) if len(label) > limit: if strict: raise ValueError("length of fs label for '%s' exceeds max \ allowed for fstype '%s'. max is '%s'" % (path, fstype, limit)) else: label = label[:limit] cmd.extend(get_flag_mapping("label", fs_family, param=label, strict=strict)) # If uuid is not specified, generate one and try to use it if uuid is None: uuid = str(uuid1()) cmd.extend(get_flag_mapping("uuid", fs_family, param=uuid, strict=strict)) if fs_family == "fat": fat_size = fstype.strip(string.ascii_letters) if fat_size in ["12", "16", "32"]: cmd.extend(get_flag_mapping("fatsize", fs_family, param=fat_size, strict=strict)) cmd.append(path) util.subp(cmd, capture=True) # if fs_family does not support specifying uuid then use blkid to find it # if blkid is unable to then just return None for uuid if fs_family not in family_flag_mappings['uuid']: try: uuid = block.blkid()[path]['UUID'] except Exception: pass # return uuid, may be none if it could not be specified and blkid could not # find it return uuid def mkfs_from_config(path, info, strict=False): """Make filesystem on block device with given path according to storage config given""" fstype = info.get('fstype') if fstype is None: raise ValueError("fstype must be specified") # NOTE: Since old metadata on partitions that have not been wiped can cause # some mkfs commands to refuse to work, it's best to use force=True mkfs(path, fstype, strict=strict, force=True, uuid=info.get('uuid'), label=info.get('label')) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/block/zfs.py000066400000000000000000000176561326565350400202220ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ Wrap calls to the zfsutils-linux package (zpool, zfs) for creating zpools and volumes.""" import os from curtin.config import merge_config from curtin import util from . import blkid ZPOOL_DEFAULT_PROPERTIES = { 'ashift': 12, 'version': 28, } ZFS_DEFAULT_PROPERTIES = { 'atime': 'off', 'canmount': 'off', 'normalization': 'formD', } def _join_flags(optflag, params): """ Insert optflag for each param in params and return combined list. 
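    For illustration, the module-level defaults above translate through this
    helper into command-line options such as '-o ashift=12 -o version=28'
    for the pool and '-O atime=off -O canmount=off -O normalization=formD'
    for the filesystems created under it.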
:param optflag: String of the optional flag, like '-o' :param params: dictionary of parameter names and values :returns: List of strings :raises: ValueError: if params are of incorrect type Example: optflag='-o', params={'foo': 1, 'bar': 2} => ['-o', 'foo=1', '-o', 'bar=2'] """ if not isinstance(optflag, str) or not optflag: raise ValueError("Invalid optflag: %s", optflag) if not isinstance(params, dict): raise ValueError("Invalid params: %s", params) # zfs flags and params require string booleans ('on', 'off') # yaml implicity converts those and others to booleans, we # revert that here def _b2s(value): if not isinstance(value, bool): return value if value: return 'on' return 'off' return [] if not params else ( [param for opt in zip([optflag] * len(params), ["%s=%s" % (k, _b2s(v)) for (k, v) in params.items()]) for param in opt]) def _join_pool_volume(poolname, volume): """ Combine poolname and volume. """ if not poolname or not volume: raise ValueError('Invalid pool (%s) or volume (%s)', poolname, volume) return os.path.normpath("%s/%s" % (poolname, volume)) def zpool_create(poolname, vdevs, mountpoint=None, altroot=None, pool_properties=None, zfs_properties=None): """ Create a zpool called comprised of devices specified in . :param poolname: String used to name the pool. :param vdevs: An iterable of strings of block devices paths which *should* start with '/dev/disk/by-id/' to follow best practices. :param pool_properties: A dictionary of key, value pairs to be passed to `zpool create` with the `-o` flag as properties of the zpool. If value is None, then ZPOOL_DEFAULT_PROPERTIES will be used. :param zfs_properties: A dictionary of key, value pairs to be passed to `zpool create` with the `-O` flag as properties of the filesystems created under the pool. If the value is None, then ZFS_DEFAULT_PROPERTIES will be used. :returns: None on success. :raises: ValueError: raises exceptions on missing/badd input :raises: ProcessExecutionError: raised on unhandled exceptions from invoking `zpool create`. """ if not isinstance(poolname, util.string_types) or not poolname: raise ValueError("Invalid poolname: %s", poolname) if isinstance(vdevs, util.string_types) or isinstance(vdevs, dict): raise TypeError("Invalid vdevs: expected list-like iterable") else: try: vdevs = list(vdevs) except TypeError: raise TypeError("vdevs must be iterable, not: %s" % str(vdevs)) pool_cfg = ZPOOL_DEFAULT_PROPERTIES.copy() if pool_properties: merge_config(pool_cfg, pool_properties) zfs_cfg = ZFS_DEFAULT_PROPERTIES.copy() if zfs_properties: merge_config(zfs_cfg, zfs_properties) options = _join_flags('-o', pool_cfg) options.extend(_join_flags('-O', zfs_cfg)) if mountpoint: options.extend(_join_flags('-O', {'mountpoint': mountpoint})) if altroot: options.extend(['-R', altroot]) cmd = ["zpool", "create"] + options + [poolname] + vdevs util.subp(cmd, capture=True) def zfs_create(poolname, volume, zfs_properties=None): """ Create a filesystem dataset within the specified zpool. :param poolname: String used to specify the pool in which to create the filesystem. :param volume: String used as the name of the filesystem. :param zfs_properties: A dict of properties to be passed to `zfs create` with the `-o` flag as properties of the filesystems created under the pool. If value is None then no properties will be set on the filesystem. :returns: None :raises: ValueError: raises exceptions on missing/bad input. :raises: ProcessExecutionError: raised on unhandled exceptions from invoking `zfs create`. 
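    A minimal usage sketch (hypothetical pool, device and dataset names):

        zpool_create('rpool', ['/dev/disk/by-id/virtio-dev1'],
                     mountpoint='/', altroot='/tmp/target')
        zfs_create('rpool', 'ROOT/zfsroot',
                   zfs_properties={'canmount': 'noauto', 'mountpoint': '/'})

    The second call builds a command of the form
    'zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/zfsroot' and,
    because canmount is 'noauto', finishes by mounting the dataset via
    zfs_mount('rpool', 'ROOT/zfsroot').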
""" if not isinstance(poolname, util.string_types) or not poolname: raise ValueError("Invalid poolname: %s", poolname) if not isinstance(volume, util.string_types) or not volume: raise ValueError("Invalid volume: %s", volume) zfs_cfg = {} if zfs_properties: merge_config(zfs_cfg, zfs_properties) options = _join_flags('-o', zfs_cfg) cmd = ["zfs", "create"] + options + [_join_pool_volume(poolname, volume)] util.subp(cmd, capture=True) # mount volume if it canmount=noauto if zfs_cfg.get('canmount') == 'noauto': zfs_mount(poolname, volume) def zfs_mount(poolname, volume): """ Mount zfs pool/volume :param poolname: String used to specify the pool in which to create the filesystem. :param volume: String used as the name of the filesystem. :returns: None :raises: ValueError: raises exceptions on missing/bad input. :raises: ProcessExecutionError: raised on unhandled exceptions from invoking `zfs mount`. """ if not isinstance(poolname, util.string_types) or not poolname: raise ValueError("Invalid poolname: %s", poolname) if not isinstance(volume, util.string_types) or not volume: raise ValueError("Invalid volume: %s", volume) cmd = ['zfs', 'mount', _join_pool_volume(poolname, volume)] util.subp(cmd, capture=True) def zpool_list(): """ Return a list of zfs pool names :returns: List of strings """ # -H drops the header, -o specifies an attribute to fetch out, _err = util.subp(['zpool', 'list', '-H', '-o', 'name'], capture=True) return out.splitlines() def zpool_export(poolname): """ Export specified zpool :param poolname: String used to specify the pool to export. :returns: None """ if not isinstance(poolname, util.string_types) or not poolname: raise ValueError("Invalid poolname: %s", poolname) util.subp(['zpool', 'export', poolname]) def device_to_poolname(devname): """ Use blkid information to map a devname to a zpool poolname stored in in 'LABEL' if devname is a zfs_member and LABEL is set. :param devname: A block device name :returns: String Example blkid output on a zfs vdev: {'/dev/vdb1': {'LABEL': 'rpool', 'PARTUUID': '52dff41a-49be-44b3-a36a-1b499e570e69', 'TYPE': 'zfs_member', 'UUID': '12590398935543668673', 'UUID_SUB': '7809435738165038086'}} device_to_poolname('/dev/vdb1') would return 'rpool' """ if not isinstance(devname, util.string_types) or not devname: raise ValueError("device_to_poolname: invalid devname: '%s'" % devname) blkid_info = blkid(devs=[devname]) if not blkid_info or devname not in blkid_info: return vdev = blkid_info.get(devname) vdev_type = vdev.get('TYPE') label = vdev.get('LABEL') if vdev_type == 'zfs_member' and label: return label # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/000077500000000000000000000000001326565350400175365ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/curtin/commands/__init__.py000066400000000000000000000005771326565350400216600ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. def populate_one_subcmd(parser, options_dict, handler): for ent in options_dict: args = ent[0] if not isinstance(args, (list, tuple)): args = (args,) parser.add_argument(*args, **ent[1]) parser.set_defaults(func=handler) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/apply_net.py000066400000000000000000000224461326565350400221130ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys from .. 
import log import curtin.net as net import curtin.util as util from curtin import config from . import populate_one_subcmd LOG = log.LOG IFUPDOWN_IPV6_MTU_PRE_HOOK = """#!/bin/bash -e # injected by curtin installer [ "${IFACE}" != "lo" ] || exit 0 # Trigger only if MTU configured [ -n "${IF_MTU}" ] || exit 0 read CUR_DEV_MTU /run/network/${IFACE}_dev.mtu [ -n "${CUR_IPV6_MTU}" ] && echo ${CUR_IPV6_MTU} > /run/network/${IFACE}_ipv6.mtu exit 0 """ IFUPDOWN_IPV6_MTU_POST_HOOK = """#!/bin/bash -e # injected by curtin installer [ "${IFACE}" != "lo" ] || exit 0 # Trigger only if MTU configured [ -n "${IF_MTU}" ] || exit 0 read PRE_DEV_MTU /proc/sys/net/ipv6/conf/${IFACE}/mtu ||: elif [ "${ADDRFAM}" = "inet" ]; then # handle the clobber case where inet mtu changes v6 mtu. # ifupdown will already have set dev mtu, so lower mtu # if needed. If v6 mtu was larger, it get's clamped down # to the dev MTU value. if [ ${PRE_IPV6_MTU} -lt ${CUR_IPV6_MTU} ]; then # sysctl -q -e -w net.ipv6.conf.${IFACE}.mtu=${PRE_IPV6_MTU} echo ${PRE_IPV6_MTU} >/proc/sys/net/ipv6/conf/${IFACE}/mtu ||: fi fi exit 0 """ def apply_net(target, network_state=None, network_config=None): if network_state is None and network_config is None: raise ValueError("Must provide network_config or network_state") if target is None: raise ValueError("target cannot be None.") passthrough = False if network_state: # NB: we cannot support passthrough until curtin can convert from # network_state to network-config yaml ns = net.network_state.from_state_file(network_state) raise ValueError('Not Supported; curtin lacks a network_state to ' 'network_config converter.') elif network_config: netcfg = config.load_config(network_config) # curtin will pass-through the netconfig into the target # for rendering at runtime unless the target OS does not # support NETWORK_CONFIG_V2 feature. LOG.info('Checking cloud-init in target [%s] for network ' 'configuration passthrough support.', target) try: passthrough = net.netconfig_passthrough_available(target) except util.ProcessExecutionError: LOG.warning('Failed to determine if passthrough is available') if passthrough: LOG.info('Passing network configuration through to target: %s', target) net.render_netconfig_passthrough(target, netconfig=netcfg) else: ns = net.parse_net_config_data(netcfg.get('network', {})) if not passthrough: LOG.info('Rendering network configuration in target') net.render_network_state(target=target, network_state=ns) _maybe_remove_legacy_eth0(target) _disable_ipv6_privacy_extensions(target) _patch_ifupdown_ipv6_mtu_hook(target) def _patch_ifupdown_ipv6_mtu_hook(target, prehookfn="etc/network/if-pre-up.d/mtuipv6", posthookfn="etc/network/if-up.d/mtuipv6"): contents = { 'prehook': IFUPDOWN_IPV6_MTU_PRE_HOOK, 'posthook': IFUPDOWN_IPV6_MTU_POST_HOOK, } hookfn = { 'prehook': prehookfn, 'posthook': posthookfn, } for hook in ['prehook', 'posthook']: fn = hookfn[hook] cfg = util.target_path(target, path=fn) LOG.info('Injecting fix for ipv6 mtu settings: %s', cfg) util.write_file(cfg, contents[hook], mode=0o755) def _disable_ipv6_privacy_extensions(target, path="etc/sysctl.d/10-ipv6-privacy.conf"): """Ubuntu server image sets a preference to use IPv6 privacy extensions by default; this races with the cloud-image desire to disable them. Resolve this by allowing the cloud-image setting to win. 
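    When the shipped file contains exactly the two expected sysctl lines
    (net.ipv6.conf.all.use_tempaddr = 2 and
    net.ipv6.conf.default.use_tempaddr = 2, comments ignored), it is
    replaced with a commented-out copy of the form:

        # IPv6 Privacy Extensions (RFC 4941)
        # Disabled by curtin
        # net.ipv6.conf.all.use_tempaddr = 2
        # net.ipv6.conf.default.use_tempaddr = 2

    Anything else in the file is left untouched and only logged.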
""" LOG.debug('Attempting to remove ipv6 privacy extensions') cfg = util.target_path(target, path=path) if not os.path.exists(cfg): LOG.warn('Failed to find ipv6 privacy conf file %s', cfg) return bmsg = "Disabling IPv6 privacy extensions config may not apply." try: contents = util.load_file(cfg) known_contents = ["net.ipv6.conf.all.use_tempaddr = 2", "net.ipv6.conf.default.use_tempaddr = 2"] lines = [f.strip() for f in contents.splitlines() if not f.startswith("#")] if lines == known_contents: LOG.info('Removing ipv6 privacy extension config file: %s', cfg) util.del_file(cfg) msg = "removed %s with known contents" % cfg curtin_contents = '\n'.join( ["# IPv6 Privacy Extensions (RFC 4941)", "# Disabled by curtin", "# net.ipv6.conf.all.use_tempaddr = 2", "# net.ipv6.conf.default.use_tempaddr = 2"]) util.write_file(cfg, curtin_contents) else: LOG.debug('skipping removal of %s, expected content not found', cfg) LOG.debug("Found content in file %s:\n%s", cfg, lines) LOG.debug("Expected contents in file %s:\n%s", cfg, known_contents) msg = (bmsg + " '%s' exists with user configured content." % cfg) except Exception as e: msg = bmsg + " %s exists, but could not be read. %s" % (cfg, e) LOG.exception(msg) raise def _maybe_remove_legacy_eth0(target, path="etc/network/interfaces.d/eth0.cfg"): """Ubuntu cloud images previously included a 'eth0.cfg' that had hard coded content. That file would interfere with the rendered configuration if it was present. if the file does not exist do nothing. If the file exists: - with known content, remove it and warn - with unknown content, leave it and warn """ cfg = util.target_path(target, path=path) if not os.path.exists(cfg): LOG.warn('Failed to find legacy network conf file %s', cfg) return bmsg = "Dynamic networking config may not apply." try: contents = util.load_file(cfg) known_contents = ["auto eth0", "iface eth0 inet dhcp"] lines = [f.strip() for f in contents.splitlines() if not f.startswith("#")] if lines == known_contents: util.del_file(cfg) msg = "removed %s with known contents" % cfg else: msg = (bmsg + " '%s' exists with user configured content." % cfg) except Exception: msg = bmsg + " %s exists, but could not be read." % cfg LOG.exception(msg) raise LOG.warn(msg) def apply_net_main(args): # curtin apply_net [--net-state=/config/netstate.yml] [--target=/] # [--net-config=/config/maas_net.yml] state = util.load_command_environment() log.basicConfig(stream=args.log_file, verbosity=1) if args.target is not None: state['target'] = args.target if args.net_state is not None: state['network_state'] = args.net_state if args.net_config is not None: state['network_config'] = args.net_config if state['target'] is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) if not state['network_config'] and not state['network_state']: sys.stderr.write("Must provide at least config or state\n") sys.exit(2) LOG.info('Applying network configuration') apply_net(target=state['target'], network_state=state['network_state'], network_config=state['network_config']) LOG.info('Applied network configuration successfully') sys.exit(0) CMD_ARGUMENTS = ( ((('-s', '--net-state'), {'help': ('file to read containing network state. ' 'defaults to env["OUTPUT_NETWORK_STATE"]'), 'metavar': 'NETSTATE', 'action': 'store', 'default': os.environ.get('OUTPUT_NETWORK_STATE')}), (('-t', '--target'), {'help': ('target filesystem root to configure networking to. 
' 'default is env["TARGET_MOUNT_POINT"]'), 'metavar': 'TARGET', 'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT')}), (('-c', '--net-config'), {'help': ('file to read containing curtin network config.' 'defaults to env["OUTPUT_NETWORK_CONFIG"]'), 'metavar': 'NETCONFIG', 'action': 'store', 'default': os.environ.get('OUTPUT_NETWORK_CONFIG')}))) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, apply_net_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/apt_config.py000066400000000000000000000553161326565350400222330ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ apt.py Handle the setup of apt related tasks like proxies, mirrors, repositories. """ import argparse import glob import os import re import sys import yaml from curtin.log import LOG from curtin import (config, util, gpg) from . import populate_one_subcmd # this will match 'XXX:YYY' (ie, 'cloud-archive:foo' or 'ppa:bar') ADD_APT_REPO_MATCH = r"^[\w-]+:\w" # place where apt stores cached repository data APT_LISTS = "/var/lib/apt/lists" # Files to store proxy information APT_CONFIG_FN = "/etc/apt/apt.conf.d/94curtin-config" APT_PROXY_FN = "/etc/apt/apt.conf.d/90curtin-aptproxy" # Default keyserver to use DEFAULT_KEYSERVER = "keyserver.ubuntu.com" # Default archive mirrors PRIMARY_ARCH_MIRRORS = {"PRIMARY": "http://archive.ubuntu.com/ubuntu/", "SECURITY": "http://security.ubuntu.com/ubuntu/"} PORTS_MIRRORS = {"PRIMARY": "http://ports.ubuntu.com/ubuntu-ports", "SECURITY": "http://ports.ubuntu.com/ubuntu-ports"} PRIMARY_ARCHES = ['amd64', 'i386'] PORTS_ARCHES = ['s390x', 'arm64', 'armhf', 'powerpc', 'ppc64el'] def get_default_mirrors(arch=None): """returns the default mirrors for the target. These depend on the architecture, for more see: https://wiki.ubuntu.com/UbuntuDevelopment/PackageArchive#Ports""" if arch is None: arch = util.get_architecture() if arch in PRIMARY_ARCHES: return PRIMARY_ARCH_MIRRORS.copy() if arch in PORTS_ARCHES: return PORTS_MIRRORS.copy() raise ValueError("No default mirror known for arch %s" % arch) def handle_apt(cfg, target=None): """ handle_apt process the config for apt_config. This can be called from curthooks if a global apt config was provided or via the "apt" standalone command. 
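    A sketch of the kind of cfg this consumes (hypothetical values, not an
    exhaustive schema):

        handle_apt({
            'proxy': 'http://squid.internal:3128',
            'primary': [{'arches': ['default'],
                         'uri': 'http://archive.ubuntu.com/ubuntu/'}],
            'sources': {'my-ppa': {'source': 'ppa:curtin-dev/test-archive'}},
        }, target='/tmp/target')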
""" release = util.lsb_release(target=target)['codename'] arch = util.get_architecture(target) mirrors = find_apt_mirror_info(cfg, arch) LOG.debug("Apt Mirror info: %s", mirrors) apply_debconf_selections(cfg, target) if not config.value_as_boolean(cfg.get('preserve_sources_list', True)): generate_sources_list(cfg, release, mirrors, target) rename_apt_lists(mirrors, target) try: apply_apt_proxy_config(cfg, target + APT_PROXY_FN, target + APT_CONFIG_FN) except (IOError, OSError): LOG.exception("Failed to apply proxy or apt config info:") # Process 'apt_source -> sources {dict}' if 'sources' in cfg: params = mirrors params['RELEASE'] = release params['MIRROR'] = mirrors["MIRROR"] matcher = None matchcfg = cfg.get('add_apt_repo_match', ADD_APT_REPO_MATCH) if matchcfg: matcher = re.compile(matchcfg).search add_apt_sources(cfg['sources'], target, template_params=params, aa_repo_match=matcher) def debconf_set_selections(selections, target=None): util.subp(['debconf-set-selections'], data=selections, target=target, capture=True) def dpkg_reconfigure(packages, target=None): # For any packages that are already installed, but have preseed data # we populate the debconf database, but the filesystem configuration # would be preferred on a subsequent dpkg-reconfigure. # so, what we have to do is "know" information about certain packages # to unconfigure them. unhandled = [] to_config = [] for pkg in packages: if pkg in CONFIG_CLEANERS: LOG.debug("unconfiguring %s", pkg) CONFIG_CLEANERS[pkg](target) to_config.append(pkg) else: unhandled.append(pkg) if len(unhandled): LOG.warn("The following packages were installed and preseeded, " "but cannot be unconfigured: %s", unhandled) if len(to_config): util.subp(['dpkg-reconfigure', '--frontend=noninteractive'] + list(to_config), data=None, target=target, capture=True) def apply_debconf_selections(cfg, target=None): """apply_debconf_selections - push content to debconf""" # debconf_selections: # set1: | # cloud-init cloud-init/datasources multiselect MAAS # set2: pkg pkg/value string bar selsets = cfg.get('debconf_selections') if not selsets: LOG.debug("debconf_selections was not set in config") return selections = '\n'.join( [selsets[key] for key in sorted(selsets.keys())]) debconf_set_selections(selections.encode() + b"\n", target=target) # get a complete list of packages listed in input pkgs_cfgd = set() for key, content in selsets.items(): for line in content.splitlines(): if line.startswith("#"): continue pkg = re.sub(r"[:\s].*", "", line) pkgs_cfgd.add(pkg) pkgs_installed = util.get_installed_packages(target) LOG.debug("pkgs_cfgd: %s", pkgs_cfgd) LOG.debug("pkgs_installed: %s", pkgs_installed) need_reconfig = pkgs_cfgd.intersection(pkgs_installed) if len(need_reconfig) == 0: LOG.debug("no need for reconfig") return dpkg_reconfigure(need_reconfig, target=target) def clean_cloud_init(target): """clean out any local cloud-init config""" flist = glob.glob( util.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*")) LOG.debug("cleaning cloud-init config from: %s", flist) for dpkg_cfg in flist: os.unlink(dpkg_cfg) def mirrorurl_to_apt_fileprefix(mirror): """ mirrorurl_to_apt_fileprefix Convert a mirror url to the file prefix used by apt on disk to store cache information for that mirror. 
To do so do: - take off ???:// - drop tailing / - convert in string / to _ """ string = mirror if string.endswith("/"): string = string[0:-1] pos = string.find("://") if pos >= 0: string = string[pos + 3:] string = string.replace("/", "_") return string def rename_apt_lists(new_mirrors, target=None): """rename_apt_lists - rename apt lists to preserve old cache data""" default_mirrors = get_default_mirrors(util.get_architecture(target)) pre = util.target_path(target, APT_LISTS) for (name, omirror) in default_mirrors.items(): nmirror = new_mirrors.get(name) if not nmirror: continue oprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(omirror) nprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(nmirror) if oprefix == nprefix: continue olen = len(oprefix) for filename in glob.glob("%s_*" % oprefix): newname = "%s%s" % (nprefix, filename[olen:]) LOG.debug("Renaming apt list %s to %s", filename, newname) try: os.rename(filename, newname) except OSError: # since this is a best effort task, warn with but don't fail LOG.warn("Failed to rename apt list:", exc_info=True) def mirror_to_placeholder(tmpl, mirror, placeholder): """ mirror_to_placeholder replace the specified mirror in a template with a placeholder string Checks for existance of the expected mirror and warns if not found """ if mirror not in tmpl: if mirror.endswith("/") and mirror[:-1] in tmpl: LOG.debug("mirror_to_placeholder: '%s' did not exist in tmpl, " "did without a trailing /. Accomodating.", mirror) mirror = mirror[:-1] else: LOG.warn("Expected mirror '%s' not found in: %s", mirror, tmpl) return tmpl.replace(mirror, placeholder) def map_known_suites(suite): """there are a few default names which will be auto-extended. This comes at the inability to use those names literally as suites, but on the other hand increases readability of the cfg quite a lot""" mapping = {'updates': '$RELEASE-updates', 'backports': '$RELEASE-backports', 'security': '$RELEASE-security', 'proposed': '$RELEASE-proposed', 'release': '$RELEASE'} try: retsuite = mapping[suite] except KeyError: retsuite = suite return retsuite def disable_suites(disabled, src, release): """reads the config for suites to be disabled and removes those from the template""" if not disabled: return src retsrc = src for suite in disabled: suite = map_known_suites(suite) releasesuite = util.render_string(suite, {'RELEASE': release}) LOG.debug("Disabling suite %s as %s", suite, releasesuite) newsrc = "" for line in retsrc.splitlines(True): if line.startswith("#"): newsrc += line continue # sources.list allow options in cols[1] which can have spaces # so the actual suite can be [2] or later. 
example: # deb [ arch=amd64,armel k=v ] http://example.com/debian cols = line.split() if len(cols) > 1: pcol = 2 if cols[1].startswith("["): for col in cols[1:]: pcol += 1 if col.endswith("]"): break if cols[pcol] == releasesuite: line = '# suite disabled by curtin: %s' % line newsrc += line retsrc = newsrc return retsrc def generate_sources_list(cfg, release, mirrors, target=None): """ generate_sources_list create a source.list file based on a custom or default template by replacing mirrors and release in the template """ default_mirrors = get_default_mirrors(util.get_architecture(target)) aptsrc = "/etc/apt/sources.list" params = {'RELEASE': release} for k in mirrors: params[k] = mirrors[k] tmpl = cfg.get('sources_list', None) if tmpl is None: LOG.info("No custom template provided, fall back to modify" "mirrors in %s on the target system", aptsrc) tmpl = util.load_file(util.target_path(target, aptsrc)) # Strategy if no custom template was provided: # - Only replacing mirrors # - no reason to replace "release" as it is from target anyway # - The less we depend upon, the more stable this is against changes # - warn if expected original content wasn't found tmpl = mirror_to_placeholder(tmpl, default_mirrors['PRIMARY'], "$MIRROR") tmpl = mirror_to_placeholder(tmpl, default_mirrors['SECURITY'], "$SECURITY") orig = util.target_path(target, aptsrc) if os.path.exists(orig): os.rename(orig, orig + ".curtin.old") rendered = util.render_string(tmpl, params) disabled = disable_suites(cfg.get('disable_suites'), rendered, release) util.write_file(util.target_path(target, aptsrc), disabled, mode=0o644) # protect the just generated sources.list from cloud-init cloudfile = "/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg" # this has to work with older cloud-init as well, so use old key cloudconf = yaml.dump({'apt_preserve_sources_list': True}, indent=1) try: util.write_file(util.target_path(target, cloudfile), cloudconf, mode=0o644) except IOError: LOG.exception("Failed to protect source.list from cloud-init in (%s)", util.target_path(target, cloudfile)) raise def add_apt_key_raw(key, target=None): """ actual adding of a key as defined in key argument to the system """ LOG.debug("Adding key:\n'%s'", key) try: util.subp(['apt-key', 'add', '-'], data=key.encode(), target=target) except util.ProcessExecutionError: LOG.exception("failed to add apt GPG Key to apt keyring") raise def add_apt_key(ent, target=None): """ Add key to the system as defined in ent (if any). Supports raw keys or keyid's The latter will as a first step fetched to get the raw key """ if 'keyid' in ent and 'key' not in ent: keyserver = DEFAULT_KEYSERVER if 'keyserver' in ent: keyserver = ent['keyserver'] ent['key'] = gpg.getkeybyid(ent['keyid'], keyserver, retries=(1, 2, 5, 10)) if 'key' in ent: add_apt_key_raw(ent['key'], target) def add_apt_sources(srcdict, target=None, template_params=None, aa_repo_match=None): """ add entries in /etc/apt/sources.list.d for each abbreviated sources.list entry in 'srcdict'. 
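    A sketch of such a srcdict (hypothetical entries):

        {'my-repo': {'source': 'deb $MIRROR $RELEASE multiverse'},
         'my-ppa': {'source': 'ppa:curtin-dev/test-archive'}}

    The first entry is written to /etc/apt/sources.list.d/my-repo.list after
    template rendering; the second matches the add-apt-repository pattern and
    is added that way instead.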
When rendering template, also include the values in dictionary searchList """ if template_params is None: template_params = {} if aa_repo_match is None: raise ValueError('did not get a valid repo matcher') if not isinstance(srcdict, dict): raise TypeError('unknown apt format: %s' % (srcdict)) for filename in srcdict: ent = srcdict[filename] if 'filename' not in ent: ent['filename'] = filename add_apt_key(ent, target) if 'source' not in ent: continue source = ent['source'] source = util.render_string(source, template_params) if not ent['filename'].startswith("/"): ent['filename'] = os.path.join("/etc/apt/sources.list.d/", ent['filename']) if not ent['filename'].endswith(".list"): ent['filename'] += ".list" if aa_repo_match(source): with util.ChrootableTarget( target, sys_resolvconf=True) as in_chroot: try: in_chroot.subp(["add-apt-repository", source], retries=(1, 2, 5, 10)) except util.ProcessExecutionError: LOG.exception("add-apt-repository failed.") raise continue sourcefn = util.target_path(target, ent['filename']) try: contents = "%s\n" % (source) util.write_file(sourcefn, contents, omode="a") except IOError as detail: LOG.exception("failed write to file %s: %s", sourcefn, detail) raise util.apt_update(target=target, force=True, comment="apt-source changed config") return def search_for_mirror(candidates): """ Search through a list of mirror urls for one that works This needs to return quickly. """ if candidates is None: return None LOG.debug("search for mirror in candidates: '%s'", candidates) for cand in candidates: try: if util.is_resolvable_url(cand): LOG.debug("found working mirror: '%s'", cand) return cand except Exception: pass return None def update_mirror_info(pmirror, smirror, arch): """sets security mirror to primary if not defined. returns defaults if no mirrors are defined""" if pmirror is not None: if smirror is None: smirror = pmirror return {'PRIMARY': pmirror, 'SECURITY': smirror} return get_default_mirrors(arch) def get_arch_mirrorconfig(cfg, mirrortype, arch): """out of a list of potential mirror configurations select and return the one matching the architecture (or default)""" # select the mirror specification (if-any) mirror_cfg_list = cfg.get(mirrortype, None) if mirror_cfg_list is None: return None # select the specification matching the target arch default = None for mirror_cfg_elem in mirror_cfg_list: arches = mirror_cfg_elem.get("arches") if arch in arches: return mirror_cfg_elem if "default" in arches: default = mirror_cfg_elem return default def get_mirror(cfg, mirrortype, arch): """pass the three potential stages of mirror specification returns None is neither of them found anything otherwise the first hit is returned""" mcfg = get_arch_mirrorconfig(cfg, mirrortype, arch) if mcfg is None: return None # directly specified mirror = mcfg.get("uri", None) # fallback to search if specified if mirror is None: # list of mirrors to try to resolve mirror = search_for_mirror(mcfg.get("search", None)) return mirror def find_apt_mirror_info(cfg, arch=None): """find_apt_mirror_info find an apt_mirror given the cfg provided. 
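    For illustration, with no mirror configuration in cfg on amd64 the
    result has the shape:

        {'PRIMARY': 'http://archive.ubuntu.com/ubuntu/',
         'SECURITY': 'http://security.ubuntu.com/ubuntu/',
         'MIRROR': 'http://archive.ubuntu.com/ubuntu/'}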
It can check for separate config of primary and security mirrors If only primary is given security is assumed to be equal to primary If the generic apt_mirror is given that is defining for both """ if arch is None: arch = util.get_architecture() LOG.debug("got arch for mirror selection: %s", arch) pmirror = get_mirror(cfg, "primary", arch) LOG.debug("got primary mirror: %s", pmirror) smirror = get_mirror(cfg, "security", arch) LOG.debug("got security mirror: %s", smirror) # Note: curtin has no cloud-datasource fallback mirror_info = update_mirror_info(pmirror, smirror, arch) # less complex replacements use only MIRROR, derive from primary mirror_info["MIRROR"] = mirror_info["PRIMARY"] return mirror_info def apply_apt_proxy_config(cfg, proxy_fname, config_fname): """apply_apt_proxy_config Applies any apt*proxy config from if specified """ # Set up any apt proxy cfgs = (('proxy', 'Acquire::http::Proxy "%s";'), ('http_proxy', 'Acquire::http::Proxy "%s";'), ('ftp_proxy', 'Acquire::ftp::Proxy "%s";'), ('https_proxy', 'Acquire::https::Proxy "%s";')) proxies = [fmt % cfg.get(name) for (name, fmt) in cfgs if cfg.get(name)] if len(proxies): LOG.debug("write apt proxy info to %s", proxy_fname) util.write_file(proxy_fname, '\n'.join(proxies) + '\n') elif os.path.isfile(proxy_fname): util.del_file(proxy_fname) LOG.debug("no apt proxy configured, removed %s", proxy_fname) if cfg.get('conf', None): LOG.debug("write apt config info to %s", config_fname) util.write_file(config_fname, cfg.get('conf')) elif os.path.isfile(config_fname): util.del_file(config_fname) LOG.debug("no apt config configured, removed %s", config_fname) def apt_command(args): """ Main entry point for curtin apt-config standalone command This does not read the global config as handled by curthooks, but instead one can specify a different "target" and a new cfg via --config """ cfg = config.load_command_config(args, {}) if args.target is not None: target = args.target else: state = util.load_command_environment() target = state['target'] if target is None: sys.stderr.write("Unable to find target. 
" "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) apt_cfg = cfg.get("apt") # if no apt config section is available, do nothing if apt_cfg is not None: LOG.debug("Handling apt to target %s with config %s", target, apt_cfg) try: with util.ChrootableTarget(target, sys_resolvconf=True): handle_apt(apt_cfg, target) except (RuntimeError, TypeError, ValueError, IOError): LOG.exception("Failed to configure apt features '%s'", apt_cfg) sys.exit(1) else: LOG.info("No apt config provided, skipping") sys.exit(0) def translate_old_apt_features(cfg): """translate the few old apt related features into the new config format""" predef_apt_cfg = cfg.get("apt") if predef_apt_cfg is None: cfg['apt'] = {} predef_apt_cfg = cfg.get("apt") if cfg.get('apt_proxy') is not None: if predef_apt_cfg.get('proxy') is not None: msg = ("Error in apt_proxy configuration: " "old and new format of apt features " "are mutually exclusive") LOG.error(msg) raise ValueError(msg) cfg['apt']['proxy'] = cfg.get('apt_proxy') LOG.debug("Transferred %s into new format: %s", cfg.get('apt_proxy'), cfg.get('apte')) del cfg['apt_proxy'] if cfg.get('apt_mirrors') is not None: if predef_apt_cfg.get('mirrors') is not None: msg = ("Error in apt_mirror configuration: " "old and new format of apt features " "are mutually exclusive") LOG.error(msg) raise ValueError(msg) old = cfg.get('apt_mirrors') cfg['apt']['primary'] = [{"arches": ["default"], "uri": old.get('ubuntu_archive')}] cfg['apt']['security'] = [{"arches": ["default"], "uri": old.get('ubuntu_security')}] LOG.debug("Transferred %s into new format: %s", cfg.get('apt_mirror'), cfg.get('apt')) del cfg['apt_mirrors'] # to work this also needs to disable the default protection psl = predef_apt_cfg.get('preserve_sources_list') if psl is not None: if config.value_as_boolean(psl) is True: msg = ("Error in apt_mirror configuration: " "apt_mirrors and preserve_sources_list: True " "are mutually exclusive") LOG.error(msg) raise ValueError(msg) cfg['apt']['preserve_sources_list'] = False if cfg.get('debconf_selections') is not None: if predef_apt_cfg.get('debconf_selections') is not None: msg = ("Error in debconf_selections configuration: " "old and new format of apt features " "are mutually exclusive") LOG.error(msg) raise ValueError(msg) selsets = cfg.get('debconf_selections') cfg['apt']['debconf_selections'] = selsets LOG.info("Transferred %s into new format: %s", cfg.get('debconf_selections'), cfg.get('apt')) del cfg['debconf_selections'] return cfg CMD_ARGUMENTS = ( ((('-c', '--config'), {'help': 'read configuration from cfg', 'action': util.MergedCmdAppend, 'metavar': 'FILE', 'type': argparse.FileType("rb"), 'dest': 'cfgopts', 'default': []}), (('-t', '--target'), {'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}),) ) def POPULATE_SUBCMD(parser): """Populate subcommand option parsing for apt-config""" populate_one_subcmd(parser, CMD_ARGUMENTS, apt_command) CONFIG_CLEANERS = { 'cloud-init': clean_cloud_init, } # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/block_attach_iscsi.py000066400000000000000000000011701326565350400237170ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from . 
import populate_one_subcmd from curtin.block import iscsi def block_attach_iscsi_main(args): iscsi.ensure_disk_connected(args.disk, args.save_config) return 0 CMD_ARGUMENTS = ( ('disk', {'help': 'RFC4173 specification of iSCSI disk to attach'}), ('--save-config', {'help': 'save access configuration to local filesystem', 'default': False, 'action': 'store_true'}), ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, block_attach_iscsi_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/block_detach_iscsi.py000066400000000000000000000007521326565350400237100ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from . import populate_one_subcmd from curtin.block import iscsi def block_detach_iscsi_main(args): i = iscsi.IscsiDisk(args.disk) i.disconnect() return 0 CMD_ARGUMENTS = ( ('disk', {'help': 'RFC4173 specification of iSCSI disk to attach'}), ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, block_detach_iscsi_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/block_info.py000066400000000000000000000041651326565350400222230ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os from . import populate_one_subcmd from curtin import (block, util) def block_info_main(args): """get information about block devices, similar to lsblk""" if not args.devices: raise ValueError('devices to scan must be specified') if not all(block.is_block_device(d) for d in args.devices): raise ValueError('invalid device(s)') def add_size_to_holders_tree(tree): """add size information to generated holders trees""" size_file = os.path.join(tree['device'], 'size') # size file is always represented in 512 byte sectors even if # underlying disk uses a larger logical_block_size size = ((512 * int(util.load_file(size_file))) if os.path.exists(size_file) else None) tree['size'] = util.bytes2human(size) if args.human else str(size) for holder in tree['holders']: add_size_to_holders_tree(holder) return tree def format_name(tree): """format information for human readable display""" res = { 'name': ' - '.join((tree['name'], tree['dev_type'], tree['size'])), 'holders': [] } for holder in tree['holders']: res['holders'].append(format_name(holder)) return res trees = [add_size_to_holders_tree(t) for t in [block.clear_holders.gen_holders_tree(d) for d in args.devices]] print(util.json_dumps(trees) if args.json else '\n'.join(block.clear_holders.format_holders_tree(t) for t in [format_name(tree) for tree in trees])) return 0 CMD_ARGUMENTS = ( ('devices', {'help': 'devices to get info for', 'default': [], 'nargs': '+'}), ('--human', {'help': 'output size in human readable format', 'default': False, 'action': 'store_true'}), (('-j', '--json'), {'help': 'output data in json format', 'default': False, 'action': 'store_true'}), ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, block_info_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/block_meta.py000066400000000000000000001747471326565350400222340ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from collections import OrderedDict from curtin import (block, config, util) from curtin.block import (mdadm, mkfs, clear_holders, lvm, iscsi, zfs) from curtin.log import LOG from curtin.reporter import events from . 
import populate_one_subcmd from curtin.udev import compose_udev_equality, udevadm_settle, udevadm_trigger import glob import os import platform import string import sys import tempfile import time SIMPLE = 'simple' SIMPLE_BOOT = 'simple-boot' CUSTOM = 'custom' BCACHE_REGISTRATION_RETRY = [0.2] * 60 CMD_ARGUMENTS = ( ((('-D', '--devices'), {'help': 'which devices to operate on', 'action': 'append', 'metavar': 'DEVICE', 'default': None, }), ('--fstype', {'help': 'root partition filesystem type', 'choices': ['ext4', 'ext3'], 'default': 'ext4'}), (('-t', '--target'), {'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ('--boot-fstype', {'help': 'boot partition filesystem type', 'choices': ['ext4', 'ext3'], 'default': None}), ('--umount', {'help': 'unmount any mounted filesystems before exit', 'action': 'store_true', 'default': False}), ('mode', {'help': 'meta-mode to use', 'choices': [CUSTOM, SIMPLE, SIMPLE_BOOT]}), ) ) def block_meta(args): # main entry point for the block-meta command. state = util.load_command_environment() cfg = config.load_command_config(args, state) dd_images = util.get_dd_images(cfg.get('sources', {})) if ((args.mode == CUSTOM or cfg.get("storage") is not None) and len(dd_images) == 0): meta_custom(args) elif args.mode in (SIMPLE, SIMPLE_BOOT) or len(dd_images) > 0: meta_simple(args) else: raise NotImplementedError("mode=%s is not implemented" % args.mode) def logtime(msg, func, *args, **kwargs): with util.LogTimer(LOG.debug, msg): return func(*args, **kwargs) def write_image_to_disk(source, dev): """ Write disk image to block device """ LOG.info('writing image to disk %s, %s', source, dev) extractor = { 'dd-tgz': '|tar -xOzf -', 'dd-txz': '|tar -xOJf -', 'dd-tbz': '|tar -xOjf -', 'dd-tar': '|smtar -xOf -', 'dd-bz2': '|bzcat', 'dd-gz': '|zcat', 'dd-xz': '|xzcat', 'dd-raw': '' } (devname, devnode) = block.get_dev_name_entry(dev) util.subp(args=['sh', '-c', ('wget "$1" --progress=dot:mega -O - ' + extractor[source['type']] + '| dd bs=4M of="$2"'), '--', source['uri'], devnode]) util.subp(['partprobe', devnode]) udevadm_settle() paths = ["curtin", "system-data/var/lib/snapd"] return block.get_root_device([devname], paths=paths) def get_bootpt_cfg(cfg, enabled=False, fstype=None, root_fstype=None): # 'cfg' looks like: # enabled: boolean # fstype: filesystem type (default to 'fstype') # label: filesystem label (default to 'boot') # parm enable can enable, but not disable # parm fstype overrides cfg['fstype'] def_boot = (platform.machine() in ('aarch64') and not util.is_uefi_bootable()) ret = {'enabled': def_boot, 'fstype': None, 'label': 'boot'} ret.update(cfg) if enabled: ret['enabled'] = True if ret['enabled'] and not ret['fstype']: if root_fstype: ret['fstype'] = root_fstype if fstype: ret['fstype'] = fstype return ret def get_partition_format_type(cfg, machine=None, uefi_bootable=None): if machine is None: machine = platform.machine() if uefi_bootable is None: uefi_bootable = util.is_uefi_bootable() cfgval = cfg.get('format', None) if cfgval: return cfgval if uefi_bootable: return 'uefi' if machine in ['aarch64']: return 'gpt' elif machine.startswith('ppc64'): return 'prep' return "mbr" def devsync(devpath): LOG.debug('devsync for %s', devpath) util.subp(['partprobe', devpath], rcs=[0, 1]) udevadm_settle() for x in range(0, 10): if os.path.exists(devpath): LOG.debug('devsync happy - path %s now exists', devpath) return else: LOG.debug('Waiting on device path: %s', 
devpath) time.sleep(1) raise OSError('Failed to find device at path: %s', devpath) def determine_partition_number(partition_id, storage_config): vol = storage_config.get(partition_id) partnumber = vol.get('number') if vol.get('flag') == "logical": if not partnumber: LOG.warn('partition \'number\' key not set in config:\n%s', util.json_dumps(vol)) partnumber = 5 for key, item in storage_config.items(): if item.get('type') == "partition" and \ item.get('device') == vol.get('device') and\ item.get('flag') == "logical": if item.get('id') == vol.get('id'): break else: partnumber += 1 else: if not partnumber: LOG.warn('partition \'number\' key not set in config:\n%s', util.json_dumps(vol)) partnumber = 1 for key, item in storage_config.items(): if item.get('type') == "partition" and \ item.get('device') == vol.get('device'): if item.get('id') == vol.get('id'): break else: partnumber += 1 return partnumber def sanitize_dname(dname): """ dnames should be sanitized before writing rule files, in case maas has emitted a dname with a special character only letters, numbers and '-' and '_' are permitted, as this will be used for a device path. spaces are also not permitted """ valid = string.digits + string.ascii_letters + '-_' return ''.join(c if c in valid else '-' for c in dname) def make_dname(volume, storage_config): state = util.load_command_environment() rules_dir = os.path.join(state['scratch'], "rules.d") vol = storage_config.get(volume) path = get_path_to_storage_volume(volume, storage_config) ptuuid = None dname = vol.get('name') if vol.get('type') in ["partition", "disk"]: (out, _err) = util.subp(["blkid", "-o", "export", path], capture=True, rcs=[0, 2], retries=[1, 1, 1]) for line in out.splitlines(): if "PTUUID" in line or "PARTUUID" in line: ptuuid = line.split('=')[-1] break # we may not always be able to find a uniq identifier on devices with names if not ptuuid and vol.get('type') in ["disk", "partition"]: LOG.warning("Can't find a uuid for volume: {}. Skipping dname.".format( volume)) return rule = [ compose_udev_equality("SUBSYSTEM", "block"), compose_udev_equality("ACTION", "add|change"), ] if vol.get('type') == "disk": rule.append(compose_udev_equality('ENV{DEVTYPE}', "disk")) rule.append(compose_udev_equality('ENV{ID_PART_TABLE_UUID}', ptuuid)) elif vol.get('type') == "partition": rule.append(compose_udev_equality('ENV{DEVTYPE}', "partition")) dname = storage_config.get(vol.get('device')).get('name') + \ "-part%s" % determine_partition_number(volume, storage_config) rule.append(compose_udev_equality('ENV{ID_PART_ENTRY_UUID}', ptuuid)) elif vol.get('type') == "raid": md_data = mdadm.mdadm_query_detail(path) md_uuid = md_data.get('MD_UUID') rule.append(compose_udev_equality("ENV{MD_UUID}", md_uuid)) elif vol.get('type') == "bcache": rule.append(compose_udev_equality("ENV{DEVNAME}", path)) elif vol.get('type') == "lvm_partition": volgroup_name = storage_config.get(vol.get('volgroup')).get('name') dname = "%s-%s" % (volgroup_name, dname) rule.append(compose_udev_equality("ENV{DM_NAME}", dname)) else: raise ValueError('cannot make dname for device with type: {}' .format(vol.get('type'))) # note: this sanitization is done here instead of for all name attributes # at the beginning of storage configuration, as some devices, such as # lvm devices may use the name attribute and may permit special chars sanitized = sanitize_dname(dname) if sanitized != dname: LOG.warning( "dname modified to remove invalid chars. 
old: '{}' new: '{}'" .format(dname, sanitized)) rule.append("SYMLINK+=\"disk/by-dname/%s\"" % sanitized) LOG.debug("Writing dname udev rule '{}'".format(str(rule))) util.ensure_dir(rules_dir) rule_file = os.path.join(rules_dir, '{}.rules'.format(sanitized)) util.write_file(rule_file, ', '.join(rule)) def get_poolname(info, storage_config): """ Resolve pool name from zfs info """ LOG.debug('get_poolname for volume {}'.format(info)) if info.get('type') == 'zfs': pool_id = info.get('pool') poolname = get_poolname(storage_config.get(pool_id), storage_config) elif info.get('type') == 'zpool': poolname = info.get('pool') else: msg = 'volume is not type zfs or zpool: %s' % info LOG.error(msg) raise ValueError(msg) return poolname def get_path_to_storage_volume(volume, storage_config): # Get path to block device for volume. Volume param should refer to id of # volume in storage config LOG.debug('get_path_to_storage_volume for volume {}'.format(volume)) devsync_vol = None vol = storage_config.get(volume) if not vol: raise ValueError("volume with id '%s' not found" % volume) # Find path to block device if vol.get('type') == "partition": partnumber = determine_partition_number(vol.get('id'), storage_config) disk_block_path = get_path_to_storage_volume(vol.get('device'), storage_config) disk_kname = block.path_to_kname(disk_block_path) partition_kname = block.partition_kname(disk_kname, partnumber) volume_path = block.kname_to_path(partition_kname) devsync_vol = os.path.join(disk_block_path) elif vol.get('type') == "disk": # Get path to block device for disk. Device_id param should refer # to id of device in storage config if vol.get('serial'): volume_path = block.lookup_disk(vol.get('serial')) elif vol.get('path'): if vol.get('path').startswith('iscsi:'): i = iscsi.ensure_disk_connected(vol.get('path')) volume_path = os.path.realpath(i.devdisk_path) else: # resolve any symlinks to the dev_kname so # sys/class/block access is valid. ie, there are no # udev generated values in sysfs volume_path = os.path.realpath(vol.get('path')) elif vol.get('wwn'): by_wwn = '/dev/disk/by-id/wwn-%s' % vol.get('wwn') volume_path = os.path.realpath(by_wwn) else: raise ValueError("serial, wwn or path to block dev must be \ specified to identify disk") elif vol.get('type') == "lvm_partition": # For lvm partitions, a directory in /dev/ should be present with the # name of the volgroup the partition belongs to. We can simply append # the id of the lvm partition to the path of that directory volgroup = storage_config.get(vol.get('volgroup')) if not volgroup: raise ValueError("lvm volume group '%s' could not be found" % vol.get('volgroup')) volume_path = os.path.join("/dev/", volgroup.get('name'), vol.get('name')) elif vol.get('type') == "dm_crypt": # For dm_crypted partitions, unencrypted block device is at # /dev/mapper/ dm_name = vol.get('dm_name') if not dm_name: dm_name = vol.get('id') volume_path = os.path.join("/dev", "mapper", dm_name) elif vol.get('type') == "raid": # For raid partitions, block device is at /dev/mdX name = vol.get('name') volume_path = os.path.join("/dev", name) elif vol.get('type') == "bcache": # For bcache setups, the only reliable way to determine the name of the # block device is to look in all /sys/block/bcacheX/ dirs and see what # block devs are in the slaves dir there. Then, those blockdevs can be # checked against the kname of the devs in the config for the desired # bcache device. 
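        # Illustration of the lookup below (device names assumed, not taken
        # from any real system): if the backing device kname is 'vdb' and it
        # is enslaved to bcache0, then
        #   glob("/sys/block/bcache*/slaves/*")
        # matches '/sys/block/bcache0/slaves/vdb', and walking upward with
        # os.path.split() until a component contains 'bcache' yields
        # '/sys/block/bcache0', whose kname maps back to '/dev/bcache0'.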
This is not very elegant though backing_device_path = get_path_to_storage_volume( vol.get('backing_device'), storage_config) backing_device_kname = block.path_to_kname(backing_device_path) sys_path = list(filter(lambda x: backing_device_kname in x, glob.glob("/sys/block/bcache*/slaves/*")))[0] while "bcache" not in os.path.split(sys_path)[-1]: sys_path = os.path.split(sys_path)[0] bcache_kname = block.path_to_kname(sys_path) volume_path = block.kname_to_path(bcache_kname) LOG.debug('got bcache volume path {}'.format(volume_path)) else: raise NotImplementedError("cannot determine the path to storage \ volume '%s' with type '%s'" % (volume, vol.get('type'))) # sync devices if not devsync_vol: devsync_vol = volume_path devsync(devsync_vol) LOG.debug('return volume path {}'.format(volume_path)) return volume_path def disk_handler(info, storage_config): _dos_names = ['dos', 'msdos'] ptable = info.get('ptable') disk = get_path_to_storage_volume(info.get('id'), storage_config) if config.value_as_boolean(info.get('preserve')): # Handle preserve flag, verifying if ptable specified in config if config.value_as_boolean(ptable): current_ptable = block.get_part_table_type(disk) if not ((ptable in _dos_names and current_ptable in _dos_names) or (ptable == 'gpt' and current_ptable == 'gpt')): raise ValueError( "disk '%s' does not have correct partition table or " "cannot be read, but preserve is set to true. " "cannot continue installation." % info.get('id')) LOG.info("disk '%s' marked to be preserved, so keeping partition " "table" % disk) else: # wipe the disk and create the partition table if instructed to do so if config.value_as_boolean(info.get('wipe')): block.wipe_volume(disk, mode=info.get('wipe')) if config.value_as_boolean(ptable): LOG.info("labeling device: '%s' with '%s' partition table", disk, ptable) if ptable == "gpt": # Wipe both MBR and GPT that may be present on the disk. # N.B.: wipe_volume wipes 1M at front and end of the disk. # This could destroy disk data in filesystems that lived # there. 
block.wipe_volume(disk, mode='superblock') elif ptable in _dos_names: util.subp(["parted", disk, "--script", "mklabel", "msdos"]) else: raise ValueError('invalid partition table type: %s', ptable) holders = clear_holders.get_holders(disk) if len(holders) > 0: LOG.info('Detected block holders on disk %s: %s', disk, holders) clear_holders.clear_holders(disk) clear_holders.assert_clear(disk) # Make the name if needed if info.get('name'): make_dname(info.get('id'), storage_config) def getnumberoflogicaldisks(device, storage_config): logicaldisks = 0 for key, item in storage_config.items(): if item.get('device') == device and item.get('flag') == "logical": logicaldisks = logicaldisks + 1 return logicaldisks def find_previous_partition(disk_id, part_id, storage_config): last_partnum = None for item_id, command in storage_config.items(): if item_id == part_id: break # skip anything not on this disk, not a 'partition' or 'extended' if command['type'] != 'partition' or command['device'] != disk_id: continue if command.get('flag') == "extended": continue last_partnum = determine_partition_number(item_id, storage_config) return last_partnum def partition_handler(info, storage_config): device = info.get('device') size = info.get('size') flag = info.get('flag') disk_ptable = storage_config.get(device).get('ptable') partition_type = None if not device: raise ValueError("device must be set for partition to be created") if not size: raise ValueError("size must be specified for partition to be created") disk = get_path_to_storage_volume(device, storage_config) partnumber = determine_partition_number(info.get('id'), storage_config) disk_kname = block.path_to_kname(disk) disk_sysfs_path = block.sys_block_path(disk) # consider the disks logical sector size when calculating sectors try: lbs_path = os.path.join(disk_sysfs_path, 'queue', 'logical_block_size') with open(lbs_path, 'r') as f: logical_block_size_bytes = int(f.readline()) except Exception: logical_block_size_bytes = 512 LOG.debug( "{} logical_block_size_bytes: {}".format(disk_kname, logical_block_size_bytes)) if partnumber > 1: if partnumber == 5 and disk_ptable == "msdos": for key, item in storage_config.items(): if item.get('type') == "partition" and \ item.get('device') == device and \ item.get('flag') == "extended": extended_part_no = determine_partition_number( key, storage_config) break pnum = extended_part_no else: pnum = find_previous_partition(device, info['id'], storage_config) LOG.debug("previous partition number for '%s' found to be '%s'", info.get('id'), pnum) partition_kname = block.partition_kname(disk_kname, pnum) previous_partition = os.path.join(disk_sysfs_path, partition_kname) LOG.debug("previous partition: {}".format(previous_partition)) # XXX: sys/block/X/{size,start} is *ALWAYS* in 512b value previous_size = int( util.load_file(os.path.join(previous_partition, "size"))) previous_size_sectors = int(previous_size * 512 / logical_block_size_bytes) previous_start = int( util.load_file(os.path.join(previous_partition, "start"))) previous_start_sectors = int(previous_start * 512 / logical_block_size_bytes) LOG.debug("previous partition.size_sectors: {}".format( previous_size_sectors)) LOG.debug("previous partition.start_sectors: {}".format( previous_start_sectors)) # Align to 1M at the beginning of the disk and at logical partitions alignment_offset = int((1 << 20) / logical_block_size_bytes) if partnumber == 1: # start of disk offset_sectors = alignment_offset else: # further partitions if disk_ptable == "gpt" or flag != "logical": 
# msdos primary and any gpt part start after former partition end offset_sectors = previous_start_sectors + previous_size_sectors else: # msdos extended/logical partitions if flag == "logical": if partnumber == 5: # First logical partition # start at extended partition start + alignment_offset offset_sectors = (previous_start_sectors + alignment_offset) else: # Further logical partitions # start at former logical partition end + alignment_offset offset_sectors = (previous_start_sectors + previous_size_sectors + alignment_offset) length_bytes = util.human2bytes(size) # start sector is part of the sectors that define the partitions size # so length has to be "size in sectors - 1" length_sectors = int(length_bytes / logical_block_size_bytes) - 1 # logical partitions can't share their start sector with the extended # partition and logical partitions can't go head-to-head, so we have to # realign and for that increase size as required if info.get('flag') == "extended": logdisks = getnumberoflogicaldisks(device, storage_config) length_sectors = length_sectors + (logdisks * alignment_offset) # Handle preserve flag if config.value_as_boolean(info.get('preserve')): return elif config.value_as_boolean(storage_config.get(device).get('preserve')): raise NotImplementedError("Partition '%s' is not marked to be \ preserved, but device '%s' is. At this time, preserving devices \ but not also the partitions on the devices is not supported, \ because of the possibility of damaging partitions intended to be \ preserved." % (info.get('id'), device)) # Set flag # 'sgdisk --list-types' sgdisk_flags = {"boot": 'ef00', "lvm": '8e00', "raid": 'fd00', "bios_grub": 'ef02', "prep": '4100', "swap": '8200', "home": '8302', "linux": '8300'} LOG.info("adding partition '%s' to disk '%s' (ptable: '%s')", info.get('id'), device, disk_ptable) LOG.debug("partnum: %s offset_sectors: %s length_sectors: %s", partnumber, offset_sectors, length_sectors) # Wipe the partition if told to do so, do not wipe dos extended partitions # as this may damage the extended partition table if config.value_as_boolean(info.get('wipe')): LOG.info("Preparing partition location on disk %s", disk) if info.get('flag') == "extended": LOG.warn("extended partitions do not need wiping, so skipping: " "'%s'" % info.get('id')) else: # wipe the start of the new partition first by zeroing 1M at the # length of the previous partition wipe_offset = int(offset_sectors * logical_block_size_bytes) LOG.debug('Wiping 1M on %s at offset %s', disk, wipe_offset) # We don't require exclusive access as we're wiping data at an # offset and the current holder maybe part of the current storage # configuration. 
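            # Worked example with assumed values: for a first partition on a
            # disk with 512-byte logical blocks, offset_sectors is 2048 (the
            # 1MiB alignment offset), so wipe_offset is 2048 * 512 = 1048576
            # bytes and the first 1MiB of the new partition's location is
            # zeroed before the partition is created.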
block.zero_file_at_offsets(disk, [wipe_offset], exclusive=False) if disk_ptable == "msdos": if flag in ["extended", "logical", "primary"]: partition_type = flag else: partition_type = "primary" cmd = ["parted", disk, "--script", "mkpart", partition_type, "%ss" % offset_sectors, "%ss" % str(offset_sectors + length_sectors)] util.subp(cmd, capture=True) elif disk_ptable == "gpt": if flag and flag in sgdisk_flags: typecode = sgdisk_flags[flag] else: typecode = sgdisk_flags['linux'] cmd = ["sgdisk", "--new", "%s:%s:%s" % (partnumber, offset_sectors, length_sectors + offset_sectors), "--typecode=%s:%s" % (partnumber, typecode), disk] util.subp(cmd, capture=True) else: raise ValueError("parent partition has invalid partition table") # Make the name if needed if storage_config.get(device).get('name') and partition_type != 'extended': make_dname(info.get('id'), storage_config) def format_handler(info, storage_config): volume = info.get('volume') if not volume: raise ValueError("volume must be specified for partition '%s'" % info.get('id')) # Get path to volume volume_path = get_path_to_storage_volume(volume, storage_config) # Handle preserve flag if config.value_as_boolean(info.get('preserve')): # Volume marked to be preserved, not formatting return # Make filesystem using block library LOG.debug("mkfs {} info: {}".format(volume_path, info)) mkfs.mkfs_from_config(volume_path, info) device_type = storage_config.get(volume).get('type') LOG.debug('Formated device type: %s', device_type) if device_type == 'bcache': # other devs have a udev watch on them. Not bcache (LP: #1680597). LOG.debug('Detected bcache device format, calling udevadm trigger to ' 'generate by-uuid symlinks on "%s"', volume_path) udevadm_trigger([volume_path]) def mount_handler(info, storage_config): """ Handle storage config type: mount info = { 'id': 'rootfs_mount', 'type': 'mount', 'path': '/', 'options': 'defaults,errors=remount-ro', 'device': 'rootfs', } Mount specified device under target at 'path' and generate fstab entry. 
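    Assuming the 'rootfs' format entry referenced above (not shown) uses
    fstype 'ext4' and the volume's UUID can be read, the fstab line written
    for this example would look roughly like:

        UUID=<uuid> / ext4 defaults,errors=remount-ro 0 0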
""" state = util.load_command_environment() path = info.get('path') filesystem = storage_config.get(info.get('device')) mount_options = info.get('options') # handle unset, or empty('') strings if not mount_options: mount_options = 'defaults' if not path and filesystem.get('fstype') != "swap": raise ValueError("path to mountpoint must be specified") volume = storage_config.get(filesystem.get('volume')) # Get path to volume volume_path = get_path_to_storage_volume(filesystem.get('volume'), storage_config) if filesystem.get('fstype') != "swap": # Figure out what point should be while len(path) > 0 and path[0] == "/": path = path[1:] mount_point = os.path.sep.join([state['target'], path]) mount_point = os.path.normpath(mount_point) options = mount_options.split(",") # If the volume_path's kname is backed by iSCSI or (in the case of # LVM/DM) if any of its slaves are backed by iSCSI, then we need to # append _netdev to the fstab line if iscsi.volpath_is_iscsi(volume_path): LOG.debug("Marking volume_path:%s as '_netdev'", volume_path) options.append("_netdev") # Create mount point if does not exist util.ensure_dir(mount_point) # Mount volume, with options try: opts = ['-o', ','.join(options)] util.subp(['mount', volume_path, mount_point] + opts, capture=True) except util.ProcessExecutionError as e: LOG.exception(e) msg = ('Mount failed: %s @ %s with options %s' % (volume_path, mount_point, ",".join(opts))) LOG.error(msg) raise RuntimeError(msg) # set path path = "/%s" % path else: path = "none" options = ["sw"] # Add volume to fstab if state['fstab']: uuid = block.get_volume_uuid(volume_path) location = ("UUID=%s" % uuid) if uuid else ( get_path_to_storage_volume(volume.get('id'), storage_config)) fstype = filesystem.get('fstype') if fstype in ["fat", "fat12", "fat16", "fat32", "fat64"]: fstype = "vfat" fstab_entry = "%s %s %s %s 0 0\n" % (location, path, fstype, ",".join(options)) util.write_file(state['fstab'], fstab_entry, omode='a') else: LOG.info("fstab not in environment, so not writing") def lvm_volgroup_handler(info, storage_config): devices = info.get('devices') device_paths = [] name = info.get('name') if not devices: raise ValueError("devices for volgroup '%s' must be specified" % info.get('id')) if not name: raise ValueError("name for volgroups needs to be specified") for device_id in devices: device = storage_config.get(device_id) if not device: raise ValueError("device '%s' could not be found in storage config" % device_id) device_paths.append(get_path_to_storage_volume(device_id, storage_config)) # Handle preserve flag if config.value_as_boolean(info.get('preserve')): # LVM will probably be offline, so start it util.subp(["vgchange", "-a", "y"]) # Verify that volgroup exists and contains all specified devices if set(lvm.get_pvols_in_volgroup(name)) != set(device_paths): raise ValueError("volgroup '%s' marked to be preserved, but does " "not exist or does not contain the right " "physical volumes" % info.get('id')) else: # Create vgrcreate command and run # capture output to avoid printing it to log # Use zero to clear target devices of any metadata util.subp(['vgcreate', '--force', '--zero=y', '--yes', name] + device_paths, capture=True) # refresh lvmetad lvm.lvm_scan() def lvm_partition_handler(info, storage_config): volgroup = storage_config.get(info.get('volgroup')).get('name') name = info.get('name') if not volgroup: raise ValueError("lvm volgroup for lvm partition must be specified") if not name: raise ValueError("lvm partition name must be specified") if info.get('ptable'): 
raise ValueError("Partition tables on top of lvm logical volumes is " "not supported") # Handle preserve flag if config.value_as_boolean(info.get('preserve')): if name not in lvm.get_lvols_in_volgroup(volgroup): raise ValueError("lvm partition '%s' marked to be preserved, but " "does not exist or does not mach storage " "configuration" % info.get('id')) elif storage_config.get(info.get('volgroup')).get('preserve'): raise NotImplementedError( "Lvm Partition '%s' is not marked to be preserved, but volgroup " "'%s' is. At this time, preserving volgroups but not also the lvm " "partitions on the volgroup is not supported, because of the " "possibility of damaging lvm partitions intended to be " "preserved." % (info.get('id'), volgroup)) else: # Use 'wipesignatures' (if available) and 'zero' to clear target lv # of any fs metadata cmd = ["lvcreate", volgroup, "--name", name, "--zero=y"] release = util.lsb_release()['codename'] if release not in ['precise', 'trusty']: cmd.extend(["--wipesignatures=y"]) if info.get('size'): cmd.extend(["--size", info.get('size')]) else: cmd.extend(["--extents", "100%FREE"]) util.subp(cmd) # refresh lvmetad lvm.lvm_scan() make_dname(info.get('id'), storage_config) def dm_crypt_handler(info, storage_config): state = util.load_command_environment() volume = info.get('volume') key = info.get('key') keysize = info.get('keysize') cipher = info.get('cipher') dm_name = info.get('dm_name') if not volume: raise ValueError("volume for cryptsetup to operate on must be \ specified") if not key: raise ValueError("encryption key must be specified") if not dm_name: dm_name = info.get('id') volume_path = get_path_to_storage_volume(volume, storage_config) # TODO: this is insecure, find better way to do this tmp_keyfile = tempfile.mkstemp()[1] fp = open(tmp_keyfile, "w") fp.write(key) fp.close() cmd = ["cryptsetup"] if cipher: cmd.extend(["--cipher", cipher]) if keysize: cmd.extend(["--key-size", keysize]) cmd.extend(["luksFormat", volume_path, tmp_keyfile]) util.subp(cmd) cmd = ["cryptsetup", "open", "--type", "luks", volume_path, dm_name, "--key-file", tmp_keyfile] util.subp(cmd) os.remove(tmp_keyfile) # A crypttab will be created in the same directory as the fstab in the # configuration. 
This will then be copied onto the system later if state['fstab']: crypt_tab_location = os.path.join(os.path.split(state['fstab'])[0], "crypttab") uuid = block.get_volume_uuid(volume_path) with open(crypt_tab_location, "a") as fp: fp.write("%s UUID=%s none luks\n" % (dm_name, uuid)) else: LOG.info("fstab configuration is not present in environment, so \ cannot locate an appropriate directory to write crypttab in \ so not writing crypttab") def raid_handler(info, storage_config): state = util.load_command_environment() devices = info.get('devices') raidlevel = info.get('raidlevel') spare_devices = info.get('spare_devices') md_devname = block.dev_path(info.get('name')) if not devices: raise ValueError("devices for raid must be specified") if raidlevel not in ['linear', 'raid0', 0, 'stripe', 'raid1', 1, 'mirror', 'raid4', 4, 'raid5', 5, 'raid6', 6, 'raid10', 10]: raise ValueError("invalid raidlevel '%s'" % raidlevel) if raidlevel in ['linear', 'raid0', 0, 'stripe']: if spare_devices: raise ValueError("spareunsupported in raidlevel '%s'" % raidlevel) LOG.debug('raid: cfg: {}'.format(util.json_dumps(info))) device_paths = list(get_path_to_storage_volume(dev, storage_config) for dev in devices) LOG.debug('raid: device path mapping: {}'.format( zip(devices, device_paths))) spare_device_paths = [] if spare_devices: spare_device_paths = list(get_path_to_storage_volume(dev, storage_config) for dev in spare_devices) LOG.debug('raid: spare device path mapping: {}'.format( zip(spare_devices, spare_device_paths))) # Handle preserve flag if config.value_as_boolean(info.get('preserve')): # check if the array is already up, if not try to assemble if not mdadm.md_check(md_devname, raidlevel, device_paths, spare_device_paths): LOG.info("assembling preserved raid for " "{}".format(md_devname)) mdadm.mdadm_assemble(md_devname, device_paths, spare_device_paths) # try again after attempting to assemble if not mdadm.md_check(md_devname, raidlevel, devices, spare_device_paths): raise ValueError("Unable to confirm preserved raid array: " " {}".format(md_devname)) # raid is all OK return mdadm.mdadm_create(md_devname, raidlevel, device_paths, spare_device_paths, info.get('mdname', '')) # Make dname rule for this dev make_dname(info.get('id'), storage_config) # A mdadm.conf will be created in the same directory as the fstab in the # configuration. This will then be copied onto the installed system later. 
# The file must also be written onto the running system to enable it to run # mdadm --assemble and continue installation if state['fstab']: mdadm_location = os.path.join(os.path.split(state['fstab'])[0], "mdadm.conf") mdadm_scan_data = mdadm.mdadm_detail_scan() with open(mdadm_location, "w") as fp: fp.write(mdadm_scan_data) else: LOG.info("fstab configuration is not present in the environment, so \ cannot locate an appropriate directory to write mdadm.conf in, \ so not writing mdadm.conf") # If ptable is specified, call disk_handler on this mdadm device to create # the table if info.get('ptable'): disk_handler(info, storage_config) def bcache_handler(info, storage_config): backing_device = get_path_to_storage_volume(info.get('backing_device'), storage_config) cache_device = get_path_to_storage_volume(info.get('cache_device'), storage_config) cache_mode = info.get('cache_mode', None) if not backing_device or not cache_device: raise ValueError("backing device and cache device for bcache" " must be specified") bcache_sysfs = "/sys/fs/bcache" udevadm_settle(exists=bcache_sysfs) def register_bcache(bcache_device): LOG.debug('register_bcache: %s > /sys/fs/bcache/register', bcache_device) with open("/sys/fs/bcache/register", "w") as fp: fp.write(bcache_device) def _validate_bcache(bcache_device, bcache_sys_path): """ check if bcache is ready, dump info For cache devices, we expect to find a cacheN symlink which will point to the underlying cache device; Find this symlink, read it and compare bcache_device specified in the parameters. For backing devices, we expec to find a dev symlink pointing to the bcacheN device to which the backing device is enslaved. From the dev symlink, we can read the bcacheN holders list, which should contain the backing device kname. In either case, if we fail to find the correct symlinks in sysfs, this method will raise an OSError indicating the missing attribute. 
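        Example with assumed names: for a caching device /dev/vdb whose
        bcache_sys_path is /sys/fs/bcache/<cset-uuid>, validation succeeds
        when a cacheN symlink in that directory resolves to a path whose
        parent directory is named 'vdb'; for a backing device /dev/vdc with
        bcache_sys_path /sys/class/block/vdc/bcache, validation succeeds
        when the slaves listing reached through the 'dev' symlink contains
        'vdc'.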
""" # cacheset # /sys/fs/bcache/ # cache device # /sys/class/block//bcache/set -> # .../fs/bcache/uuid # backing # /sys/class/block//bcache/cache -> # .../block/bcacheN # /sys/class/block//bcache/dev -> # .../block/bcacheN if bcache_sys_path.startswith('/sys/fs/bcache'): LOG.debug("validating bcache caching device '%s' from sys_path" " '%s'", bcache_device, bcache_sys_path) # we expect a cacheN symlink to point to bcache_device/bcache sys_path_links = [os.path.join(bcache_sys_path, l) for l in os.listdir(bcache_sys_path)] cache_links = [l for l in sys_path_links if os.path.islink(l) and ( os.path.basename(l).startswith('cache'))] if len(cache_links) == 0: msg = ('Failed to find any cache links in %s:%s' % ( bcache_sys_path, sys_path_links)) raise OSError(msg) for link in cache_links: target = os.readlink(link) LOG.debug('Resolving symlink %s -> %s', link, target) # cacheN -> ../../../devices/...//bcache # basename(dirname(readlink(link))) target_cache_device = os.path.basename( os.path.dirname(target)) if os.path.basename(bcache_device) == target_cache_device: LOG.debug('Found match: bcache_device=%s target_device=%s', bcache_device, target_cache_device) return else: msg = ('Cache symlink %s ' % target_cache_device + 'points to incorrect device: %s' % bcache_device) raise OSError(msg) elif bcache_sys_path.startswith('/sys/class/block'): LOG.debug("validating bcache backing device '%s' from sys_path" " '%s'", bcache_device, bcache_sys_path) # we expect a 'dev' symlink to point to the bcacheN device bcache_dev = os.path.join(bcache_sys_path, 'dev') if os.path.islink(bcache_dev): bcache_dev_link = ( os.path.basename(os.readlink(bcache_dev))) LOG.debug('bcache device %s using bcache kname: %s', bcache_sys_path, bcache_dev_link) bcache_slaves_path = os.path.join(bcache_dev, 'slaves') slaves = os.listdir(bcache_slaves_path) LOG.debug('bcache device %s has slaves: %s', bcache_sys_path, slaves) if os.path.basename(bcache_device) in slaves: LOG.debug('bcache device %s found in slaves', os.path.basename(bcache_device)) return else: msg = ('Failed to find bcache device %s' % bcache_device + 'in slaves list %s' % slaves) raise OSError(msg) else: msg = 'didnt find "dev" attribute on: %s', bcache_dev return OSError(msg) else: LOG.debug("Failed to validate bcache device '%s' from sys_path" " '%s'", bcache_device, bcache_sys_path) msg = ('sysfs path %s does not appear to be a bcache device' % bcache_sys_path) return ValueError(msg) def ensure_bcache_is_registered(bcache_device, expected, retry=None): """ Test that bcache_device is found at an expected path and re-register the device if it's not ready. Retry the validation and registration as needed. """ if not retry: retry = BCACHE_REGISTRATION_RETRY for attempt, wait in enumerate(retry): # find the actual bcache device name via sysfs using the # backing device's holders directory. 
LOG.debug('check just created bcache %s if it is registered,' ' try=%s', bcache_device, attempt + 1) try: udevadm_settle() if os.path.exists(expected): LOG.debug('Found bcache dev %s at expected path %s', bcache_device, expected) _validate_bcache(bcache_device, expected) else: msg = 'bcache device path not found: %s' % expected LOG.debug(msg) raise ValueError(msg) # if bcache path exists and holders are > 0 we can return LOG.debug('bcache dev %s at path %s successfully registered' ' on attempt %s/%s', bcache_device, expected, attempt + 1, len(retry)) return except (OSError, IndexError, ValueError): # Some versions of bcache-tools will register the bcache device # as soon as we run make-bcache using udev rules, so wait for # udev to settle, then try to locate the dev, on older versions # we need to register it manually though LOG.debug('bcache device was not registered, registering %s ' 'at /sys/fs/bcache/register', bcache_device) try: register_bcache(bcache_device) except IOError: # device creation is notoriously racy and this can trigger # "Invalid argument" IOErrors if it got created in "the # meantime" - just restart the function a few times to # check it all again pass LOG.debug("bcache dev %s not ready, waiting %ss", bcache_device, wait) time.sleep(wait) # we've exhausted our retries LOG.warning('Repetitive error registering the bcache dev %s', bcache_device) raise RuntimeError("bcache device %s can't be registered" % bcache_device) if cache_device: # /sys/class/block/XXX/YYY/ cache_device_sysfs = block.sys_block_path(cache_device) if os.path.exists(os.path.join(cache_device_sysfs, "bcache")): LOG.debug('caching device already exists at {}/bcache. Read ' 'cset.uuid'.format(cache_device_sysfs)) (out, err) = util.subp(["bcache-super-show", cache_device], capture=True) LOG.debug('bcache-super-show=[{}]'.format(out)) [cset_uuid] = [line.split()[-1] for line in out.split("\n") if line.startswith('cset.uuid')] else: LOG.debug('caching device does not yet exist at {}/bcache. 
Make ' 'cache and get uuid'.format(cache_device_sysfs)) # make the cache device, extracting cacheset uuid (out, err) = util.subp(["make-bcache", "-C", cache_device], capture=True) LOG.debug('out=[{}]'.format(out)) [cset_uuid] = [line.split()[-1] for line in out.split("\n") if line.startswith('Set UUID:')] target_sysfs_path = '/sys/fs/bcache/%s' % cset_uuid ensure_bcache_is_registered(cache_device, target_sysfs_path) if backing_device: backing_device_sysfs = block.sys_block_path(backing_device) target_sysfs_path = os.path.join(backing_device_sysfs, "bcache") # there should not be any pre-existing bcache device bdir = os.path.join(backing_device_sysfs, "bcache") if os.path.exists(bdir): raise RuntimeError( 'Unexpected old bcache device: %s', backing_device) LOG.debug('Creating a backing device on %s', backing_device) util.subp(["make-bcache", "-B", backing_device]) ensure_bcache_is_registered(backing_device, target_sysfs_path) # via the holders we can identify which bcache device we just created # for a given backing device holders = clear_holders.get_holders(backing_device) if len(holders) != 1: err = ('Invalid number {} of holding devices:' ' "{}"'.format(len(holders), holders)) LOG.error(err) raise ValueError(err) [bcache_dev] = holders LOG.debug('The just created bcache device is {}'.format(holders)) if cache_device: # if we specify both then we need to attach backing to cache if cset_uuid: LOG.info("Attaching backing device to cacheset: " "{} -> {} cset.uuid: {}".format(backing_device, cache_device, cset_uuid)) attach = os.path.join(backing_device_sysfs, "bcache", "attach") with open(attach, "w") as fp: fp.write(cset_uuid) else: msg = "Invalid cset_uuid: {}".format(cset_uuid) LOG.error(msg) raise ValueError(msg) if cache_mode: LOG.info("Setting cache_mode on {} to {}".format(bcache_dev, cache_mode)) cache_mode_file = \ '/sys/block/{}/bcache/cache_mode'.format(bcache_dev) with open(cache_mode_file, "w") as fp: fp.write(cache_mode) else: # no backing device if cache_mode: raise ValueError("cache mode specified which can only be set per \ backing devices, but none was specified") if info.get('name'): # Make dname rule for this dev make_dname(info.get('id'), storage_config) if info.get('ptable'): raise ValueError("Partition tables on top of lvm logical volumes is \ not supported") LOG.debug('Finished bcache creation for backing {} or caching {}' .format(backing_device, cache_device)) def zpool_handler(info, storage_config): """ Create a zpool based in storage_configuration """ state = util.load_command_environment() # extract /dev/disk/by-id paths for each volume used vdevs = [get_path_to_storage_volume(v, storage_config) for v in info.get('vdevs', [])] poolname = info.get('pool') mountpoint = info.get('mountpoint') altroot = state['target'] if not vdevs or not poolname: raise ValueError("pool and vdevs for zpool must be specified") # map storage volume to by-id path for persistent path vdevs_byid = [] for vdev in vdevs: byid = block.disk_to_byid_path(vdev) if not byid: msg = 'Cannot find by-id path to zpool device "%s"' % vdev LOG.error(msg) raise RuntimeError(msg) vdevs_byid.append(byid) LOG.info('Creating zpool %s with vdevs %s', poolname, vdevs_byid) zfs.zpool_create(poolname, vdevs_byid, mountpoint=mountpoint, altroot=altroot) def zfs_handler(info, storage_config): """ Create a zfs filesystem """ state = util.load_command_environment() poolname = get_poolname(info, storage_config) volume = info.get('volume') properties = info.get('properties', {}) LOG.info('Creating zfs dataset %s/%s 
with properties %s', poolname, volume, properties) zfs.zfs_create(poolname, volume, zfs_properties=properties) mountpoint = properties.get('mountpoint') if mountpoint: if state['fstab']: fstab_entry = ( "# Use `zfs list` for current zfs mount info\n" + "# %s %s defaults 0 0\n" % (poolname, mountpoint)) util.write_file(state['fstab'], fstab_entry, omode='a') def extract_storage_ordered_dict(config): storage_config = config.get('storage', {}) if not storage_config: raise ValueError("no 'storage' entry in config") scfg = storage_config.get('config') if not scfg: raise ValueError("invalid storage config data") # Since storage config will often have to be searched for a value by its # id, and this can become very inefficient as storage_config grows, a dict # will be generated with the id of each component of the storage_config as # its index and the component of storage_config as its value return OrderedDict((d["id"], d) for (i, d) in enumerate(scfg)) def zfsroot_update_storage_config(storage_config): """Return an OrderedDict that has 'zfsroot' format expanded into zpool and zfs commands to enable ZFS on rootfs. """ zfsroots = [d for i, d in storage_config.items() if d.get('fstype') == "zfsroot"] if len(zfsroots) == 0: return storage_config if len(zfsroots) > 1: raise ValueError( "zfsroot found in two entries in storage config: %s" % zfsroots) root = zfsroots[0] vol = root.get('volume') if not vol: raise ValueError("zfsroot entry did not have 'volume'.") if vol not in storage_config: raise ValueError( "zfs volume '%s' not referenced in storage config" % vol) mounts = [d for i, d in storage_config.items() if d.get('type') == 'mount' and d.get('path') == "/"] if len(mounts) != 1: raise ValueError("Multiple 'mount' entries point to '/'") mount = mounts[0] if mount.get('device') != root['id']: raise ValueError( "zfsroot Mountpoint entry for / has device=%s, expected '%s'" % (mount.get("device"), root['id'])) LOG.info('Enabling experimental zfsroot!') ret = OrderedDict() for eid, info in storage_config.items(): if info.get('id') == mount['id']: continue if info.get('fstype') != "zfsroot": ret[eid] = info continue vdevs = [storage_config[info['volume']]['id']] baseid = info['id'] pool = { 'type': 'zpool', 'id': baseid + "_zfsroot_pool", 'pool': 'rpool', 'vdevs': vdevs, 'mountpoint': '/' } container = { 'type': 'zfs', 'id': baseid + "_zfsroot_container", 'pool': pool['id'], 'volume': '/ROOT', 'properties': { 'canmount': 'off', 'mountpoint': 'none', } } rootfs = { 'type': 'zfs', 'id': baseid + "_zfsroot_fs", 'pool': pool['id'], 'volume': '/ROOT/zfsroot', 'properties': { 'canmount': 'noauto', 'mountpoint': '/', } } for d in (pool, container, rootfs): if d['id'] in ret: raise RuntimeError( "Collided on id '%s' in storage config" % d['id']) ret[d['id']] = d return ret def meta_custom(args): """Does custom partitioning based on the layout provided in the config file. Section with the name storage contains information on which partitions on which disks to create. It also contains information about overlays (raid, lvm, bcache) which need to be setup. 
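    A minimal 'storage' section (ids, paths and sizes assumed purely for
    illustration) handled by this command might look like:

        storage:
          version: 1
          config:
            - id: disk0
              type: disk
              ptable: gpt
              path: /dev/vdb
              wipe: superblock
            - id: disk0-part1
              type: partition
              device: disk0
              number: 1
              size: 8GB
            - id: disk0-fs1
              type: format
              volume: disk0-part1
              fstype: ext4
            - id: disk0-mount1
              type: mount
              device: disk0-fs1
              path: /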
""" command_handlers = { 'disk': disk_handler, 'partition': partition_handler, 'format': format_handler, 'mount': mount_handler, 'lvm_volgroup': lvm_volgroup_handler, 'lvm_partition': lvm_partition_handler, 'dm_crypt': dm_crypt_handler, 'raid': raid_handler, 'bcache': bcache_handler, 'zfs': zfs_handler, 'zpool': zpool_handler, } state = util.load_command_environment() cfg = config.load_command_config(args, state) storage_config_dict = extract_storage_ordered_dict(cfg) storage_config_dict = zfsroot_update_storage_config(storage_config_dict) # set up reportstack stack_prefix = state.get('report_stack_prefix', '') # shut down any already existing storage layers above any disks used in # config that have 'wipe' set with events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level='INFO', description="removing previous storage devices"): clear_holders.start_clear_holders_deps() disk_paths = [get_path_to_storage_volume(k, storage_config_dict) for (k, v) in storage_config_dict.items() if v.get('type') == 'disk' and config.value_as_boolean(v.get('wipe')) and not config.value_as_boolean(v.get('preserve'))] clear_holders.clear_holders(disk_paths) # if anything was not properly shut down, stop installation clear_holders.assert_clear(disk_paths) for item_id, command in storage_config_dict.items(): handler = command_handlers.get(command['type']) if not handler: raise ValueError("unknown command type '%s'" % command['type']) with events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level="INFO", description="configuring %s: %s" % (command['type'], command['id'])): try: handler(command, storage_config_dict) except Exception as error: LOG.error("An error occured handling '%s': %s - %s" % (item_id, type(error).__name__, error)) raise if args.umount: util.do_umount(state['target'], recursive=True) return 0 def meta_simple(args): """Creates a root partition. If args.mode == SIMPLE_BOOT, it will also create a separate /boot partition. """ state = util.load_command_environment() cfg = config.load_command_config(args, state) devpath = None if cfg.get("storage") is not None: for i in cfg["storage"]["config"]: serial = i.get("serial") if serial is None: continue grub = i.get("grub_device") diskPath = block.lookup_disk(serial) if grub is True: devpath = diskPath if config.value_as_boolean(i.get('wipe')): block.wipe_volume(diskPath, mode=i.get('wipe')) if args.target is not None: state['target'] = args.target if state['target'] is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) devices = args.devices if devices is None: devices = cfg.get('block-meta', {}).get('devices', []) bootpt = get_bootpt_cfg( cfg.get('block-meta', {}).get('boot-partition', {}), enabled=args.mode == SIMPLE_BOOT, fstype=args.boot_fstype, root_fstype=args.fstype) ptfmt = get_partition_format_type(cfg.get('block-meta', {})) # Remove duplicates but maintain ordering. devices = list(OrderedDict.fromkeys(devices)) # Multipath devices might be automatically assembled if multipath-tools # package is available in the installation environment. We need to stop # all multipath devices to exclusively use one of paths as a target disk. block.stop_all_unused_multipath_devices() if len(devices) == 0 and devpath is None: devices = block.get_installable_blockdevs() LOG.warn("'%s' mode, no devices given. unused list: %s", args.mode, devices) # Check if the list of installable block devices is still empty after # checking for block devices and filtering out the removable ones. 
In # this case we may have a system which has its harddrives reported by # lsblk incorrectly. In this case we search for installable # blockdevices that are removable as a last resort before raising an # exception. if len(devices) == 0: devices = block.get_installable_blockdevs(include_removable=True) if len(devices) == 0: # Fail gracefully if no devices are found, still. raise Exception("No valid target devices found that curtin " "can install on.") else: LOG.warn("No non-removable, installable devices found. List " "populated with removable devices allowed: %s", devices) elif len(devices) == 0 and devpath: devices = [devpath] if len(devices) > 1: if args.devices is not None: LOG.warn("'%s' mode but multiple devices given. " "using first found", args.mode) available = [f for f in devices if block.is_valid_device(f)] target = sorted(available)[0] LOG.warn("mode is '%s'. multiple devices given. using '%s' " "(first available)", args.mode, target) else: target = devices[0] if not block.is_valid_device(target): raise Exception("target device '%s' is not a valid device" % target) (devname, devnode) = block.get_dev_name_entry(target) LOG.info("installing in '%s' mode to '%s'", args.mode, devname) sources = cfg.get('sources', {}) dd_images = util.get_dd_images(sources) if len(dd_images): # we have at least one dd-able image # we will only take the first one rootdev = write_image_to_disk(dd_images[0], devname) util.subp(['mount', rootdev, state['target']]) return 0 # helper partition will forcibly set up partition there ptcmd = ['partition', '--format=' + ptfmt] if bootpt['enabled']: ptcmd.append('--boot') ptcmd.append(devnode) if bootpt['enabled'] and ptfmt in ("uefi", "prep"): raise ValueError("format=%s with boot partition not supported" % ptfmt) bootdev_ptnum = None rootdev_ptnum = None bootdev = None if bootpt['enabled']: bootdev_ptnum = 1 rootdev_ptnum = 2 else: if ptfmt == "prep": rootdev_ptnum = 2 else: rootdev_ptnum = 1 logtime("creating partition with: %s" % ' '.join(ptcmd), util.subp, ptcmd) ptpre = "" if not os.path.exists("%s%s" % (devnode, rootdev_ptnum)): # perhaps the device is /dev/p if os.path.exists("%sp%s" % (devnode, rootdev_ptnum)): ptpre = "p" else: LOG.warn("root device %s%s did not exist, expecting failure", devnode, rootdev_ptnum) if bootdev_ptnum: bootdev = "%s%s%s" % (devnode, ptpre, bootdev_ptnum) if ptfmt == "uefi": # assumed / required from the partitioner pt_uefi uefi_ptnum = "15" uefi_label = "uefi-boot" uefi_dev = "%s%s%s" % (devnode, ptpre, uefi_ptnum) rootdev = "%s%s%s" % (devnode, ptpre, rootdev_ptnum) LOG.debug("rootdev=%s bootdev=%s fmt=%s bootpt=%s", rootdev, bootdev, ptfmt, bootpt) # mkfs for root partition first and mount cmd = ['mkfs.%s' % args.fstype, '-q', '-L', 'cloudimg-rootfs', rootdev] logtime(' '.join(cmd), util.subp, cmd) util.subp(['mount', rootdev, state['target']]) if bootpt['enabled']: # create 'boot' directory in state['target'] boot_dir = os.path.join(state['target'], 'boot') util.subp(['mkdir', boot_dir]) # mkfs for boot partition and mount cmd = ['mkfs.%s' % bootpt['fstype'], '-q', '-L', bootpt['label'], bootdev] logtime(' '.join(cmd), util.subp, cmd) util.subp(['mount', bootdev, boot_dir]) if ptfmt == "uefi": uefi_dir = os.path.join(state['target'], 'boot', 'efi') util.ensure_dir(uefi_dir) util.subp(['mount', uefi_dev, uefi_dir]) if state['fstab']: with open(state['fstab'], "w") as fp: if bootpt['enabled']: fp.write("LABEL=%s /boot %s defaults 0 0\n" % (bootpt['label'], bootpt['fstype'])) if ptfmt == "uefi": # label created in 
helpers/partition for uefi fp.write("LABEL=%s /boot/efi vfat defaults 0 0\n" % uefi_label) fp.write("LABEL=%s / %s defaults 0 0\n" % ('cloudimg-rootfs', args.fstype)) else: LOG.info("fstab not in environment, so not writing") if args.umount: util.do_umount(state['target'], recursive=True) return 0 def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, block_meta) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/block_wipe.py000066400000000000000000000017701326565350400222330ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import sys import curtin.block as block from . import populate_one_subcmd from .. import log LOG = log.LOG def wipe_main(args): for blockdev in args.devices: try: LOG.debug('Wiping volume %s with mode=%s', blockdev, args.mode) block.wipe_volume(blockdev, mode=args.mode) except Exception as e: sys.stderr.write( "Failed to wipe volume %s in mode %s: %s" % (blockdev, args.mode, e)) sys.exit(1) sys.exit(0) CMD_ARGUMENTS = ( ((('-m', '--mode'), {'help': 'mode for wipe.', 'action': 'store', 'default': 'superblock', 'choices': ['zero', 'superblock', 'superblock-recursive', 'random']}), ('devices', {'help': 'devices to wipe', 'default': [], 'nargs': '+'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, wipe_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/clear_holders.py000066400000000000000000000022021326565350400227120ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from curtin import block from . import populate_one_subcmd def clear_holders_main(args): """ wrapper for clear_holders accepting cli args """ if (not all(block.is_block_device(device) for device in args.devices) or len(args.devices) == 0): raise ValueError('invalid devices specified') block.clear_holders.start_clear_holders_deps() block.clear_holders.clear_holders(args.devices, try_preserve=args.preserve) if args.preserve: print('ran clear_holders attempting to preserve data. however, ' 'hotplug support for some devices may cause holders to restart ') block.clear_holders.assert_clear(args.devices) CMD_ARGUMENTS = ( (('devices', {'help': 'devices to free', 'default': [], 'nargs': '+'}), (('-p', '--preserve'), {'help': 'try to shut down holders without erasing anything', 'default': False, 'action': 'store_true'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, clear_holders_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/collect_logs.py000066400000000000000000000144221326565350400225640ustar00rootroot00000000000000# Copyright (C) 2017 Canonical Ltd. # # Curtin is free software: you can redistribute it and/or modify it under # the terms of the GNU Affero General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Curtin is distributed in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for # more details. # # You should have received a copy of the GNU Affero General Public License # along with Curtin. If not, see . from datetime import datetime import json import os import re import shutil import sys import tempfile from .. import util from .. 
import version from ..config import load_config, merge_config from . import populate_one_subcmd from .install import CONFIG_BUILTIN, SAVE_INSTALL_CONFIG CURTIN_PACK_CONFIG_DIR = '/curtin/configs' def collect_logs_main(args): """Collect all configured curtin logs and into a tarfile.""" if os.path.exists(SAVE_INSTALL_CONFIG): cfg = load_config(SAVE_INSTALL_CONFIG) elif os.path.isdir(CURTIN_PACK_CONFIG_DIR): cfg = CONFIG_BUILTIN.copy() for _file in sorted(os.listdir(CURTIN_PACK_CONFIG_DIR)): merge_config( cfg, load_config(os.path.join(CURTIN_PACK_CONFIG_DIR, _file))) else: sys.stderr.write( 'Warning: no configuration file found in %s or %s.\n' 'Using builtin configuration.' % ( SAVE_INSTALL_CONFIG, CURTIN_PACK_CONFIG_DIR)) cfg = CONFIG_BUILTIN.copy() create_log_tarfile(args.output, cfg) sys.exit(0) def create_log_tarfile(tarfile, config): """Create curtin logs tarfile within a temporary directory. A subdirectory curtin- is created in the tar containing the specified logs. Duplicates are skipped, paths which don't exist are skipped. @param tarfile: Path of the tarfile we want to create. @param config: Dictionary of curtin's configuration. """ if not (isinstance(tarfile, util.string_types) and tarfile): raise ValueError("Invalid value '%s' for tarfile" % tarfile) target_dir = os.path.dirname(tarfile) if target_dir and not os.path.exists(target_dir): util.ensure_dir(target_dir) instcfg = config.get('install', {}) logfile = instcfg.get('log_file') alllogs = instcfg.get('post_files', []) if logfile: alllogs.append(logfile) # Prune duplicates and files which do not exist stderr = sys.stderr valid_logs = [] for logfile in set(alllogs): if os.path.exists(logfile): valid_logs.append(logfile) else: stderr.write( 'Skipping logfile %s: file does not exist\n' % logfile) maascfg = instcfg.get('maas', {}) redact_values = [] for key in ('consumer_key', 'token_key', 'token_secret'): redact_value = maascfg.get(key) if redact_value: redact_values.append(redact_value) date = datetime.utcnow().strftime('%Y-%m-%d-%H-%M') tmp_dir = tempfile.mkdtemp() # The tar will contain a dated subdirectory containing all logs tar_dir = 'curtin-logs-{date}'.format(date=date) cmd = ['tar', '-cvf', os.path.join(os.getcwd(), tarfile), tar_dir] try: with util.chdir(tmp_dir): os.mkdir(tar_dir) _collect_system_info(tar_dir, config) for logfile in valid_logs: shutil.copy(logfile, tar_dir) _redact_sensitive_information(tar_dir, redact_values) util.subp(cmd, capture=True) finally: if os.path.exists(tmp_dir): shutil.rmtree(tmp_dir) sys.stderr.write('Wrote: %s\n' % tarfile) def _collect_system_info(target_dir, config): """Copy and create system information files in the provided target_dir.""" util.write_file( os.path.join(target_dir, 'version'), version.version_string()) if os.path.isdir(CURTIN_PACK_CONFIG_DIR): shutil.copytree( CURTIN_PACK_CONFIG_DIR, os.path.join(target_dir, 'configs')) util.write_file( os.path.join(target_dir, 'curtin-config'), json.dumps(config, indent=1, sort_keys=True, separators=(',', ': '))) for fpath in ('/etc/os-release', '/proc/cmdline', '/proc/partitions'): shutil.copy(fpath, target_dir) os.chmod(os.path.join(target_dir, os.path.basename(fpath)), 0o644) _out, _ = util.subp(['uname', '-a'], capture=True) util.write_file(os.path.join(target_dir, 'uname'), _out) lshw_out, _ = util.subp(['sudo', 'lshw'], capture=True) util.write_file(os.path.join(target_dir, 'lshw'), lshw_out) network_cmds = [ ['ip', '--oneline', 'address', 'list'], ['ip', '--oneline', '-6', 'address', 'list'], ['ip', '--oneline', 'route', 'list'], 
['ip', '--oneline', '-6', 'route', 'list'], ] content = [] for cmd in network_cmds: content.append('=== {cmd} ==='.format(cmd=' '.join(cmd))) out, err = util.subp(cmd, combine_capture=True) content.append(out) util.write_file(os.path.join(target_dir, 'network'), '\n'.join(content)) def _redact_sensitive_information(target_dir, redact_values): """Redact sensitive information from any files found in target_dir. Perform inline replacement of any matching redact_values with in all files found in target_dir. @param target_dir: The directory in which to redact file content. @param redact_values: List of strings which need redacting from all files in target_dir. """ for root, _, files in os.walk(target_dir): for fname in files: fpath = os.path.join(root, fname) with open(fpath) as stream: content = stream.read() for redact_value in redact_values: content = re.sub(redact_value, '', content) util.write_file(fpath, content, mode=0o666) CMD_ARGUMENTS = ( ((('-o', '--output'), {'help': 'The output tarfile created from logs.', 'action': 'store', 'default': "curtin-logs.tar"}),) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, collect_logs_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/curthooks.py000066400000000000000000001175221326565350400221410ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import copy import glob import os import platform import re import sys import shutil import textwrap from curtin import config from curtin import block from curtin import net from curtin import futil from curtin.log import LOG from curtin import swap from curtin import util from curtin import version as curtin_version from curtin.reporter import events from curtin.commands import apply_net, apt_config from curtin.url_helper import get_maas_version from . import populate_one_subcmd write_files = futil._legacy_write_files # LP: #1731709 CMD_ARGUMENTS = ( ((('-t', '--target'), {'help': 'operate on target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': None}), (('-c', '--config'), {'help': 'operate on config. 
default is env[CONFIG]', 'action': 'store', 'metavar': 'CONFIG', 'default': None}), ) ) KERNEL_MAPPING = { 'precise': { '3.2.0': '', '3.5.0': '-lts-quantal', '3.8.0': '-lts-raring', '3.11.0': '-lts-saucy', '3.13.0': '-lts-trusty', }, 'trusty': { '3.13.0': '', '3.16.0': '-lts-utopic', '3.19.0': '-lts-vivid', '4.2.0': '-lts-wily', '4.4.0': '-lts-xenial', }, 'xenial': { '4.3.0': '', # development release has 4.3, release will have 4.4 '4.4.0': '', } } CLOUD_INIT_YUM_REPO_TEMPLATE = """ [group_cloud-init-el-stable] name=Copr repo for el-stable owned by @cloud-init baseurl=https://copr-be.cloud.fedoraproject.org/results/@cloud-init/el-stable/epel-%s-$basearch/ type=rpm-md skip_if_unavailable=True gpgcheck=1 gpgkey=https://copr-be.cloud.fedoraproject.org/results/@cloud-init/el-stable/pubkey.gpg repo_gpgcheck=0 enabled=1 enabled_metadata=1 """ def do_apt_config(cfg, target): cfg = apt_config.translate_old_apt_features(cfg) apt_cfg = cfg.get("apt") if apt_cfg is not None: LOG.info("curthooks handling apt to target %s with config %s", target, apt_cfg) apt_config.handle_apt(apt_cfg, target) else: LOG.info("No apt config provided, skipping") def disable_overlayroot(cfg, target): # cloud images come with overlayroot, but installed systems need disabled disable = cfg.get('disable_overlayroot', True) local_conf = os.path.sep.join([target, 'etc/overlayroot.local.conf']) if disable and os.path.exists(local_conf): LOG.debug("renaming %s to %s", local_conf, local_conf + ".old") shutil.move(local_conf, local_conf + ".old") def setup_zipl(cfg, target): if platform.machine() != 's390x': return # assuming that below gives the "/" rootfs target_dev = block.get_devices_for_mp(target)[0] root_arg = None # not mapped rootfs, use UUID if 'mapper' in target_dev: root_arg = target_dev else: uuid = block.get_volume_uuid(target_dev) if uuid: root_arg = "UUID=%s" % uuid if not root_arg: msg = "Failed to identify root= for %s at %s." % (target, target_dev) LOG.warn(msg) raise ValueError(msg) zipl_conf = """ # This has been modified by the MAAS curtin installer [defaultboot] default=ubuntu [ubuntu] target = /boot image = /boot/vmlinuz ramdisk = /boot/initrd.img parameters = root=%s """ % root_arg futil.write_files( files={"zipl_conf": {"path": "/etc/zipl.conf", "content": zipl_conf}}, base_dir=target) def run_zipl(cfg, target): if platform.machine() != 's390x': return with util.ChrootableTarget(target) as in_chroot: in_chroot.subp(['zipl']) def get_flash_kernel_pkgs(arch=None, uefi=None): if arch is None: arch = util.get_architecture() if uefi is None: uefi = util.is_uefi_bootable() if uefi: return None if not arch.startswith('arm'): return None try: fk_packages, _ = util.subp( ['list-flash-kernel-packages'], capture=True) return fk_packages except util.ProcessExecutionError: # Ignore errors return None def install_kernel(cfg, target): kernel_cfg = cfg.get('kernel', {'package': None, 'fallback-package': "linux-generic", 'mapping': {}}) if kernel_cfg is not None: kernel_package = kernel_cfg.get('package') kernel_fallback = kernel_cfg.get('fallback-package') else: kernel_package = None kernel_fallback = None mapping = copy.deepcopy(KERNEL_MAPPING) config.merge_config(mapping, kernel_cfg.get('mapping', {})) # Machines using flash-kernel may need additional dependencies installed # before running. Run those checks in the ephemeral environment so the # target only has required packages installed. 
See LP:1640519 fk_packages = get_flash_kernel_pkgs() if fk_packages: util.install_packages(fk_packages.split(), target=target) if kernel_package: util.install_packages([kernel_package], target=target) return # uname[2] is kernel name (ie: 3.16.0-7-generic) # version gets X.Y.Z, flavor gets anything after second '-'. kernel = os.uname()[2] codename, _ = util.subp(['lsb_release', '--codename', '--short'], capture=True, target=target) codename = codename.strip() version, abi, flavor = kernel.split('-', 2) try: map_suffix = mapping[codename][version] except KeyError: LOG.warn("Couldn't detect kernel package to install for %s." % kernel) if kernel_fallback is not None: util.install_packages([kernel_fallback], target=target) return package = "linux-{flavor}{map_suffix}".format( flavor=flavor, map_suffix=map_suffix) if util.has_pkg_available(package, target): if util.has_pkg_installed(package, target): LOG.debug("Kernel package '%s' already installed", package) else: LOG.debug("installing kernel package '%s'", package) util.install_packages([package], target=target) else: if kernel_fallback is not None: LOG.info("Kernel package '%s' not available. " "Installing fallback package '%s'.", package, kernel_fallback) util.install_packages([kernel_fallback], target=target) else: LOG.warn("Kernel package '%s' not available and no fallback." " System may not boot.", package) def uefi_remove_old_loaders(grubcfg, target): """Removes the old UEFI loaders from efibootmgr.""" efi_output = util.get_efibootmgr(target) current_uefi_boot = efi_output.get('current', None) old_efi_entries = { entry: info for entry, info in efi_output['entries'].items() if re.match(r'^.*File\(\\EFI.*$', info['path']) } old_efi_entries.pop(current_uefi_boot, None) remove_old_loaders = grubcfg.get('remove_old_uefi_loaders', True) if old_efi_entries: if remove_old_loaders: with util.ChrootableTarget(target) as in_chroot: for entry, info in old_efi_entries.items(): LOG.debug("removing old UEFI entry: %s" % info['name']) in_chroot.subp( ['efibootmgr', '-B', '-b', entry], capture=True) else: LOG.debug( "Skipped removing %d old UEFI entrie%s.", len(old_efi_entries), '' if len(old_efi_entries) == 1 else 's') for info in old_efi_entries.values(): LOG.debug( "UEFI entry '%s' might no longer exist and " "should be removed.", info['name']) def uefi_reorder_loaders(grubcfg, target): """Reorders the UEFI BootOrder to place BootCurrent first. The specifically doesn't try to do to much. The order in which grub places a new EFI loader is up to grub. This only moves the BootCurrent to the front of the BootOrder. 
""" if grubcfg.get('reorder_uefi', True): efi_output = util.get_efibootmgr(target) currently_booted = efi_output.get('current', None) boot_order = efi_output.get('order', []) if currently_booted: if currently_booted in boot_order: boot_order.remove(currently_booted) boot_order = [currently_booted] + boot_order new_boot_order = ','.join(boot_order) LOG.debug( "Setting currently booted %s as the first " "UEFI loader.", currently_booted) LOG.debug( "New UEFI boot order: %s", new_boot_order) with util.ChrootableTarget(target) as in_chroot: in_chroot.subp(['efibootmgr', '-o', new_boot_order]) else: LOG.debug("Skipped reordering of UEFI boot methods.") LOG.debug("Currently booted UEFI loader might no longer boot.") def setup_grub(cfg, target): # target is the path to the mounted filesystem # FIXME: these methods need moving to curtin.block # and using them from there rather than commands.block_meta from curtin.commands.block_meta import (extract_storage_ordered_dict, get_path_to_storage_volume) grubcfg = cfg.get('grub', {}) # copy legacy top level name if 'grub_install_devices' in cfg and 'install_devices' not in grubcfg: grubcfg['install_devices'] = cfg['grub_install_devices'] LOG.debug("setup grub on target %s", target) # if there is storage config, look for devices tagged with 'grub_device' storage_cfg_odict = None try: storage_cfg_odict = extract_storage_ordered_dict(cfg) except ValueError as e: pass if storage_cfg_odict: storage_grub_devices = [] for item_id, item in storage_cfg_odict.items(): if not item.get('grub_device'): continue LOG.debug("checking: %s", item) storage_grub_devices.append( get_path_to_storage_volume(item_id, storage_cfg_odict)) if len(storage_grub_devices) > 0: grubcfg['install_devices'] = storage_grub_devices LOG.debug("install_devices: %s", grubcfg.get('install_devices')) if 'install_devices' in grubcfg: instdevs = grubcfg.get('install_devices') if isinstance(instdevs, str): instdevs = [instdevs] if instdevs is None: LOG.debug("grub installation disabled by config") else: # If there were no install_devices found then we try to do the right # thing. That right thing is basically installing on all block # devices that are mounted. On powerpc, though it means finding PrEP # partitions. devs = block.get_devices_for_mp(target) blockdevs = set() for maybepart in devs: try: (blockdev, part) = block.get_blockdev_for_partition(maybepart) blockdevs.add(blockdev) except ValueError as e: # if there is no syspath for this device such as a lvm # or raid device, then a ValueError is raised here. LOG.debug("failed to find block device for %s", maybepart) if platform.machine().startswith("ppc64"): # assume we want partitions that are 4100 (PReP). The snippet here # just prints the partition number partitions of that type. shnip = textwrap.dedent(""" export LANG=C; for d in "$@"; do sgdisk "$d" --print | awk '$6 == prep { print d $1 }' "d=$d" prep=4100 done """) try: out, err = util.subp( ['sh', '-c', shnip, '--'] + list(blockdevs), capture=True) instdevs = str(out).splitlines() if not instdevs: LOG.warn("No power grub target partitions found!") instdevs = None except util.ProcessExecutionError as e: LOG.warn("Failed to find power grub partitions: %s", e) instdevs = None else: instdevs = list(blockdevs) # UEFI requires grub-efi-{arch}. If a signed version of that package # exists then it will be installed. 
if util.is_uefi_bootable(): arch = util.get_architecture() pkgs = ['grub-efi-%s' % arch] # Architecture might support a signed UEFI loader uefi_pkg_signed = 'grub-efi-%s-signed' % arch if util.has_pkg_available(uefi_pkg_signed): pkgs.append(uefi_pkg_signed) # AMD64 has shim-signed for SecureBoot support if arch == "amd64": pkgs.append("shim-signed") # Install the UEFI packages needed for the architecture util.install_packages(pkgs, target=target) env = os.environ.copy() replace_default = grubcfg.get('replace_linux_default', True) if str(replace_default).lower() in ("0", "false"): env['REPLACE_GRUB_LINUX_DEFAULT'] = "0" else: env['REPLACE_GRUB_LINUX_DEFAULT'] = "1" if instdevs: instdevs = [block.get_dev_name_entry(i)[1] for i in instdevs] else: instdevs = ["none"] if util.is_uefi_bootable() and grubcfg.get('update_nvram', True): uefi_remove_old_loaders(grubcfg, target) LOG.debug("installing grub to %s [replace_default=%s]", instdevs, replace_default) with util.ChrootableTarget(target): args = ['install-grub'] if util.is_uefi_bootable(): args.append("--uefi") if grubcfg.get('update_nvram', True): LOG.debug("GRUB UEFI enabling NVRAM updates") args.append("--update-nvram") else: LOG.debug("NOT enabling UEFI nvram updates") LOG.debug("Target system may not boot") args.append(target) # capture stdout and stderr joined. join_stdout_err = ['sh', '-c', 'exec "$0" "$@" 2>&1'] out, _err = util.subp( join_stdout_err + args + instdevs, env=env, capture=True) LOG.debug("%s\n%s\n", args, out) if util.is_uefi_bootable() and grubcfg.get('update_nvram', True): uefi_reorder_loaders(grubcfg, target) def update_initramfs(target=None, all_kernels=False): cmd = ['update-initramfs', '-u'] if all_kernels: cmd.extend(['-k', 'all']) with util.ChrootableTarget(target) as in_chroot: in_chroot.subp(cmd) def copy_fstab(fstab, target): if not fstab: LOG.warn("fstab variable not in state, not copying fstab") return shutil.copy(fstab, os.path.sep.join([target, 'etc/fstab'])) def copy_crypttab(crypttab, target): if not crypttab: LOG.warn("crypttab config must be specified, not copying") return shutil.copy(crypttab, os.path.sep.join([target, 'etc/crypttab'])) def copy_iscsi_conf(nodes_dir, target): if not nodes_dir: LOG.warn("nodes directory must be specified, not copying") return LOG.info("copying iscsi nodes database into target") shutil.copytree(nodes_dir, os.path.sep.join([target, 'etc/iscsi/nodes'])) def copy_mdadm_conf(mdadm_conf, target): if not mdadm_conf: LOG.warn("mdadm config must be specified, not copying") return LOG.info("copying mdadm.conf into target") shutil.copy(mdadm_conf, os.path.sep.join([target, 'etc/mdadm/mdadm.conf'])) def apply_networking(target, state): netconf = state.get('network_config') interfaces = state.get('interfaces') def is_valid_src(infile): with open(infile, 'r') as fp: content = fp.read() if len(content.split('\n')) > 1: return True return False if is_valid_src(netconf): LOG.info("applying network_config") apply_net.apply_net(target, network_state=None, network_config=netconf) else: LOG.debug("copying interfaces") copy_interfaces(interfaces, target) def copy_interfaces(interfaces, target): if not interfaces: LOG.warn("no interfaces file to copy!") return eni = os.path.sep.join([target, 'etc/network/interfaces']) shutil.copy(interfaces, eni) def copy_dname_rules(rules_d, target): if not rules_d: LOG.warn("no udev rules directory to copy") return for rule in os.listdir(rules_d): target_file = os.path.join( target, "etc/udev/rules.d", "%s.rules" % rule) shutil.copy(os.path.join(rules_d, 
rule), target_file) def restore_dist_interfaces(cfg, target): # cloud images have a link of /etc/network/interfaces into /run eni = os.path.sep.join([target, 'etc/network/interfaces']) if not cfg.get('restore_dist_interfaces', True): return rp = os.path.realpath(eni) if (os.path.exists(eni + ".dist") and (rp.startswith("/run") or rp.startswith(target + "/run"))): LOG.debug("restoring dist interfaces, existing link pointed to /run") shutil.move(eni, eni + ".old") shutil.move(eni + ".dist", eni) def add_swap(cfg, target, fstab): # add swap file per cfg to filesystem root at target. update fstab. # # swap: # filename: 'swap.img', # size: None # (or 1G) # maxsize: 2G if 'swap' in cfg and not cfg.get('swap'): LOG.debug("disabling 'add_swap' due to config") return swapcfg = cfg.get('swap', {}) fname = swapcfg.get('filename', None) size = swapcfg.get('size', None) maxsize = swapcfg.get('maxsize', None) if size: size = util.human2bytes(str(size)) if maxsize: maxsize = util.human2bytes(str(maxsize)) swap.setup_swapfile(target=target, fstab=fstab, swapfile=fname, size=size, maxsize=maxsize) def detect_and_handle_multipath(cfg, target): DEFAULT_MULTIPATH_PACKAGES = ['multipath-tools-boot'] mpcfg = cfg.get('multipath', {}) mpmode = mpcfg.get('mode', 'auto') mppkgs = mpcfg.get('packages', DEFAULT_MULTIPATH_PACKAGES) mpbindings = mpcfg.get('overwrite_bindings', True) if isinstance(mppkgs, str): mppkgs = [mppkgs] if mpmode == 'disabled': return if mpmode == 'auto' and not block.detect_multipath(target): return LOG.info("Detected multipath devices. Installing support via %s", mppkgs) util.install_packages(mppkgs, target=target) replace_spaces = True try: # check in-target version pkg_ver = util.get_package_version('multipath-tools', target=target) LOG.debug("get_package_version:\n%s", pkg_ver) LOG.debug("multipath version is %s (major=%s minor=%s micro=%s)", pkg_ver['semantic_version'], pkg_ver['major'], pkg_ver['minor'], pkg_ver['micro']) # multipath-tools versions < 0.5.0 do _NOT_ want whitespace replaced # i.e. 0.4.X in Trusty. if pkg_ver['semantic_version'] < 500: replace_spaces = False except Exception as e: LOG.warn("failed reading multipath-tools version, " "assuming it wants no spaces in wwids: %s", e) multipath_cfg_path = os.path.sep.join([target, '/etc/multipath.conf']) multipath_bind_path = os.path.sep.join([target, '/etc/multipath/bindings']) # We don't want to overwrite multipath.conf file provided by the image. if not os.path.isfile(multipath_cfg_path): # Without user_friendly_names option enabled system fails to boot # if any of the disks has spaces in its name. Package multipath-tools # has bug opened for this issue (LP: 1432062) but it was not fixed yet. 
multipath_cfg_content = '\n'.join( ['# This file was created by curtin while installing the system.', 'defaults {', ' user_friendly_names yes', '}', '']) util.write_file(multipath_cfg_path, content=multipath_cfg_content) if mpbindings or not os.path.isfile(multipath_bind_path): # we do assume that get_devices_for_mp()[0] is / target_dev = block.get_devices_for_mp(target)[0] wwid = block.get_scsi_wwid(target_dev, replace_whitespace=replace_spaces) blockdev, partno = block.get_blockdev_for_partition(target_dev) mpname = "mpath0" grub_dev = "/dev/mapper/" + mpname if partno is not None: grub_dev += "-part%s" % partno LOG.debug("configuring multipath install for root=%s wwid=%s", grub_dev, wwid) multipath_bind_content = '\n'.join( ['# This file was created by curtin while installing the system.', "%s %s" % (mpname, wwid), '# End of content generated by curtin.', '# Everything below is maintained by multipath subsystem.', '']) util.write_file(multipath_bind_path, content=multipath_bind_content) grub_cfg = os.path.sep.join( [target, '/etc/default/grub.d/50-curtin-multipath.cfg']) msg = '\n'.join([ '# Written by curtin for multipath device wwid "%s"' % wwid, 'GRUB_DEVICE=%s' % grub_dev, 'GRUB_DISABLE_LINUX_UUID=true', '']) util.write_file(grub_cfg, content=msg) else: LOG.warn("Not sure how this will boot") # Initrams needs to be updated to include /etc/multipath.cfg # and /etc/multipath/bindings files. update_initramfs(target, all_kernels=True) def detect_required_packages(cfg): """ detect packages that will be required in-target by custom config items """ mapping = { 'storage': block.detect_required_packages_mapping(), 'network': net.detect_required_packages_mapping(), } needed_packages = [] for cfg_type, cfg_map in mapping.items(): # skip missing or invalid config items, configs may # only have network or storage, not always both if not isinstance(cfg.get(cfg_type), dict): continue cfg_version = cfg[cfg_type].get('version') if not isinstance(cfg_version, int) or cfg_version not in cfg_map: msg = ('Supplied configuration version "%s", for config type' '"%s" is not present in the known mapping.' % (cfg_version, cfg_type)) raise ValueError(msg) mapped_config = cfg_map[cfg_version] found_reqs = mapped_config['handler'](cfg, mapped_config['mapping']) needed_packages.extend(found_reqs) LOG.debug('Curtin config dependencies requires additional packages: %s', needed_packages) return needed_packages def install_missing_packages(cfg, target): ''' describe which operation types will require specific packages 'custom_config_key': { 'pkg1': ['op_name_1', 'op_name_2', ...] } ''' installed_packages = util.get_installed_packages(target) needed_packages = set([pkg for pkg in detect_required_packages(cfg) if pkg not in installed_packages]) arch_packages = { 's390x': [('s390-tools', 'zipl')], } for pkg, cmd in arch_packages.get(platform.machine(), []): if not util.which(cmd, target=target): if pkg not in needed_packages: needed_packages.add(pkg) # Filter out ifupdown network packages on netplan enabled systems. if 'ifupdown' not in installed_packages and 'nplan' in installed_packages: drops = set(['bridge-utils', 'ifenslave', 'vlan']) if needed_packages.union(drops): LOG.debug("Skipping install of %s. 
Not needed on netplan system.", needed_packages.union(drops)) needed_packages = needed_packages.difference(drops) if needed_packages: to_add = list(sorted(needed_packages)) state = util.load_command_environment() with events.ReportEventStack( name=state.get('report_stack_prefix'), reporting_enabled=True, level="INFO", description="Installing packages on target system: " + str(to_add)): util.install_packages(to_add, target=target) def system_upgrade(cfg, target): """run system-upgrade (apt-get dist-upgrade) or other in target. config: system_upgrade: enabled: False """ mycfg = {'system_upgrade': {'enabled': False}} config.merge_config(mycfg, cfg) mycfg = mycfg.get('system_upgrade') if not isinstance(mycfg, dict): LOG.debug("system_upgrade disabled by config. entry not a dict.") return if not config.value_as_boolean(mycfg.get('enabled', True)): LOG.debug("system_upgrade disabled by config.") return util.system_upgrade(target=target) def inject_pollinate_user_agent_config(ua_cfg, target): """Write out user-agent config dictionary to pollinate's user-agent file (/etc/pollinate/add-user-agent) in target. """ if not isinstance(ua_cfg, dict): raise ValueError('ua_cfg is not a dictionary: %s', ua_cfg) pollinate_cfg = util.target_path(target, '/etc/pollinate/add-user-agent') comment = "# written by curtin" content = "\n".join(["%s/%s %s" % (ua_key, ua_val, comment) for ua_key, ua_val in ua_cfg.items()]) + "\n" util.write_file(pollinate_cfg, content=content) def handle_pollinate_user_agent(cfg, target): """Configure the pollinate user-agent if provided configuration pollinate: user_agent: false # disable writing out a user-agent string # custom agent key/value pairs pollinate: user_agent: key1: value1 key2: value2 No config will result in curtin fetching: curtin version maas version (via endpoint URL, if present) """ pcfg = cfg.get('pollinate') if not isinstance(pcfg, dict): pcfg = {'user_agent': {}} uacfg = pcfg.get('user_agent', {}) if uacfg is False: return # set curtin version uacfg['curtin'] = curtin_version.version_string() # maas configures a curtin reporting webhook handler with # an endpoint URL. This url is used to query the MAAS REST # api to extract the exact maas version. maas_reporting = cfg.get('reporting', {}).get('maas', None) if maas_reporting: endpoint = maas_reporting.get('endpoint') maas_version = get_maas_version(endpoint) if maas_version: uacfg['maas'] = maas_version['version'] inject_pollinate_user_agent_config(uacfg, target) def handle_cloudconfig(cfg, base_dir=None): """write cloud-init configuration files into base_dir. 
cloudconfig format is a dictionary of keys and values of content cloudconfig: cfg-datasource: content: | #cloud-cfg datasource_list: [ MAAS ] cfg-maas: content: | #cloud-cfg reporting: maas: { consumer_key: 8cW9kadrWZcZvx8uWP, endpoint: 'http://XXX', token_key: jD57DB9VJYmDePCRkq, token_secret: mGFFMk6YFLA3h34QHCv22FjENV8hJkRX, type: webhook} """ # check that cfg is dict if not isinstance(cfg, dict): raise ValueError("cloudconfig configuration is not in dict format") # for each item in the dict # generate a path based on item key # if path is already in the item, LOG warning, and use generated path for cfgname, cfgvalue in cfg.items(): cfgpath = "50-cloudconfig-%s.cfg" % cfgname if 'path' in cfgvalue: LOG.warning("cloudconfig ignoring 'path' key in config") cfgvalue['path'] = cfgpath # re-use write_files format and adjust target to prepend LOG.debug('Calling write_files with cloudconfig @ %s', base_dir) LOG.debug('Injecting cloud-config:\n%s', cfg) futil.write_files(cfg, base_dir) def ubuntu_core_curthooks(cfg, target=None): """ Ubuntu-Core 16 images cannot execute standard curthooks Instead we copy in any cloud-init configuration to the 'LABEL=writable' partition mounted at target. """ ubuntu_core_target = os.path.join(target, "system-data") cc_target = os.path.join(ubuntu_core_target, 'etc/cloud/cloud.cfg.d') cloudconfig = cfg.get('cloudconfig', None) if cloudconfig: # remove cloud-init.disabled, if found cloudinit_disable = os.path.join(ubuntu_core_target, 'etc/cloud/cloud-init.disabled') if os.path.exists(cloudinit_disable): util.del_file(cloudinit_disable) handle_cloudconfig(cloudconfig, base_dir=cc_target) netconfig = cfg.get('network', None) if netconfig: LOG.info('Writing network configuration') ubuntu_core_netconfig = os.path.join(cc_target, "50-curtin-networking.cfg") util.write_file(ubuntu_core_netconfig, content=config.dump_config({'network': netconfig})) def rpm_get_dist_id(target): """Use rpm command to extract the '%rhel' distro macro which returns the major os version id (6, 7, 8). This works for centos or rhel """ with util.ChrootableTarget(target) as in_chroot: dist, _ = in_chroot.subp(['rpm', '-E', '%rhel'], capture=True) return dist.rstrip() def centos_apply_network_config(netcfg, target=None): """ CentOS images execute built-in curthooks which only supports simple networking configuration. This hook enables advanced network configuration via config passthrough to the target. 
""" def cloud_init_repo(version): if not version: raise ValueError('Missing required version parameter') return CLOUD_INIT_YUM_REPO_TEMPLATE % version if netcfg: LOG.info('Removing embedded network configuration (if present)') ifcfgs = glob.glob(util.target_path(target, 'etc/sysconfig/network-scripts') + '/ifcfg-*') # remove ifcfg-* (except ifcfg-lo) for ifcfg in ifcfgs: if os.path.basename(ifcfg) != "ifcfg-lo": util.del_file(ifcfg) LOG.info('Checking cloud-init in target [%s] for network ' 'configuration passthrough support.', target) passthrough = net.netconfig_passthrough_available(target) LOG.debug('passthrough available via in-target: %s', passthrough) # if in-target cloud-init is not updated, upgrade via cloud-init repo if not passthrough: cloud_init_yum_repo = ( util.target_path(target, 'etc/yum.repos.d/curtin-cloud-init.repo')) # Inject cloud-init daily yum repo util.write_file(cloud_init_yum_repo, content=cloud_init_repo(rpm_get_dist_id(target))) # we separate the installation of repository packages (epel, # cloud-init-el-release) as we need a new invocation of yum # to read the newly installed repo files. YUM_CMD = ['yum', '-y', '--noplugins', 'install'] retries = [1] * 30 with util.ChrootableTarget(target) as in_chroot: # ensure up-to-date ca-certificates to handle https mirror # connections in_chroot.subp(YUM_CMD + ['ca-certificates'], capture=True, log_captured=True, retries=retries) in_chroot.subp(YUM_CMD + ['epel-release'], capture=True, log_captured=True, retries=retries) in_chroot.subp(YUM_CMD + ['cloud-init-el-release'], log_captured=True, capture=True, retries=retries) in_chroot.subp(YUM_CMD + ['cloud-init'], capture=True, log_captured=True, retries=retries) # remove cloud-init el-stable bootstrap repo config as the # cloud-init-el-release package points to the correct repo util.del_file(cloud_init_yum_repo) # install bridge-utils if needed with util.ChrootableTarget(target) as in_chroot: try: in_chroot.subp(['rpm', '-q', 'bridge-utils'], capture=False, rcs=[0]) except util.ProcessExecutionError: LOG.debug('Image missing bridge-utils package, installing') in_chroot.subp(YUM_CMD + ['bridge-utils'], capture=True, log_captured=True, retries=retries) LOG.info('Passing network configuration through to target') net.render_netconfig_passthrough(target, netconfig={'network': netcfg}) def target_is_ubuntu_core(target): """Check if Ubuntu-Core specific directory is present at target""" if target: return os.path.exists(util.target_path(target, 'system-data/var/lib/snapd')) return False def target_is_centos(target): """Check if CentOS specific file is present at target""" if target: return os.path.exists(util.target_path(target, 'etc/centos-release')) return False def target_is_rhel(target): """Check if RHEL specific file is present at target""" if target: return os.path.exists(util.target_path(target, 'etc/redhat-release')) return False def curthooks(args): state = util.load_command_environment() if args.target is not None: target = args.target else: target = state['target'] if target is None: sys.stderr.write("Unable to find target. 
" "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) cfg = config.load_command_config(args, state) stack_prefix = state.get('report_stack_prefix', '') # if curtin-hooks hook exists in target we can defer to the in-target hooks if util.run_hook_if_exists(target, 'curtin-hooks'): # For vmtests to force execute centos_apply_network_config, uncomment # the value in examples/tests/centos_defaults.yaml if cfg.get('_ammend_centos_curthooks'): if cfg.get('cloudconfig'): handle_cloudconfig( cfg['cloudconfig'], base_dir=util.target_path(target, 'etc/cloud/cloud.cfg.d')) if target_is_centos(target) or target_is_rhel(target): LOG.info('Detected RHEL/CentOS image, running extra hooks') with events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level="INFO", description="Configuring CentOS for first boot"): centos_apply_network_config(cfg.get('network', {}), target) sys.exit(0) if target_is_ubuntu_core(target): LOG.info('Detected Ubuntu-Core image, running hooks') with events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level="INFO", description="Configuring Ubuntu-Core for first boot"): ubuntu_core_curthooks(cfg, target) sys.exit(0) with events.ReportEventStack( name=stack_prefix + '/writing-config', reporting_enabled=True, level="INFO", description="configuring apt configuring apt"): do_apt_config(cfg, target) disable_overlayroot(cfg, target) # LP: #1742560 prevent zfs-dkms from being installed (Xenial) if util.lsb_release(target=target)['codename'] == 'xenial': util.apt_update(target=target) with util.ChrootableTarget(target) as in_chroot: in_chroot.subp(['apt-mark', 'hold', 'zfs-dkms']) # packages may be needed prior to installing kernel with events.ReportEventStack( name=stack_prefix + '/installing-missing-packages', reporting_enabled=True, level="INFO", description="installing missing packages"): install_missing_packages(cfg, target) # If a /etc/iscsi/nodes/... file was created by block_meta then it # needs to be copied onto the target system nodes_location = os.path.join(os.path.split(state['fstab'])[0], "nodes") if os.path.exists(nodes_location): copy_iscsi_conf(nodes_location, target) # do we need to reconfigure open-iscsi? 
# If a mdadm.conf file was created by block_meta than it needs to be copied # onto the target system mdadm_location = os.path.join(os.path.split(state['fstab'])[0], "mdadm.conf") if os.path.exists(mdadm_location): copy_mdadm_conf(mdadm_location, target) # as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052 # reconfigure mdadm util.subp(['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'], data=None, target=target) with events.ReportEventStack( name=stack_prefix + '/installing-kernel', reporting_enabled=True, level="INFO", description="installing kernel"): setup_zipl(cfg, target) install_kernel(cfg, target) run_zipl(cfg, target) restore_dist_interfaces(cfg, target) with events.ReportEventStack( name=stack_prefix + '/setting-up-swap', reporting_enabled=True, level="INFO", description="setting up swap"): add_swap(cfg, target, state.get('fstab')) with events.ReportEventStack( name=stack_prefix + '/apply-networking-config', reporting_enabled=True, level="INFO", description="apply networking config"): apply_networking(target, state) with events.ReportEventStack( name=stack_prefix + '/writing-etc-fstab', reporting_enabled=True, level="INFO", description="writing etc/fstab"): copy_fstab(state.get('fstab'), target) with events.ReportEventStack( name=stack_prefix + '/configuring-multipath', reporting_enabled=True, level="INFO", description="configuring multipath"): detect_and_handle_multipath(cfg, target) with events.ReportEventStack( name=stack_prefix + '/system-upgrade', reporting_enabled=True, level="INFO", description="updating packages on target system"): system_upgrade(cfg, target) with events.ReportEventStack( name=stack_prefix + '/pollinate-user-agent', reporting_enabled=True, level="INFO", description="configuring pollinate user-agent on target system"): handle_pollinate_user_agent(cfg, target) # If a crypttab file was created by block_meta than it needs to be copied # onto the target system, and update_initramfs() needs to be run, so that # the cryptsetup hooks are properly configured on the installed system and # it will be able to open encrypted volumes at boot. crypttab_location = os.path.join(os.path.split(state['fstab'])[0], "crypttab") if os.path.exists(crypttab_location): copy_crypttab(crypttab_location, target) update_initramfs(target) # If udev dname rules were created, copy them to target udev_rules_d = os.path.join(state['scratch'], "rules.d") if os.path.isdir(udev_rules_d): copy_dname_rules(udev_rules_d, target) # As a rule, ARMv7 systems don't use grub. This may change some # day, but for now, assume no. They do require the initramfs # to be updated, and this also triggers boot loader setup via # flash-kernel. machine = platform.machine() if (machine.startswith('armv7') or machine.startswith('s390x') or machine.startswith('aarch64') and not util.is_uefi_bootable()): update_initramfs(target) else: setup_grub(cfg, target) sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, curthooks) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/extract.py000066400000000000000000000107551326565350400215720ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys import tempfile import curtin.config from curtin.log import LOG from curtin import util from curtin.futil import write_files from curtin.reporter import events from curtin import url_helper from . 
import populate_one_subcmd

CMD_ARGUMENTS = (
    ((('-t', '--target'),
      {'help': ('target directory to extract to (root) '
                '[default TARGET_MOUNT_POINT]'),
       'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT')}),
     (('sources',),
      {'help': 'the sources to install [default read from CONFIG]',
       'nargs': '*'}),
     )
)


def tar_xattr_opts(cmd=None):
    # if tar cmd supports xattrs, return the required flags to extract them.
    if cmd is None:
        cmd = ['tar']

    if isinstance(cmd, str):
        cmd = [cmd]

    (out, _err) = util.subp(cmd + ['--help'], capture=True)

    if "xattr" in out:
        return ['--xattrs', '--xattrs-include=*']
    return []


def extract_root_tgz_url(url, target):
    # extract a -root.tar.gz url in the 'target' directory
    path = _path_from_file_url(url)
    if path != url or os.path.isfile(path):
        util.subp(args=['tar', '-C', target] + tar_xattr_opts() +
                  ['-Sxpzf', path, '--numeric-owner'])
        return

    # Uses smtar to avoid specifying the compression type
    util.subp(args=['sh', '-cf',
                    ('wget "$1" --progress=dot:mega -O - |'
                     'smtar -C "$2" ' + ' '.join(tar_xattr_opts()) +
                     ' ' + '-Sxpf - --numeric-owner'),
                    '--', url, target])


def extract_root_fsimage_url(url, target):
    path = _path_from_file_url(url)
    if path != url or os.path.isfile(path):
        return _extract_root_fsimage(path, target)

    wfp = tempfile.NamedTemporaryFile(suffix=".img", delete=False)
    wfp.close()
    try:
        url_helper.download(url, wfp.name)
        return _extract_root_fsimage(wfp.name, target)
    finally:
        os.unlink(wfp.name)


def _extract_root_fsimage(path, target):
    mp = tempfile.mkdtemp()
    try:
        util.subp(['mount', '-o', 'loop,ro', path, mp], capture=True)
    except util.ProcessExecutionError as e:
        LOG.error("Failed to mount '%s' for extraction: %s", path, e)
        os.rmdir(mp)
        raise e
    try:
        return copy_to_target(mp, target)
    finally:
        util.subp(['umount', mp])
        os.rmdir(mp)


def copy_to_target(source, target):
    if source.startswith("cp://"):
        source = source[5:]
    source = os.path.abspath(source)
    util.subp(args=['sh', '-c',
                    ('mkdir -p "$2" && cd "$2" && '
                     'rsync -aXHAS --one-file-system "$1/" .'),
                    '--', source, target])


def _path_from_file_url(url):
    return url[7:] if url.startswith("file://") else url


def extract(args):
    if not args.target:
        raise ValueError("Target must be defined or set in environment")

    state = util.load_command_environment()
    cfg = curtin.config.load_command_config(args, state)

    sources = args.sources
    target = args.target
    if not sources:
        if not cfg.get('sources'):
            raise ValueError("'sources' must be on cmdline or in config")
        sources = cfg.get('sources')

    if isinstance(sources, dict):
        sources = [sources[k] for k in sorted(sources.keys())]

    sources = [util.sanitize_source(s) for s in sources]

    LOG.debug("Installing sources: %s to target at %s" % (sources, target))
    stack_prefix = state.get('report_stack_prefix', '')

    for source in sources:
        with events.ReportEventStack(
                name=stack_prefix, reporting_enabled=True, level="INFO",
                description="acquiring and extracting image from %s" %
                source['uri']):
            if source['type'].startswith('dd-'):
                continue
            if source['uri'].startswith("cp://"):
                copy_to_target(source['uri'], target)
            elif source['type'] == "fsimage":
                extract_root_fsimage_url(source['uri'], target=target)
            else:
                extract_root_tgz_url(source['uri'], target=target)

    if cfg.get('write_files'):
        LOG.info("Applying write_files from config.")
        write_files(cfg['write_files'], target)
    else:
        LOG.info("No write_files in config.")

    sys.exit(0)


def POPULATE_SUBCMD(parser):
    populate_one_subcmd(parser, CMD_ARGUMENTS, extract)

# vi: ts=4 expandtab syntax=python
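# Illustrative sketch (comment only, not part of the extract command): how
# extract() above dispatches a single sanitized source entry. The dict shape
# mirrors what util.sanitize_source() returns; the URI value is a
# hypothetical example.
#
#     source = {'type': 'tgz', 'uri': 'http://example.com/root.tar.gz'}
#     if source['type'].startswith('dd-'):
#         pass                      # dd-* images are handled by other stages
#     elif source['uri'].startswith('cp://'):
#         copy_to_target(source['uri'], target)
#     elif source['type'] == 'fsimage':
#         extract_root_fsimage_url(source['uri'], target=target)
#     else:
#         extract_root_tgz_url(source['uri'], target=target)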
curtin-18.1-5-g572ae5d6/curtin/commands/hook.py000066400000000000000000000014241326565350400210510ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys import curtin.config from curtin.log import LOG import curtin.util from . import populate_one_subcmd CMD_ARGUMENTS = ( ((('target',), {'help': 'finalize the provided directory [default TARGET_MOUNT_POINT]', 'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT'), 'nargs': '?'}), ) ) def hook(args): if not args.target: raise ValueError("Target must be provided or set in environment") LOG.debug("Finalizing %s" % args.target) curtin.util.run_hook_if_exists(args.target, "finalize") sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, hook) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/in_target.py000066400000000000000000000044221326565350400220660ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import pty import sys from curtin import util from . import populate_one_subcmd CMD_ARGUMENTS = ( ((('-a', '--allow-daemons'), {'help': 'do not disable daemons via invoke-rc.d', 'action': 'store_true', 'default': False, }), (('-i', '--interactive'), {'help': 'use command invoked interactively', 'action': 'store_true', 'default': False}), (('--capture',), {'help': 'capture/swallow output of command', 'action': 'store_true', 'default': False}), (('-t', '--target'), {'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ('command_args', {'help': 'run a command chrooted in the target', 'nargs': '*'}), ) ) def in_target_main(args): if args.target is not None: target = args.target else: state = util.load_command_environment() target = state['target'] if args.target is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) daemons = args.allow_daemons if util.target_path(args.target) == "/": sys.stderr.write("WARN: Target is /, daemons are allowed.\n") daemons = True cmd = args.command_args with util.ChrootableTarget(target, allow_daemons=daemons) as chroot: exit = 0 if not args.interactive: try: chroot.subp(cmd, capture=args.capture) except util.ProcessExecutionError as e: exit = e.exit_code else: if chroot.target != "/": cmd = ["chroot", chroot.target] + args.command_args # in python 3.4 pty.spawn started returning a value. # There, it is the status from os.waitpid. From testing (py3.6) # that seemse to be exit_code * 256. ret = pty.spawn(cmd) # pylint: disable=E1111 if ret is not None: exit = int(ret / 256) sys.exit(exit) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, in_target_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/install.py000066400000000000000000000433201326565350400215600ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import argparse from copy import deepcopy import json import os import re import shlex import shutil import subprocess import sys import tempfile from curtin.block import iscsi from curtin import config from curtin import util from curtin import version from curtin.log import LOG from curtin.reporter.legacy import load_reporter from curtin.reporter import events from . 
import populate_one_subcmd INSTALL_LOG = "/var/log/curtin/install.log" # Upon error, curtin creates a tar of all related logs at ERROR_TARFILE ERROR_TARFILE = '/var/log/curtin/curtin-error-logs.tar' SAVE_INSTALL_LOG = '/root/curtin-install.log' SAVE_INSTALL_CONFIG = '/root/curtin-install-cfg.yaml' INSTALL_START_MSG = ("curtin: Installation started. (%s)" % version.version_string()) INSTALL_PASS_MSG = "curtin: Installation finished." INSTALL_FAIL_MSG = "curtin: Installation failed with exception: {exception}" STAGE_DESCRIPTIONS = { 'early': 'preparing for installation', 'partitioning': 'configuring storage', 'network': 'configuring network', 'extract': 'writing install sources to disk', 'curthooks': 'configuring installed system', 'hook': 'finalizing installation', 'late': 'executing late commands', } CONFIG_BUILTIN = { 'sources': {}, 'stages': ['early', 'partitioning', 'network', 'extract', 'curthooks', 'hook', 'late'], 'extract_commands': {'builtin': ['curtin', 'extract']}, 'hook_commands': {'builtin': ['curtin', 'hook']}, 'partitioning_commands': { 'builtin': ['curtin', 'block-meta', 'simple']}, 'curthooks_commands': {'builtin': ['curtin', 'curthooks']}, 'late_commands': {'builtin': []}, 'network_commands': {'builtin': ['curtin', 'net-meta', 'auto']}, 'apply_net_commands': {'builtin': []}, 'install': {'log_file': INSTALL_LOG, 'error_tarfile': ERROR_TARFILE} } def clear_install_log(logfile): """Clear the installation log, so no previous installation is present.""" util.ensure_dir(os.path.dirname(logfile)) try: open(logfile, 'w').close() except Exception: pass def copy_install_log(logfile, target, log_target_path): """Copy curtin install log file to target system""" basemsg = 'Cannot copy curtin install log "%s" to target.' % logfile if not logfile: LOG.warn(basemsg) return if not os.path.isfile(logfile): LOG.warn(basemsg + " file does not exist.") return LOG.debug('Copying curtin install log from %s to target/%s', logfile, log_target_path) util.write_file( filename=util.target_path(target, log_target_path), content=util.load_file(logfile, decode=False), mode=0o400, omode="wb") def writeline_and_stdout(logfile, message): writeline(logfile, message) out = sys.stdout msg = message + "\n" if hasattr(out, 'buffer'): out = out.buffer # pylint: disable=no-member msg = msg.encode() out.write(msg) out.flush() def writeline(fname, output): """Write a line to a file.""" if not output.endswith('\n'): output += '\n' try: with open(fname, 'a') as fp: fp.write(output) except IOError: pass class WorkingDir(object): def __init__(self, config): top_d = tempfile.mkdtemp() state_d = os.path.join(top_d, 'state') target_d = config.get('install', {}).get('target') if not target_d: target_d = os.path.join(top_d, 'target') scratch_d = os.path.join(top_d, 'scratch') for p in (state_d, target_d, scratch_d): os.mkdir(p) netconf_f = os.path.join(state_d, 'network_config') netstate_f = os.path.join(state_d, 'network_state') interfaces_f = os.path.join(state_d, 'interfaces') config_f = os.path.join(state_d, 'config') fstab_f = os.path.join(state_d, 'fstab') with open(config_f, "w") as fp: json.dump(config, fp) # just touch these files to make sure they exist for f in (interfaces_f, config_f, fstab_f, netconf_f, netstate_f): with open(f, "ab") as fp: pass self.scratch = scratch_d self.target = target_d self.top = top_d self.interfaces = interfaces_f self.netconf = netconf_f self.netstate = netstate_f self.fstab = fstab_f self.config = config self.config_file = config_f def env(self): return ({'WORKING_DIR': 
self.scratch, 'OUTPUT_FSTAB': self.fstab, 'OUTPUT_INTERFACES': self.interfaces, 'OUTPUT_NETWORK_CONFIG': self.netconf, 'OUTPUT_NETWORK_STATE': self.netstate, 'TARGET_MOUNT_POINT': self.target, 'CONFIG': self.config_file}) class Stage(object): def __init__(self, name, commands, env, reportstack=None, logfile=None): self.name = name self.commands = commands self.env = env if logfile is None: logfile = INSTALL_LOG self.install_log = self._open_install_log(logfile) if hasattr(sys.stdout, 'buffer'): self.write_stdout = self._write_stdout3 else: self.write_stdout = self._write_stdout2 if reportstack is None: reportstack = events.ReportEventStack( name="stage-%s" % name, description="basic stage %s" % name, reporting_enabled=False) self.reportstack = reportstack def _open_install_log(self, logfile): """Open the install log.""" if not logfile: return None try: return open(logfile, 'ab') except IOError: return None def _write_stdout3(self, data): sys.stdout.buffer.write(data) # pylint: disable=no-member sys.stdout.flush() def _write_stdout2(self, data): sys.stdout.write(data) sys.stdout.flush() def write(self, data): """Write data to stdout and to the install_log.""" self.write_stdout(data) if self.install_log is not None: self.install_log.write(data) self.install_log.flush() def run(self): for cmdname in sorted(self.commands.keys()): cmd = self.commands[cmdname] if not cmd: continue cur_res = events.ReportEventStack( name=cmdname, description="running '%s'" % ' '.join(cmd), parent=self.reportstack, level="DEBUG") env = self.env.copy() env['CURTIN_REPORTSTACK'] = cur_res.fullname shell = not isinstance(cmd, list) with util.LogTimer(LOG.debug, cmdname): with cur_res: try: sp = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=env, shell=shell) except OSError as e: LOG.warn("%s command failed", cmdname) raise util.ProcessExecutionError(cmd=cmd, reason=e) output = b"" while True: data = sp.stdout.read(1) if not data and sp.poll() is not None: break self.write(data) output += data rc = sp.returncode if rc != 0: LOG.warn("%s command failed", cmdname) raise util.ProcessExecutionError( stdout=output, stderr="", exit_code=rc, cmd=cmd) def apply_power_state(pstate): """ power_state: delay: 5 mode: poweroff message: Bye Bye """ cmd = load_power_state(pstate) if not cmd: return LOG.info("powering off with %s", cmd) fid = os.fork() if fid == 0: try: util.subp(cmd) os._exit(0) except Exception as e: LOG.warn("%s returned non-zero: %s" % (cmd, e)) os._exit(1) return def load_power_state(pstate): """Returns a command to reboot the system if power_state should.""" if pstate is None: return None if not isinstance(pstate, dict): raise TypeError("power_state is not a dict.") opt_map = {'halt': '-H', 'poweroff': '-P', 'reboot': '-r'} mode = pstate.get("mode") if mode not in opt_map: raise TypeError("power_state[mode] required, must be one of: %s." 
% ','.join(opt_map.keys())) delay = pstate.get("delay", "5") if delay == "now": delay = "0" elif re.match(r"\+[0-9]+", str(delay)): delay = "%sm" % delay[1:] else: delay = str(delay) args = ["shutdown", opt_map[mode], "now"] if pstate.get("message"): args.append(pstate.get("message")) shcmd = ('sleep "$1" && shift; ' '[ -f /run/block-curtin-poweroff ] && exit 0; ' 'exec "$@"') return (['sh', '-c', shcmd, 'curtin-poweroff', delay] + args) def apply_kexec(kexec, target): """ load kexec kernel from target dir, similar to /etc/init.d/kexec-load kexec: mode: on """ grubcfg = "boot/grub/grub.cfg" target_grubcfg = os.path.join(target, grubcfg) if kexec is None or kexec.get("mode") != "on": return False if not isinstance(kexec, dict): raise TypeError("kexec is not a dict.") if not util.which('kexec'): util.install_packages('kexec-tools') if not os.path.isfile(target_grubcfg): raise ValueError("%s does not exist in target" % grubcfg) with open(target_grubcfg, "r") as fp: default = 0 menu_lines = [] # get the default grub boot entry number and menu entry line numbers for line_num, line in enumerate(fp, 1): if re.search(r"\bset default=\"[0-9]+\"\b", " %s " % line): default = int(re.sub(r"[^0-9]", '', line)) if re.search(r"\bmenuentry\b", " %s " % line): menu_lines.append(line_num) if not menu_lines: LOG.error("grub config file does not have a menuentry\n") return False # get the begin and end line numbers for default menuentry section, # using end of file if it's the last menuentry section begin = menu_lines[default] if begin != menu_lines[-1]: end = menu_lines[default + 1] - 1 else: end = line_num fp.seek(0) lines = fp.readlines() kernel = append = initrd = "" for i in range(begin, end): if 'linux' in lines[i].split(): split_line = shlex.split(lines[i]) kernel = os.path.join(target, split_line[1]) append = "--append=" + ' '.join(split_line[2:]) if 'initrd' in lines[i].split(): split_line = shlex.split(lines[i]) initrd = "--initrd=" + os.path.join(target, split_line[1]) if not kernel: LOG.error("grub config file does not have a kernel\n") return False LOG.debug("kexec -l %s %s %s" % (kernel, append, initrd)) util.subp(args=['kexec', '-l', kernel, append, initrd]) return True def migrate_proxy_settings(cfg): """Move the legacy proxy setting 'http_proxy' into cfg['proxy'].""" proxy = cfg.get('proxy', {}) if not isinstance(proxy, dict): raise ValueError("'proxy' in config is not a dictionary: %s" % proxy) if 'http_proxy' in cfg: hp = cfg['http_proxy'] if hp: if proxy.get('http_proxy', hp) != hp: LOG.warn("legacy http_proxy setting (%s) differs from " "proxy/http_proxy (%s), using %s", hp, proxy['http_proxy'], proxy['http_proxy']) else: LOG.debug("legacy 'http_proxy' migrated to proxy/http_proxy") proxy['http_proxy'] = hp del cfg['http_proxy'] cfg['proxy'] = proxy def cmd_install(args): from .collect_logs import create_log_tarfile cfg = deepcopy(CONFIG_BUILTIN) config.merge_config(cfg, args.config) for source in args.source: src = util.sanitize_source(source) cfg['sources']["%02d_cmdline" % len(cfg['sources'])] = src LOG.info(INSTALL_START_MSG) LOG.debug('LANG=%s', os.environ.get('LANG')) LOG.debug("merged config: %s" % cfg) if not len(cfg.get('sources', [])): raise util.BadUsage("no sources provided to install") for i in cfg['sources']: # we default to tgz for old style sources config cfg['sources'][i] = util.sanitize_source(cfg['sources'][i]) migrate_proxy_settings(cfg) for k in ('http_proxy', 'https_proxy', 'no_proxy'): if k in cfg['proxy']: os.environ[k] = cfg['proxy'][k] instcfg = 
cfg.get('install', {}) logfile = instcfg.get('log_file') error_tarfile = instcfg.get('error_tarfile') post_files = instcfg.get('post_files', [logfile]) # Generate curtin configuration dump and add to write_files unless # installation config disables dump yaml_dump_file = instcfg.get('save_install_config', SAVE_INSTALL_CONFIG) if yaml_dump_file: write_files = cfg.get('write_files', {}) write_files['curtin_install_cfg'] = { 'path': yaml_dump_file, 'permissions': '0400', 'owner': 'root:root', 'content': config.dump_config(cfg) } cfg['write_files'] = write_files # Load reporter clear_install_log(logfile) legacy_reporter = load_reporter(cfg) legacy_reporter.files = post_files writeline_and_stdout(logfile, INSTALL_START_MSG) args.reportstack.post_files = post_files try: workingd = WorkingDir(cfg) dd_images = util.get_dd_images(cfg.get('sources', {})) if len(dd_images) > 1: raise ValueError("You may not use more than one disk image") LOG.debug(workingd.env()) env = os.environ.copy() env.update(workingd.env()) for name in cfg.get('stages'): desc = STAGE_DESCRIPTIONS.get(name, "stage %s" % name) reportstack = events.ReportEventStack( "stage-%s" % name, description=desc, parent=args.reportstack) env['CURTIN_REPORTSTACK'] = reportstack.fullname with reportstack: commands_name = '%s_commands' % name with util.LogTimer(LOG.debug, 'stage_%s' % name): stage = Stage(name, cfg.get(commands_name, {}), env, reportstack=reportstack, logfile=logfile) stage.run() if apply_kexec(cfg.get('kexec'), workingd.target): cfg['power_state'] = {'mode': 'reboot', 'delay': 'now', 'message': "'rebooting with kexec'"} writeline_and_stdout(logfile, INSTALL_PASS_MSG) legacy_reporter.report_success() except Exception as e: exp_msg = INSTALL_FAIL_MSG.format(exception=e) writeline(logfile, exp_msg) LOG.error(exp_msg) legacy_reporter.report_failure(exp_msg) if error_tarfile: create_log_tarfile(error_tarfile, cfg) raise e finally: log_target_path = instcfg.get('save_install_log', SAVE_INSTALL_LOG) if log_target_path: copy_install_log(logfile, workingd.target, log_target_path) if instcfg.get('unmount', "") == "disabled": LOG.info('Skipping unmount: config disabled target unmounting') else: # unmount everything (including iscsi disks) util.do_umount(workingd.target, recursive=True) # The open-iscsi service in the ephemeral environment handles # disconnecting active sessions. On Artful release the systemd # unit file has conditionals that are not met at boot time and # results in open-iscsi service not being started; This breaks # shutdown on Artful releases. # Additionally, in release < Artful, if the storage configuration # is layered, like RAID over iscsi volumes, then disconnecting # iscsi sessions before stopping the raid device hangs. # As it turns out, letting the open-iscsi service take down the # session last is the cleanest way to handle all releases # regardless of what may be layered on top of the iscsi disks. # # Check if storage configuration has iscsi volumes and if so ensure # iscsi service is active before exiting install if iscsi.get_iscsi_disks_from_config(cfg): iscsi.restart_iscsi_service() shutil.rmtree(workingd.top) apply_power_state(cfg.get('power_state')) sys.exit(0) # we explicitly accept config on install for backwards compatibility CMD_ARGUMENTS = ( ((('-c', '--config'), {'help': 'read configuration from cfg', 'action': util.MergedCmdAppend, 'metavar': 'FILE', 'type': argparse.FileType("rb"), 'dest': 'cfgopts', 'default': []}), ('--set', {'action': util.MergedCmdAppend, 'help': ('define a config variable. 
key can be a "/" ' 'delimited path ("early_commands/cmd1=a"). if ' 'key starts with "json:" then val is loaded as ' 'json (json:stages="[\'early\']")'), 'metavar': 'key=val', 'dest': 'cfgopts'}), ('source', {'help': 'what to install', 'nargs': '*'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, cmd_install) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/main.py000066400000000000000000000156711326565350400210460ustar00rootroot00000000000000#!/usr/bin/python # This file is part of curtin. See LICENSE file for copyright and license info. import argparse import os import sys import traceback from .. import log from .. import util from ..deps import install_deps from .. import version VERSIONSTR = version.version_string() SUB_COMMAND_MODULES = [ 'apply_net', 'apt-config', 'block-attach-iscsi', 'block-detach-iscsi', 'block-info', 'block-meta', 'block-wipe', 'clear-holders', 'curthooks', 'collect-logs', 'extract', 'hook', 'install', 'mkfs', 'in-target', 'net-meta', 'pack', 'swap', 'system-install', 'system-upgrade', 'unmount', 'version', ] def add_subcmd(subparser, subcmd): modname = subcmd.replace("-", "_") subcmd_full = "curtin.commands.%s" % modname __import__(subcmd_full) try: popfunc = getattr(sys.modules[subcmd_full], 'POPULATE_SUBCMD') except AttributeError: raise AttributeError("No 'POPULATE_SUBCMD' in %s" % subcmd_full) popfunc(subparser.add_parser(subcmd)) class NoHelpParser(argparse.ArgumentParser): # ArgumentParser with forced 'add_help=False' def __init__(self, *args, **kwargs): kwargs.update({'add_help': False}) super(NoHelpParser, self).__init__(*args, **kwargs) def error(self, message): # without overriding this, argparse exits with bad usage raise ValueError("failed parsing arguments: %s" % message) def get_main_parser(stacktrace=False, verbosity=0, parser_class=argparse.ArgumentParser): parser = parser_class(prog='curtin', epilog='Version %s' % VERSIONSTR) parser.add_argument('--showtrace', action='store_true', default=stacktrace) parser.add_argument('-v', '--verbose', action='count', default=verbosity, dest='verbosity') parser.add_argument('--log-file', default=sys.stderr, type=argparse.FileType('w')) parser.add_argument('-c', '--config', action=util.MergedCmdAppend, help='read configuration from cfg', metavar='FILE', type=argparse.FileType("rb"), dest='main_cfgopts', default=[]) parser.add_argument('--install-deps', action='store_true', help='install dependencies as necessary', default=False) parser.add_argument('--set', action=util.MergedCmdAppend, help=('define a config variable. key can be a "/" ' 'delimited path ("early_commands/cmd1=a"). 
if ' 'key starts with "json:" then val is loaded as ' 'json (json:stages="[\'early\']")'), metavar='key=val', dest='main_cfgopts') parser.set_defaults(config={}) parser.set_defaults(reportstack=None) return parser def maybe_install_deps(args, stacktrace=True, verbosity=0): parser = get_main_parser(stacktrace=stacktrace, verbosity=verbosity, parser_class=NoHelpParser) subps = parser.add_subparsers(dest="subcmd", parser_class=NoHelpParser) for subcmd in SUB_COMMAND_MODULES: subps.add_parser(subcmd) install_only_args = [ ['-v', '--install-deps'], ['-vv', '--install-deps'], ['--install-deps', '-v'], ['--install-deps', '-vv'], ['--install-deps'], ] install_only = args in install_only_args if install_only: verbosity = 1 else: try: ns, unknown = parser.parse_known_args(args) verbosity = ns.verbosity if not ns.install_deps: return except ValueError: # bad usage will be reported by the real reporter return ret = install_deps(verbosity=verbosity) if ret != 0 or install_only: sys.exit(ret) return def main(argv=None): if argv is None: argv = sys.argv[1:] stacktrace = (os.environ.get('CURTIN_STACKTRACE', "0").lower() not in ("0", "false", "")) try: verbosity = int(os.environ.get('CURTIN_VERBOSITY', "0")) except ValueError: verbosity = 1 maybe_install_deps(argv, stacktrace=stacktrace, verbosity=verbosity) # Above here, only standard library modules can be assumed. from .. import config from ..reporter import (events, update_configuration) parser = get_main_parser(stacktrace=stacktrace, verbosity=verbosity) subps = parser.add_subparsers(dest="subcmd") for subcmd in SUB_COMMAND_MODULES: add_subcmd(subps, subcmd) args = parser.parse_args(argv) # merge config flags into a single config dictionary cfg_opts = args.main_cfgopts if hasattr(args, 'cfgopts'): cfg_opts += getattr(args, 'cfgopts') cfg = {} if cfg_opts: for (flag, val) in cfg_opts: if flag in ('-c', '--config'): config.merge_config_fp(cfg, val) val.close() elif flag in ('--set'): config.merge_cmdarg(cfg, val) else: cfg = config.load_command_config(args, util.load_command_environment()) args.config = cfg # if user gave cmdline arguments, then set environ so subsequent # curtin calls get those as default showtrace = args.showtrace if 'showtrace' in cfg: showtrace = str(cfg['showtrace']).lower() not in ("0", "false") os.environ['CURTIN_STACKTRACE'] = str(int(showtrace)) verbosity = args.verbosity if 'verbosity' in cfg: verbosity = int(cfg['verbosity']) os.environ['CURTIN_VERBOSITY'] = str(verbosity) if not getattr(args, 'func', None): # http://bugs.python.org/issue16308 parser.print_help() sys.exit(1) log.basicConfig(stream=args.log_file, verbosity=verbosity) paths = util.get_paths() if paths['helpers'] is None or paths['curtin_exe'] is None: raise OSError("Unable to find helpers or 'curtin' exe to add to path") path = os.environ['PATH'].split(':') for cand in (paths['helpers'], os.path.dirname(paths['curtin_exe'])): if cand not in [os.path.abspath(d) for d in path]: path.insert(0, cand) os.environ['PATH'] = ':'.join(path) # set up the reportstack update_configuration(cfg.get('reporting', {})) stack_prefix = (os.environ.get("CURTIN_REPORTSTACK", "") + "/cmd-%s" % args.subcmd) if stack_prefix.startswith("/"): stack_prefix = stack_prefix[1:] os.environ["CURTIN_REPORTSTACK"] = stack_prefix args.reportstack = events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level="DEBUG", description="curtin command %s" % args.subcmd) try: with args.reportstack: ret = args.func(args) sys.exit(ret) except Exception as e: if showtrace: 
traceback.print_exc() sys.stderr.write("%s\n" % e) sys.exit(3) if __name__ == '__main__': sys.exit(main()) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/mkfs.py000066400000000000000000000030261326565350400210510ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from . import populate_one_subcmd from curtin.block.mkfs import mkfs as run_mkfs from curtin.block.mkfs import valid_fstypes import sys CMD_ARGUMENTS = ( (('devices', {'help': 'create filesystem on the target volume(s) or storage config \ item(s)', 'metavar': 'DEVICE', 'action': 'store', 'nargs': '+'}), (('-f', '--fstype'), {'help': 'filesystem type to use. default is ext4', 'choices': sorted(valid_fstypes()), 'default': 'ext4', 'action': 'store'}), (('-l', '--label'), {'help': 'label to use for filesystem', 'action': 'store'}), (('-u', '--uuid'), {'help': 'uuid to use for filesystem', 'action': 'store'}), (('-s', '--strict'), {'help': 'exit if mkfs cannot do exactly what is specified', 'action': 'store_true', 'default': False}), (('-F', '--force'), {'help': 'continue if some data already exists on device', 'action': 'store_true', 'default': False}) ) ) def mkfs(args): for device in args.devices: uuid = run_mkfs(device, args.fstype, strict=args.strict, uuid=args.uuid, label=args.label, force=args.force) print("Created '%s' filesystem in '%s' with uuid '%s' and label '%s'" % (args.fstype, device, uuid, args.label)) sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, mkfs) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/net_meta.py000066400000000000000000000124331326565350400217070ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import argparse import os import sys from curtin import net from curtin.log import LOG import curtin.util as util import curtin.config as config from . 
import populate_one_subcmd DEVNAME_ALIASES = ['connected', 'configured', 'netboot'] def network_device(value): if value in DEVNAME_ALIASES: return value if (value.startswith('eth') or (value.startswith('en') and len(value) == 3)): return value raise argparse.ArgumentTypeError("%s does not look like a netdev name") def resolve_alias(alias): if alias == "connected": alldevs = net.get_devicelist() return [d for d in alldevs if net.is_physical(d) and net.is_up(d)] elif alias == "configured": alldevs = net.get_devicelist() return [d for d in alldevs if net.is_physical(d) and net.is_up(d) and net.is_connected(d)] elif alias == "netboot": # should read /proc/cmdline here for BOOTIF raise NotImplementedError("netboot alias not implemented") else: raise ValueError("'%s' is not an alias: %s", alias, DEVNAME_ALIASES) def interfaces_basic_dhcp(devices, macs=None): # return network configuration that says to dhcp on provided devices if macs is None: macs = {} for dev in devices: macs[dev] = net.get_interface_mac(dev) config = [] for dev in devices: config.append({ 'type': 'physical', 'name': dev, 'mac_address': macs.get(dev), 'subnets': [{'type': 'dhcp4'}]}) return {'network': {'version': 1, 'config': config}} def interfaces_custom(args): state = util.load_command_environment() cfg = config.load_command_config(args, state) network_config = cfg.get('network', []) if not network_config: raise Exception("network configuration is required by mode '%s' " "but not provided in the config file" % 'custom') return {'network': network_config} def net_meta(args): # curtin net-meta --devices connected dhcp # curtin net-meta --devices configured dhcp # curtin net-meta --devices netboot dhcp # curtin net-meta --devices connected custom # if network-config hook exists in target, # we do not run the builtin if util.run_hook_if_exists(args.target, 'network-config'): sys.exit(0) state = util.load_command_environment() cfg = config.load_command_config(args, state) if cfg.get("network") is not None: args.mode = "custom" eni = "etc/network/interfaces" if args.mode == "auto": if not args.devices: args.devices = ["connected"] t_eni = None if args.target: t_eni = os.path.sep.join((args.target, eni,)) if not os.path.isfile(t_eni): t_eni = None if t_eni: args.mode = "copy" else: args.mode = "dhcp" devices = [] if args.devices: for dev in args.devices: if dev in DEVNAME_ALIASES: devices += resolve_alias(dev) else: devices.append(dev) LOG.debug("net-meta mode is '%s'. devices=%s", args.mode, devices) output_network_config = os.environ.get("OUTPUT_NETWORK_CONFIG", "") if args.mode == "copy": if not args.target: raise argparse.ArgumentTypeError("mode 'copy' requires --target") t_eni = os.path.sep.join((args.target, "etc/network/interfaces",)) with open(t_eni, "r") as fp: content = fp.read() LOG.warn("net-meta mode is 'copy', static network interfaces files" "can be brittle. Copied interfaces: %s", content) target = args.output elif args.mode == "dhcp": target = output_network_config content = config.dump_config(interfaces_basic_dhcp(devices)) elif args.mode == 'custom': target = output_network_config content = config.dump_config(interfaces_custom(args)) else: raise Exception("Unexpected network config mode '%s'." % args.mode) if not target: raise Exception( "No target given for mode = '%s'. 
No where to write content: %s" % (args.mode, content)) LOG.debug("writing to file %s with network config: %s", target, content) if target == "-": sys.stdout.write(content) else: with open(target, "w") as fp: fp.write(content) sys.exit(0) CMD_ARGUMENTS = ( ((('-D', '--devices'), {'help': 'which devices to operate on', 'action': 'append', 'metavar': 'DEVICE', 'type': network_device}), (('-o', '--output'), {'help': 'file to write to. defaults to env["OUTPUT_INTERFACES"] or "-"', 'metavar': 'IFILE', 'action': 'store', 'default': os.environ.get('OUTPUT_INTERFACES', "-")}), (('-t', '--target'), {'help': 'operate on target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ('mode', {'help': 'meta-mode to use', 'choices': ['dhcp', 'copy', 'auto', 'custom']}) ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, net_meta) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/pack.py000066400000000000000000000023531326565350400210310ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import sys from curtin import pack from . import populate_one_subcmd CMD_ARGUMENTS = ( ((('-o', '--output'), {'help': 'where to write the archive to', 'action': 'store', 'metavar': 'FILE', 'default': "-", }), (('-a', '--add'), {'help': 'include FILE_PATH in archive at ARCHIVE_PATH', 'action': 'append', 'metavar': 'ARCHIVE_PATH:FILE_PATH', 'default': []}), ('command_args', {'help': 'command to run after extracting', 'nargs': '*'}), ) ) def pack_main(args): if args.output == "-": fdout = sys.stdout else: fdout = open(args.output, "w") delim = ":" addl = [] for tok in args.add: if delim not in tok: raise ValueError("'--add' argument '%s' did not have a '%s'", (tok, delim)) (archpath, filepath) = tok.split(":", 1) addl.append((archpath, filepath),) pack.pack(fdout, command=args.command_args, copy_files=addl) if args.output != "-": fdout.close() sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, pack_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/swap.py000066400000000000000000000042401326565350400210620ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys import curtin.swap as swap import curtin.util as util from . import populate_one_subcmd def swap_main(args): # curtin swap [--size=4G] [--target=/] [--fstab=/etc/fstab] [swap] state = util.load_command_environment() if args.target is not None: state['target'] = args.target if args.fstab is not None: state['fstab'] = args.fstab if state['target'] is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) size = args.size if size is not None and size.lower() == "auto": size = None if size is not None: try: size = util.human2bytes(size) except ValueError as e: sys.stderr.write("%s\n" % e) sys.exit(2) if args.maxsize is not None: args.maxsize = util.human2bytes(args.maxsize) swap.setup_swapfile(target=state['target'], fstab=state['fstab'], swapfile=args.swapfile, size=size, maxsize=args.maxsize) sys.exit(2) CMD_ARGUMENTS = ( ((('-f', '--fstab'), {'help': 'file to write to. defaults to env["OUTPUT_FSTAB"]', 'metavar': 'FSTAB', 'action': 'store', 'default': os.environ.get('OUTPUT_FSTAB')}), (('-t', '--target'), {'help': ('target filesystem root to add swap file to. 
' 'default is env[TARGET_MOUNT_POINT]'), 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), (('-s', '--size'), {'help': 'size of swap file (eg: 1G, 1500M, 1024K, 100000. def: "auto")', 'default': None, 'action': 'store'}), (('-M', '--maxsize'), {'help': 'maximum size of swap file (assuming "auto")', 'default': None, 'action': 'store'}), ('swapfile', {'help': 'path to swap file under target', 'default': 'swap.img', 'nargs': '?'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, swap_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/system_install.py000066400000000000000000000025151326565350400231650ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys import curtin.util as util from . import populate_one_subcmd from curtin.log import LOG def system_install_pkgs_main(args): # curtin system-install [--target=/] [pkg, [pkg...]] if args.target is None: args.target = "/" exit_code = 0 try: util.install_packages( pkglist=args.packages, target=args.target, allow_daemons=args.allow_daemons) except util.ProcessExecutionError as e: LOG.warn("system install failed for %s: %s" % (args.packages, e)) exit_code = e.exit_code sys.exit(exit_code) CMD_ARGUMENTS = ( ((('--allow-daemons',), {'help': ('do not disable running of daemons during upgrade.'), 'action': 'store_true', 'default': False}), (('-t', '--target'), {'help': ('target root to upgrade. ' 'default is env[TARGET_MOUNT_POINT]'), 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ('packages', {'help': 'the list of packages to install', 'metavar': 'PACKAGES', 'action': 'store', 'nargs': '+'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, system_install_pkgs_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/system_upgrade.py000066400000000000000000000022001326565350400231350ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys import curtin.util as util from . import populate_one_subcmd from curtin.log import LOG def system_upgrade_main(args): # curtin system-upgrade [--target=/] if args.target is None: args.target = "/" exit_code = 0 try: util.system_upgrade(target=args.target, allow_daemons=args.allow_daemons) except util.ProcessExecutionError as e: LOG.warn("system upgrade failed: %s" % e) exit_code = e.exit_code sys.exit(exit_code) CMD_ARGUMENTS = ( ((('--allow-daemons',), {'help': ('do not disable running of daemons during upgrade.'), 'action': 'store_true', 'default': False}), (('-t', '--target'), {'help': ('target root to upgrade. ' 'default is env[TARGET_MOUNT_POINT]'), 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, system_upgrade_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/unmount.py000066400000000000000000000025531326565350400216220ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from curtin.log import LOG from curtin import util from . import populate_one_subcmd import os try: FileMissingError = FileNotFoundError except NameError: FileMissingError = IOError def unmount_main(args): """ run util.umount(target, recursive=True) """ if args.target is None: msg = "Missing target. 
Please provide target path parameter" raise ValueError(msg) if not os.path.exists(args.target): msg = "Cannot unmount target path %s: it does not exist" % args.target raise FileMissingError(msg) LOG.info("Unmounting devices from target path: %s", args.target) recursive_mode = not args.disable_recursive_mounts util.do_umount(args.target, recursive=recursive_mode) CMD_ARGUMENTS = ( (('-t', '--target'), {'help': ('Path to mountpoint to be unmounted.' 'The default is env variable "TARGET_MOUNT_POINT"'), 'metavar': 'TARGET', 'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT')}), (('-d', '--disable-recursive-mounts'), {'help': 'Disable unmounting recursively under target', 'default': False, 'action': 'store_true'}), ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, unmount_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/commands/version.py000066400000000000000000000006311326565350400215750ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import sys from .. import version from . import populate_one_subcmd def version_main(args): sys.stdout.write(version.version_string() + "\n") sys.exit(0) CMD_ARGUMENTS = ( (tuple()) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, version_main) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/config.py000066400000000000000000000065631326565350400175660ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import yaml import json ARCHIVE_HEADER = "#curtin-config-archive" ARCHIVE_TYPE = "text/curtin-config-archive" CONFIG_HEADER = "#curtin-config" CONFIG_TYPE = "text/curtin-config" try: # python2 _STRING_TYPES = (str, basestring, unicode) except NameError: # python3 _STRING_TYPES = (str,) def merge_config_fp(cfgin, fp): merge_config_str(cfgin, fp.read()) def merge_config_str(cfgin, cfgstr): cfg2 = yaml.safe_load(cfgstr) if not isinstance(cfg2, dict): raise TypeError("Failed reading config. 
not a dictionary: %s" % cfgstr) merge_config(cfgin, cfg2) def merge_config(cfg, cfg2): # update cfg by merging cfg2 over the top for k, v in cfg2.items(): if isinstance(v, dict) and isinstance(cfg.get(k, None), dict): merge_config(cfg[k], v) else: cfg[k] = v def merge_cmdarg(cfg, cmdarg, delim="/"): merge_config(cfg, cmdarg2cfg(cmdarg, delim)) def cmdarg2cfg(cmdarg, delim="/"): if '=' not in cmdarg: raise ValueError('no "=" in "%s"' % cmdarg) key, val = cmdarg.split("=", 1) cfg = {} cur = cfg is_json = False if key.startswith("json:"): is_json = True key = key[5:] items = key.split(delim) for item in items[:-1]: cur[item] = {} cur = cur[item] if is_json: try: val = json.loads(val) except (ValueError, TypeError): raise ValueError("setting of key '%s' had invalid json: %s" % (key, val)) # this would occur if 'json:={"topkey": "topval"}' if items[-1] == "": cfg = val else: cur[items[-1]] = val return cfg def load_config_archive(content): archive = yaml.load(content) config = {} for part in archive: if isinstance(part, (str,)): if part.startswith(ARCHIVE_HEADER): merge_config(config, load_config_archive(part)) elif part.startswith(CONFIG_HEADER): merge_config_str(config, part) elif isinstance(part, dict) and isinstance(part.get('content'), str): payload = part.get('content') if (part.get('type') == ARCHIVE_TYPE or payload.startswith(ARCHIVE_HEADER)): merge_config(config, load_config_archive(payload)) elif (part.get('type') == CONFIG_TYPE or payload.startswith(CONFIG_HEADER)): merge_config_str(config, payload) return config def load_config(cfg_file): with open(cfg_file, "r") as fp: content = fp.read() if not content.startswith(ARCHIVE_HEADER): return yaml.safe_load(content) else: return load_config_archive(content) def load_command_config(args, state): if hasattr(args, 'config') and args.config: return args.config else: # state 'config' points to a file with fully rendered config cfg_file = state.get('config') if not cfg_file: cfg = {} else: cfg = load_config(cfg_file) return cfg def dump_config(config): return yaml.dump(config, default_flow_style=False, indent=2) def value_as_boolean(value): false_values = (False, None, 0, '0', 'False', 'false', 'None', 'none', '') return value not in false_values # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/deps/000077500000000000000000000000001326565350400166705ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/curtin/deps/__init__.py000066400000000000000000000116431326565350400210060ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
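# Dependency discovery for curtin: REQUIRED_IMPORTS, REQUIRED_EXECUTABLES and
# REQUIRED_KERNEL_MODULES map what curtin needs at runtime to the distribution
# packages that provide it.  find_missing_deps() returns the list of
# MissingDeps errors and install_deps() installs the corresponding packages;
# curtin.deps.check and curtin.deps.install are the command-line entry points.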
import os import sys from curtin.util import ( ProcessExecutionError, get_architecture, install_packages, is_uefi_bootable, lsb_release, subp, which, ) REQUIRED_IMPORTS = [ # import string to execute, python2 package, python3 package ('import yaml', 'python-yaml', 'python3-yaml'), ] REQUIRED_EXECUTABLES = [ # executable in PATH, package ('file', 'file'), ('lvcreate', 'lvm2'), ('mdadm', 'mdadm'), ('mkfs.vfat', 'dosfstools'), ('mkfs.btrfs', 'btrfs-tools'), ('mkfs.ext4', 'e2fsprogs'), ('mkfs.xfs', 'xfsprogs'), ('partprobe', 'parted'), ('sgdisk', 'gdisk'), ('udevadm', 'udev'), ('make-bcache', 'bcache-tools'), ('iscsiadm', 'open-iscsi'), ] REQUIRED_KERNEL_MODULES = [ # kmod name ] if lsb_release()['codename'] == "precise": REQUIRED_IMPORTS.append( ('import oauth.oauth', 'python-oauth', None),) else: REQUIRED_IMPORTS.append( ('import oauthlib.oauth1', 'python-oauthlib', 'python3-oauthlib'),) # zfs is > trusty only if not lsb_release()['codename'] in ["precise", "trusty"]: REQUIRED_EXECUTABLES.append(('zfs', 'zfsutils-linux')) REQUIRED_KERNEL_MODULES.append('zfs') if not is_uefi_bootable() and 'arm' in get_architecture(): REQUIRED_EXECUTABLES.append(('flash-kernel', 'flash-kernel')) class MissingDeps(Exception): def __init__(self, message, deps): self.message = message if isinstance(deps, str) or deps is None: deps = [deps] self.deps = [d for d in deps if d is not None] self.fatal = None in deps def __str__(self): if self.fatal: if not len(self.deps): return self.message + " Unresolvable." return (self.message + " Unresolvable. Partially resolvable with packages: %s" % ' '.join(self.deps)) else: return self.message + " Install packages: %s" % ' '.join(self.deps) def check_import(imports, py2pkgs, py3pkgs, message=None): import_group = imports if isinstance(import_group, str): import_group = [import_group] for istr in import_group: try: exec(istr) return except ImportError: pass if not message: if isinstance(imports, str): message = "Failed '%s'." % imports else: message = "Unable to do any of %s." % import_group if sys.version_info[0] == 2: pkgs = py2pkgs else: pkgs = py3pkgs raise MissingDeps(message, pkgs) def check_executable(cmdname, pkg): if not which(cmdname): raise MissingDeps("Missing program '%s'." 
% cmdname, pkg) def check_executables(executables=None): if executables is None: executables = REQUIRED_EXECUTABLES mdeps = [] for exe, pkg in executables: try: check_executable(exe, pkg) except MissingDeps as e: mdeps.append(e) return mdeps def check_imports(imports=None): if imports is None: imports = REQUIRED_IMPORTS mdeps = [] for import_str, py2pkg, py3pkg in imports: try: check_import(import_str, py2pkg, py3pkg) except MissingDeps as e: mdeps.append(e) return mdeps def check_kernel_modules(modules=None): if modules is None: modules = REQUIRED_KERNEL_MODULES # if we're missing any modules, install the full # linux-image package for this environment for kmod in modules: try: subp(['modinfo', '--filename', kmod], capture=True) except ProcessExecutionError: kernel_pkg = 'linux-image-%s' % os.uname()[2] return [MissingDeps('missing kernel module %s' % kmod, kernel_pkg)] return [] def find_missing_deps(): return check_executables() + check_imports() + check_kernel_modules() def install_deps(verbosity=False, dry_run=False, allow_daemons=True): errors = find_missing_deps() if len(errors) == 0: if verbosity: sys.stderr.write("No missing dependencies\n") return 0 missing_pkgs = [] for e in errors: missing_pkgs += e.deps deps_string = ' '.join(sorted(missing_pkgs)) if dry_run: sys.stderr.write("Missing dependencies: %s\n" % deps_string) return 0 if os.geteuid() != 0: sys.stderr.write("Missing dependencies: %s\n" % deps_string) sys.stderr.write("Package installation is not possible as non-root.\n") return 2 if verbosity: sys.stderr.write("Installing %s\n" % deps_string) ret = 0 try: install_packages(missing_pkgs, allow_daemons=allow_daemons, aptopts=["--no-install-recommends"]) except ProcessExecutionError as e: sys.stderr.write("%s\n" % e) ret = e.exit_code return ret # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/deps/check.py000066400000000000000000000030341326565350400203170ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ The intent point of this module is that it can be called and exit success or fail, indicating that deps should be there. python -m curtin.deps.check [-v] """ import argparse import sys from . import find_missing_deps def debug(level, msg_level, msg): if level >= msg_level: if msg[-1] != "\n": msg += "\n" sys.stderr.write(msg) def main(): parser = argparse.ArgumentParser( prog='curtin-check-deps', description='check dependencies for curtin.') parser.add_argument('-v', '--verbose', action='count', default=0, dest='verbosity') args, extra = parser.parse_known_args(sys.argv[1:]) errors = find_missing_deps() if len(errors) == 0: # exit 0 means all dependencies are available. debug(args.verbosity, 1, "No missing dependencies") sys.exit(0) missing_pkgs = [] fatal = [] for e in errors: if e.fatal: fatal.append(e) debug(args.verbosity, 2, str(e)) missing_pkgs += e.deps if len(fatal): for e in fatal: debug(args.verbosity, 1, str(e)) sys.exit(1) debug(args.verbosity, 1, "Fix with:\n apt-get -qy install %s\n" % ' '.join(sorted(missing_pkgs))) # we exit higher with less deps needed. # exiting 99 means just 1 dep needed. sys.exit(100-len(missing_pkgs)) if __name__ == '__main__': main() # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/deps/install.py000066400000000000000000000016501326565350400207120ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
""" The intent of this module is that it can be called to install deps python -m curtin.deps.install [-v] """ import argparse import sys from . import install_deps def main(): parser = argparse.ArgumentParser( prog='curtin-install-deps', description='install dependencies for curtin.') parser.add_argument('-v', '--verbose', action='count', default=0, dest='verbosity') parser.add_argument('--dry-run', action='store_true', default=False) parser.add_argument('--no-allow-daemons', action='store_false', default=True) args = parser.parse_args(sys.argv[1:]) ret = install_deps(verbosity=args.verbosity, dry_run=args.dry_run, allow_daemons=True) sys.exit(ret) if __name__ == '__main__': main() # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/futil.py000066400000000000000000000057531326565350400174440ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import grp import pwd import os import warnings from .util import write_file, target_path from .log import LOG def chownbyid(fname, uid=None, gid=None): if uid in [None, -1] and gid in [None, -1]: return os.chown(fname, uid, gid) def decode_perms(perm, default=0o644): try: if perm is None: return default if isinstance(perm, (int, float)): # Just 'downcast' it (if a float) return int(perm) else: # Force to string and try octal conversion return int(str(perm), 8) except (TypeError, ValueError): return default def chownbyname(fname, user=None, group=None): uid = -1 gid = -1 try: if user: uid = pwd.getpwnam(user).pw_uid if group: gid = grp.getgrnam(group).gr_gid except KeyError as e: raise OSError("Unknown user or group: %s" % (e)) chownbyid(fname, uid, gid) def extract_usergroup(ug_pair): if not ug_pair: return (None, None) ug_parted = ug_pair.split(':', 1) u = ug_parted[0].strip() if len(ug_parted) == 2: g = ug_parted[1].strip() else: g = None if not u or u == "-1" or u.lower() == "none": u = None if not g or g == "-1" or g.lower() == "none": g = None return (u, g) def write_finfo(path, content, owner="-1:-1", perms="0644"): (u, g) = extract_usergroup(owner) omode = "w" if isinstance(content, bytes): omode = "wb" write_file(path, content, mode=decode_perms(perms), omode=omode) chownbyname(path, u, g) def write_files(files, base_dir=None): """Write files described in the dictionary 'files' paths are assumed under 'base_dir', which will default to '/'. A trailing '/' will be applied if not present. files is a dictionary where each entry has: path: /file1 content: (bytes or string) permissions: (optional, default=0644) owner: (optional, default -1:-1): string of 'uid:gid'.""" for (key, info) in files.items(): if not info.get('path'): LOG.warn("Warning, write_files[%s] had no 'path' entry", key) continue write_finfo(path=target_path(base_dir, info['path']), content=info.get('content', ''), owner=info.get('owner', "-1:-1"), perms=info.get('permissions', info.get('perms', "0644"))) def _legacy_write_files(cfg, base_dir=None): """Backwards compatibility for curthooks.write_files (LP: #1731709) It needs to work like: curthooks.write_files(cfg, target) cfg is a 'cfg' dictionary with a 'write_files' entry in it. """ warnings.warn( "write_files use from curtin.util is deprecated. " "Please use curtin.futil.write_files.", DeprecationWarning) return write_files(cfg.get('write_files', {}), base_dir=base_dir) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/gpg.py000066400000000000000000000037071326565350400170730ustar00rootroot00000000000000# This file is part of curtin. 
See LICENSE file for copyright and license info. """ gpg.py gpg related utilities to get raw keys data by their id """ from curtin import util from .log import LOG def export_armour(key): """Export gpg key, armoured key gets returned""" try: (armour, _) = util.subp(["gpg", "--export", "--armour", key], capture=True) except util.ProcessExecutionError as error: # debug, since it happens for any key not on the system initially LOG.debug('Failed to export armoured key "%s": %s', key, error) armour = None return armour def recv_key(key, keyserver, retries=None): """Receive gpg key from the specified keyserver""" LOG.debug('Receive gpg key "%s"', key) try: util.subp(["gpg", "--keyserver", keyserver, "--recv", key], capture=True, retries=retries) except util.ProcessExecutionError as error: raise ValueError(('Failed to import key "%s" ' 'from server "%s" - error %s') % (key, keyserver, error)) def delete_key(key): """Delete the specified key from the local gpg ring""" try: util.subp(["gpg", "--batch", "--yes", "--delete-keys", key], capture=True) except util.ProcessExecutionError as error: LOG.warn('Failed delete key "%s": %s', key, error) def getkeybyid(keyid, keyserver='keyserver.ubuntu.com', retries=None): """get gpg keyid from keyserver""" armour = export_armour(keyid) if not armour: try: recv_key(keyid, keyserver=keyserver, retries=retries) armour = export_armour(keyid) except ValueError: LOG.exception('Failed to obtain gpg key %s', keyid) raise finally: # delete just imported key to leave environment as it was before delete_key(keyid) return armour # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/log.py000066400000000000000000000031171326565350400170720ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import logging # Logging items for easy access getLogger = logging.getLogger CRITICAL = logging.CRITICAL FATAL = logging.FATAL ERROR = logging.ERROR WARNING = logging.WARNING WARN = logging.WARN INFO = logging.INFO DEBUG = logging.DEBUG NOTSET = logging.NOTSET class NullHandler(logging.Handler): def emit(self, record): pass def basicConfig(**kwargs): # basically like logging.basicConfig but only output for our logger if kwargs.get('filename'): handler = logging.FileHandler(filename=kwargs['filename'], mode=kwargs.get('filemode', 'a')) elif kwargs.get('stream'): handler = logging.StreamHandler(stream=kwargs['stream']) else: handler = NullHandler() if 'verbosity' in kwargs: level = ((logging.ERROR, logging.INFO, logging.DEBUG) [min(kwargs['verbosity'], 2)]) else: level = kwargs.get('level', logging.NOTSET) handler.setFormatter(logging.Formatter(fmt=kwargs.get('format'), datefmt=kwargs.get('datefmt'))) handler.setLevel(level) logging.getLogger().setLevel(level) logger = _getLogger() for h in list(logger.handlers): logger.removeHandler(h) logger.setLevel(level) logger.addHandler(handler) def _getLogger(name='curtin'): return logging.getLogger(name) if not logging.getLogger().handlers: logging.getLogger().addHandler(NullHandler()) LOG = _getLogger() # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/net/000077500000000000000000000000001326565350400165235ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/curtin/net/__init__.py000066400000000000000000000530731326565350400206440ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
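# Network rendering helpers: parse_deb_config() and parse_deb_config_data()
# read Debian /etc/network/interfaces files, parse_net_config_data() turns a
# curtin network config into a NetworkState dict, and render_interfaces(),
# render_persistent_net() and render_network_state() write the resulting ENI,
# udev rules and cloud-init disable stanza into the target.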
import errno import glob import os import re from curtin.log import LOG from curtin.udev import generate_udev_rule import curtin.util as util import curtin.config as config from . import network_state SYS_CLASS_NET = "/sys/class/net/" NET_CONFIG_OPTIONS = [ "address", "netmask", "broadcast", "network", "metric", "gateway", "pointtopoint", "media", "mtu", "hostname", "leasehours", "leasetime", "vendor", "client", "bootfile", "server", "hwaddr", "provider", "frame", "netnum", "endpoint", "local", "ttl", ] NET_CONFIG_COMMANDS = [ "pre-up", "up", "post-up", "down", "pre-down", "post-down", ] NET_CONFIG_BRIDGE_OPTIONS = [ "bridge_ageing", "bridge_bridgeprio", "bridge_fd", "bridge_gcinit", "bridge_hello", "bridge_maxage", "bridge_maxwait", "bridge_stp", ] def sys_dev_path(devname, path=""): return SYS_CLASS_NET + devname + "/" + path def read_sys_net(devname, path, translate=None, enoent=None, keyerror=None): try: contents = "" with open(sys_dev_path(devname, path), "r") as fp: contents = fp.read().strip() if translate is None: return contents try: return translate.get(contents) except KeyError: LOG.debug("found unexpected value '%s' in '%s/%s'", contents, devname, path) if keyerror is not None: return keyerror raise except OSError as e: if e.errno == errno.ENOENT and enoent is not None: return enoent raise def is_up(devname): # The linux kernel says to consider devices in 'unknown' # operstate as up for the purposes of network configuration. See # Documentation/networking/operstates.txt in the kernel source. translate = {'up': True, 'unknown': True, 'down': False} return read_sys_net(devname, "operstate", enoent=False, keyerror=False, translate=translate) def is_wireless(devname): return os.path.exists(sys_dev_path(devname, "wireless")) def is_connected(devname): # is_connected isn't really as simple as that. 2 is # 'physically connected'. 3 is 'not connected'. but a wlan interface will # always show 3. try: iflink = read_sys_net(devname, "iflink", enoent=False) if iflink == "2": return True if not is_wireless(devname): return False LOG.debug("'%s' is wireless, basing 'connected' on carrier", devname) return read_sys_net(devname, "carrier", enoent=False, keyerror=False, translate={'0': False, '1': True}) except IOError as e: if e.errno == errno.EINVAL: return False raise def is_physical(devname): return os.path.exists(sys_dev_path(devname, "device")) def is_present(devname): return os.path.exists(sys_dev_path(devname)) def get_devicelist(): return os.listdir(SYS_CLASS_NET) class ParserError(Exception): """Raised when parser has issue parsing the interfaces file.""" def parse_deb_config_data(ifaces, contents, src_dir, src_path): """Parses the file contents, placing result into ifaces. '_source_path' is added to every dictionary entry to define which file the configration information came from. 
:param ifaces: interface dictionary :param contents: contents of interfaces file :param src_dir: directory interfaces file was located :param src_path: file path the `contents` was read """ currif = None for line in contents.splitlines(): line = line.strip() if line.startswith('#'): continue split = line.split(' ') option = split[0] if option == "source-directory": parsed_src_dir = split[1] if not parsed_src_dir.startswith("/"): parsed_src_dir = os.path.join(src_dir, parsed_src_dir) for expanded_path in glob.glob(parsed_src_dir): dir_contents = os.listdir(expanded_path) dir_contents = [ os.path.join(expanded_path, path) for path in dir_contents if (os.path.isfile(os.path.join(expanded_path, path)) and re.match("^[a-zA-Z0-9_-]+$", path) is not None) ] for entry in dir_contents: with open(entry, "r") as fp: src_data = fp.read().strip() abs_entry = os.path.abspath(entry) parse_deb_config_data( ifaces, src_data, os.path.dirname(abs_entry), abs_entry) elif option == "source": new_src_path = split[1] if not new_src_path.startswith("/"): new_src_path = os.path.join(src_dir, new_src_path) for expanded_path in glob.glob(new_src_path): with open(expanded_path, "r") as fp: src_data = fp.read().strip() abs_path = os.path.abspath(expanded_path) parse_deb_config_data( ifaces, src_data, os.path.dirname(abs_path), abs_path) elif option == "auto": for iface in split[1:]: if iface not in ifaces: ifaces[iface] = { # Include the source path this interface was found in. "_source_path": src_path } ifaces[iface]['auto'] = True ifaces[iface]['control'] = 'auto' elif option.startswith('allow-'): for iface in split[1:]: if iface not in ifaces: ifaces[iface] = { # Include the source path this interface was found in. "_source_path": src_path } ifaces[iface]['auto'] = False ifaces[iface]['control'] = option.split('allow-')[-1] elif option == "iface": iface, family, method = split[1:4] if iface not in ifaces: ifaces[iface] = { # Include the source path this interface was found in. 
"_source_path": src_path } # man (5) interfaces says we can have multiple iface stanzas # all options are combined ifaces[iface]['family'] = family ifaces[iface]['method'] = method currif = iface elif option == "hwaddress": ifaces[currif]['hwaddress'] = split[1] elif option in NET_CONFIG_OPTIONS: ifaces[currif][option] = split[1] elif option in NET_CONFIG_COMMANDS: if option not in ifaces[currif]: ifaces[currif][option] = [] ifaces[currif][option].append(' '.join(split[1:])) elif option.startswith('dns-'): if 'dns' not in ifaces[currif]: ifaces[currif]['dns'] = {} if option == 'dns-search': ifaces[currif]['dns']['search'] = [] for domain in split[1:]: ifaces[currif]['dns']['search'].append(domain) elif option == 'dns-nameservers': ifaces[currif]['dns']['nameservers'] = [] for server in split[1:]: ifaces[currif]['dns']['nameservers'].append(server) elif option.startswith('bridge_'): if 'bridge' not in ifaces[currif]: ifaces[currif]['bridge'] = {} if option in NET_CONFIG_BRIDGE_OPTIONS: bridge_option = option.replace('bridge_', '', 1) ifaces[currif]['bridge'][bridge_option] = split[1] elif option == "bridge_ports": ifaces[currif]['bridge']['ports'] = [] for iface in split[1:]: ifaces[currif]['bridge']['ports'].append(iface) elif option == "bridge_hw" and split[1].lower() == "mac": ifaces[currif]['bridge']['mac'] = split[2] elif option == "bridge_pathcost": if 'pathcost' not in ifaces[currif]['bridge']: ifaces[currif]['bridge']['pathcost'] = {} ifaces[currif]['bridge']['pathcost'][split[1]] = split[2] elif option == "bridge_portprio": if 'portprio' not in ifaces[currif]['bridge']: ifaces[currif]['bridge']['portprio'] = {} ifaces[currif]['bridge']['portprio'][split[1]] = split[2] elif option.startswith('bond-'): if 'bond' not in ifaces[currif]: ifaces[currif]['bond'] = {} bond_option = option.replace('bond-', '', 1) ifaces[currif]['bond'][bond_option] = split[1] for iface in ifaces.keys(): if 'auto' not in ifaces[iface]: ifaces[iface]['auto'] = False def parse_deb_config(path): """Parses a debian network configuration file.""" ifaces = {} with open(path, "r") as fp: contents = fp.read().strip() abs_path = os.path.abspath(path) parse_deb_config_data( ifaces, contents, os.path.dirname(abs_path), abs_path) return ifaces def parse_net_config_data(net_config): """Parses the config, returns NetworkState dictionary :param net_config: curtin network config dict """ state = None if 'version' in net_config and 'config' in net_config: ns = network_state.NetworkState(version=net_config.get('version'), config=net_config.get('config')) ns.parse_config() state = ns.network_state return state def parse_net_config(path): """Parses a curtin network configuration file and return network state""" ns = None net_config = config.load_config(path) if 'network' in net_config: ns = parse_net_config_data(net_config.get('network')) return ns def render_persistent_net(network_state): ''' Given state, emit udev rules to map mac to ifname ''' content = "# Autogenerated by curtin\n" interfaces = network_state.get('interfaces') for iface in interfaces.values(): if iface['type'] == 'physical': ifname = iface.get('name', None) mac = iface.get('mac_address', '') # len(macaddr) == 2 * 6 + 5 == 17 if ifname and mac and len(mac) == 17: content += generate_udev_rule(ifname, mac.lower()) return content # TODO: switch valid_map based on mode inet/inet6 def iface_add_subnet(iface, subnet): content = "" valid_map = [ 'address', 'netmask', 'broadcast', 'metric', 'gateway', 'pointopoint', 'mtu', 'scope', 'dns_search', 'dns_nameservers', ] 
for key, value in subnet.items(): if value and key in valid_map: if type(value) == list: value = " ".join(value) if '_' in key: key = key.replace('_', '-') content += " {} {}\n".format(key, value) return content # TODO: switch to valid_map for attrs def iface_add_attrs(iface, index): # If the index is non-zero, this is an alias interface. Alias interfaces # represent additional interface addresses, and should not have additional # attributes. (extra attributes here are almost always either incorrect, # or are applied to the parent interface.) So if this is an alias, stop # right here. if index != 0: return "" content = "" ignore_map = [ 'control', 'index', 'inet', 'mode', 'name', 'subnets', 'type', ] # These values require repetitive printing # of the key for each value multiline_keys = [ 'bridge_pathcost', 'bridge_portprio', 'bridge_waitport', ] def add_entry(key, value): if type(value) == list: value = " ".join([str(v) for v in value]) return " {} {}\n".format(key, value) if iface['type'] not in ['bond', 'bridge', 'vlan']: ignore_map.append('mac_address') for key, value in iface.items(): if value and key not in ignore_map: if key in multiline_keys: for v in value: content += add_entry(key, v) else: content += add_entry(key, value) return content def render_route(route, indent=""): """When rendering routes for an iface, in some cases applying a route may result in the route command returning non-zero which produces some confusing output for users manually using ifup/ifdown[1]. To that end, we will optionally include an '|| true' postfix to each route line allowing users to work with ifup/ifdown without using --force option. We may at somepoint not want to emit this additional postfix, and add a 'strict' flag to this function. When called with strict=True, then we will not append the postfix. 1. http://askubuntu.com/questions/168033/ how-to-set-static-routes-in-ubuntu-server """ content = [] up = indent + "post-up route add" down = indent + "pre-down route del" or_true = " || true" mapping = { 'network': '-net', 'netmask': 'netmask', 'gateway': 'gw', 'metric': 'metric', } if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0': default_gw = " default gw %s" % route['gateway'] content.append(up + default_gw + or_true) content.append(down + default_gw + or_true) elif route['network'] == '::' and route['netmask'] == 0: # ipv6! default_gw = " -A inet6 default gw %s" % route['gateway'] content.append(up + default_gw + or_true) content.append(down + default_gw + or_true) else: route_line = "" for k in ['network', 'netmask', 'gateway', 'metric']: if k in route: route_line += " %s %s" % (mapping[k], route[k]) content.append(up + route_line + or_true) content.append(down + route_line + or_true) return "\n".join(content) + "\n" def iface_start_entry(iface): fullname = iface['name'] control = iface['control'] if control == "auto": cverb = "auto" elif control in ("hotplug",): cverb = "allow-" + control else: cverb = "# control-" + control subst = iface.copy() subst.update({'fullname': fullname, 'cverb': cverb}) return ("{cverb} {fullname}\n" "iface {fullname} {inet} {mode}\n").format(**subst) def subnet_is_ipv6(subnet): # 'static6' or 'dhcp6' if subnet['type'].endswith('6'): # This is a request for DHCPv6. 
return True elif subnet['type'] == 'static' and ":" in subnet['address']: return True return False def render_interfaces(network_state): ''' Given state, emit etc/network/interfaces content ''' content = "" interfaces = network_state.get('interfaces') ''' Apply a sort order to ensure that we write out the physical interfaces first; this is critical for bonding ''' order = { 'physical': 0, 'bond': 1, 'bridge': 2, 'vlan': 3, } content += "auto lo\niface lo inet loopback\n" for dnskey, value in network_state.get('dns', {}).items(): if len(value): content += " dns-{} {}\n".format(dnskey, " ".join(value)) for iface in sorted(interfaces.values(), key=lambda k: (order[k['type']], k['name'])): if content[-2:] != "\n\n": content += "\n" subnets = iface.get('subnets', {}) if subnets: for index, subnet in enumerate(subnets): if content[-2:] != "\n\n": content += "\n" iface['index'] = index iface['mode'] = subnet['type'] iface['control'] = subnet.get('control', 'auto') subnet_inet = 'inet' if subnet_is_ipv6(subnet): subnet_inet += '6' iface['inet'] = subnet_inet if subnet['type'].startswith('dhcp'): iface['mode'] = 'dhcp' # do not emit multiple 'auto $IFACE' lines as older (precise) # ifupdown complains if "auto %s\n" % (iface['name']) in content: iface['control'] = 'alias' content += iface_start_entry(iface) content += iface_add_subnet(iface, subnet) content += iface_add_attrs(iface, index) for route in subnet.get('routes', []): content += render_route(route, indent=" ") + '\n' else: # ifenslave docs say to auto the slave devices if 'bond-master' in iface or 'bond-slaves' in iface: content += "auto {name}\n".format(**iface) content += "iface {name} {inet} {mode}\n".format(**iface) content += iface_add_attrs(iface, 0) for route in network_state.get('routes'): content += render_route(route) # global replacements until v2 format content = content.replace('mac_address', 'hwaddress ether') # Play nice with others and source eni config files content += "\nsource /etc/network/interfaces.d/*.cfg\n" return content def netconfig_passthrough_available(target, feature='NETWORK_CONFIG_V2'): """ Determine if curtin can pass v2 network config to in target cloud-init """ LOG.debug('Checking in-target cloud-init for feature: %s', feature) with util.ChrootableTarget(target) as in_chroot: cloudinit = util.which('cloud-init', target=target) if not cloudinit: LOG.warning('Target does not have cloud-init installed') return False available = False try: out, _ = in_chroot.subp([cloudinit, 'features'], capture=True) available = feature in out.splitlines() except util.ProcessExecutionError: # we explicitly don't dump the exception as this triggers # vmtest failures when parsing the installation log file LOG.warning("Failed to probe cloudinit features") return False LOG.debug('cloud-init feature %s available? 
%s', feature, available) return available def render_netconfig_passthrough(target, netconfig=None): """ Extract original network config and pass it through to cloud-init in target """ cc = 'etc/cloud/cloud.cfg.d/50-curtin-networking.cfg' if not isinstance(netconfig, dict): raise ValueError('Network config must be a dictionary') if 'network' not in netconfig: raise ValueError("Network config must contain the key 'network'") content = config.dump_config(netconfig) cc_passthrough = os.path.sep.join((target, cc,)) LOG.info('Writing network config to %s: %s', cc, cc_passthrough) util.write_file(cc_passthrough, content=content) def render_network_state(target, network_state): LOG.debug("rendering eni from netconfig") eni = 'etc/network/interfaces' netrules = 'etc/udev/rules.d/70-persistent-net.rules' cc = 'etc/cloud/cloud.cfg.d/curtin-disable-cloudinit-networking.cfg' eni = os.path.sep.join((target, eni,)) LOG.info('Writing ' + eni) util.write_file(eni, content=render_interfaces(network_state)) netrules = os.path.sep.join((target, netrules,)) LOG.info('Writing ' + netrules) util.write_file(netrules, content=render_persistent_net(network_state)) cc_disable = os.path.sep.join((target, cc,)) LOG.info('Writing ' + cc_disable) util.write_file(cc_disable, content='network: {config: disabled}\n') def get_interface_mac(ifname): """Returns the string value of an interface's MAC Address""" return read_sys_net(ifname, "address", enoent=False) def network_config_required_packages(network_config, mapping=None): if network_config is None: network_config = {} if not isinstance(network_config, dict): raise ValueError('Invalid network configuration. Must be a dict') if mapping is None: mapping = {} if not isinstance(mapping, dict): raise ValueError('Invalid network mapping. Must be a dict') # allow top-level 'network' key if 'network' in network_config: network_config = network_config.get('network') # v1 has 'config' key and uses type: devtype elements if 'config' in network_config: dev_configs = set(device['type'] for device in network_config['config']) else: # v2 has no config key dev_configs = set(cfgtype for (cfgtype, cfg) in network_config.items() if cfgtype not in ['version']) needed_packages = [] for dev_type in dev_configs: if dev_type in mapping: needed_packages.extend(mapping[dev_type]) return needed_packages def detect_required_packages_mapping(): """Return a dictionary providing a versioned configuration which maps network configuration elements to the packages which are required for functionality. """ mapping = { 1: { 'handler': network_config_required_packages, 'mapping': { 'bond': ['ifenslave'], 'bridge': ['bridge-utils'], 'vlan': ['vlan']}, }, 2: { 'handler': network_config_required_packages, 'mapping': { 'bonds': ['ifenslave'], 'bridges': ['bridge-utils'], 'vlans': ['vlan']} }, } return mapping # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/net/network_state.py000066400000000000000000000337531326565350400220010ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
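# NetworkState normalizes a version 1 curtin network config ('physical',
# 'bond', 'bridge', 'vlan', 'nameserver' and 'route' entries) into the
# network_state dict consumed by curtin.net.render_interfaces().  Typical use,
# as in the __main__ harness at the bottom of this file:
#     ns = NetworkState(version=cfg['version'], config=cfg['config'])
#     ns.parse_config()
#     eni = net.render_interfaces(ns.network_state)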
from curtin.log import LOG import curtin.config as curtin_config NETWORK_STATE_VERSION = 1 NETWORK_STATE_REQUIRED_KEYS = { 1: ['version', 'config', 'network_state'], } def from_state_file(state_file): network_state = None state = curtin_config.load_config(state_file) network_state = NetworkState() network_state.load(state) return network_state class NetworkState: def __init__(self, version=NETWORK_STATE_VERSION, config=None): self.version = version self.config = config self.network_state = { 'interfaces': {}, 'routes': [], 'dns': { 'nameservers': [], 'search': [], } } self.command_handlers = self.get_command_handlers() def get_command_handlers(self): METHOD_PREFIX = 'handle_' methods = filter(lambda x: callable(getattr(self, x)) and x.startswith(METHOD_PREFIX), dir(self)) handlers = {} for m in methods: key = m.replace(METHOD_PREFIX, '') handlers[key] = getattr(self, m) return handlers def dump(self): state = { 'version': self.version, 'config': self.config, 'network_state': self.network_state, } return curtin_config.dump_config(state) def load(self, state): if 'version' not in state: LOG.error('Invalid state, missing version field') raise Exception('Invalid state, missing version field') required_keys = NETWORK_STATE_REQUIRED_KEYS[state['version']] if not self.valid_command(state, required_keys): msg = 'Invalid state, missing keys: %s' % (required_keys) LOG.error(msg) raise Exception(msg) # v1 - direct attr mapping, except version for key in [k for k in required_keys if k not in ['version']]: setattr(self, key, state[key]) self.command_handlers = self.get_command_handlers() def dump_network_state(self): return curtin_config.dump_config(self.network_state) def parse_config(self): # rebuild network state for command in self.config: handler = self.command_handlers.get(command['type']) handler(command) def valid_command(self, command, required_keys): if not required_keys: return False found_keys = [key for key in command.keys() if key in required_keys] return len(found_keys) == len(required_keys) def handle_physical(self, command): ''' command = { 'type': 'physical', 'mac_address': 'c0:d6:9f:2c:e8:80', 'name': 'eth0', 'subnets': [ {'type': 'dhcp4'} ] } ''' required_keys = [ 'name', ] if not self.valid_command(command, required_keys): LOG.warn('Skipping Invalid command: %s', command) LOG.debug(self.dump_network_state()) return interfaces = self.network_state.get('interfaces') iface = interfaces.get(command['name'], {}) for param, val in command.get('params', {}).items(): iface.update({param: val}) # convert subnet ipv6 netmask to cidr as needed subnets = command.get('subnets') if subnets: for subnet in subnets: if subnet['type'] == 'static': if 'netmask' in subnet and ':' in subnet['address']: subnet['netmask'] = mask2cidr(subnet['netmask']) for route in subnet.get('routes', []): if 'netmask' in route: route['netmask'] = mask2cidr(route['netmask']) iface.update({ 'name': command.get('name'), 'type': command.get('type'), 'mac_address': command.get('mac_address'), 'inet': 'inet', 'mode': 'manual', 'mtu': command.get('mtu'), 'address': None, 'gateway': None, 'subnets': subnets, }) self.network_state['interfaces'].update({command.get('name'): iface}) self.dump_network_state() def handle_vlan(self, command): ''' auto eth0.222 iface eth0.222 inet static address 10.10.10.1 netmask 255.255.255.0 hwaddress ether BC:76:4E:06:96:B3 vlan-raw-device eth0 ''' required_keys = [ 'name', 'vlan_link', 'vlan_id', ] if not self.valid_command(command, required_keys): print('Skipping Invalid command: 
{}'.format(command)) print(self.dump_network_state()) return interfaces = self.network_state.get('interfaces') self.handle_physical(command) iface = interfaces.get(command.get('name'), {}) iface['vlan-raw-device'] = command.get('vlan_link') iface['vlan_id'] = command.get('vlan_id') interfaces.update({iface['name']: iface}) def handle_bond(self, command): ''' #/etc/network/interfaces auto eth0 iface eth0 inet manual bond-master bond0 bond-mode 802.3ad auto eth1 iface eth1 inet manual bond-master bond0 bond-mode 802.3ad auto bond0 iface bond0 inet static address 192.168.0.10 gateway 192.168.0.1 netmask 255.255.255.0 bond-slaves none bond-mode 802.3ad bond-miimon 100 bond-downdelay 200 bond-updelay 200 bond-lacp-rate 4 ''' required_keys = [ 'name', 'bond_interfaces', 'params', ] if not self.valid_command(command, required_keys): print('Skipping Invalid command: {}'.format(command)) print(self.dump_network_state()) return self.handle_physical(command) interfaces = self.network_state.get('interfaces') iface = interfaces.get(command.get('name'), {}) for param, val in command.get('params').items(): iface.update({param: val}) iface.update({'bond-slaves': 'none'}) self.network_state['interfaces'].update({iface['name']: iface}) # handle bond slaves for ifname in command.get('bond_interfaces'): if ifname not in interfaces: cmd = { 'name': ifname, 'type': 'bond', } # inject placeholder self.handle_physical(cmd) interfaces = self.network_state.get('interfaces') bond_if = interfaces.get(ifname) bond_if['bond-master'] = command.get('name') # copy in bond config into slave for param, val in command.get('params').items(): bond_if.update({param: val}) self.network_state['interfaces'].update({ifname: bond_if}) def handle_bridge(self, command): ''' auto br0 iface br0 inet static address 10.10.10.1 netmask 255.255.255.0 bridge_ports eth1 eth2 bridge_ageing: 250 bridge_bridgeprio: 22 bridge_fd: 1 bridge_gcint: 2 bridge_hello: 1 bridge_hw: 00:11:22:33:44:55 bridge_maxage: 10 bridge_maxwait: 0 bridge_pathcost: eth1 50 bridge_pathcost: eth2 75 bridge_portprio: eth1 64 bridge_portprio: eth2 192 bridge_stp: 'off' bridge_waitport: 1 eth1 bridge_waitport: 2 eth2 bridge_params = [ "bridge_ports", "bridge_ageing", "bridge_bridgeprio", "bridge_fd", "bridge_gcint", "bridge_hello", "bridge_hw", "bridge_maxage", "bridge_maxwait", "bridge_pathcost", "bridge_portprio", "bridge_stp", "bridge_waitport", ] ''' required_keys = [ 'name', 'bridge_interfaces', 'params', ] if not self.valid_command(command, required_keys): LOG.warn('Skipping Invalid command: %s', command) LOG.warn(self.dump_network_state()) return # find one of the bridge port ifaces to get mac_addr # handle bridge_slaves interfaces = self.network_state.get('interfaces') for ifname in command.get('bridge_interfaces'): if ifname in interfaces: continue cmd = { 'name': ifname, } # inject placeholder self.handle_physical(cmd) interfaces = self.network_state.get('interfaces') self.handle_physical(command) iface = interfaces.get(command.get('name'), {}) iface['bridge_ports'] = command['bridge_interfaces'] for param, val in command.get('params', {}).items(): iface.update({param: val}) interfaces.update({iface['name']: iface}) def handle_nameserver(self, command): required_keys = [ 'address', ] if not self.valid_command(command, required_keys): LOG.warn('Skipping Invalid command: %s', command) LOG.warn(self.dump_network_state()) return dns = self.network_state.get('dns') if 'address' in command: addrs = command['address'] if not type(addrs) == list: addrs = [addrs] for addr 
in addrs: dns['nameservers'].append(addr) if 'search' in command: paths = command['search'] if not isinstance(paths, list): paths = [paths] for path in paths: dns['search'].append(path) def handle_route(self, command): required_keys = [ 'destination', ] if not self.valid_command(command, required_keys): LOG.warn('Skipping Invalid command: %s', command) LOG.warn(self.dump_network_state()) return routes = self.network_state.get('routes') network, cidr = command['destination'].split("/") netmask = cidr2mask(int(cidr)) route = { 'network': network, 'netmask': netmask, 'gateway': command.get('gateway'), 'metric': command.get('metric'), } routes.append(route) def cidr2mask(cidr): mask = [0, 0, 0, 0] for i in list(range(0, cidr)): idx = int(i / 8) mask[idx] = mask[idx] + (1 << (7 - i % 8)) return ".".join([str(x) for x in mask]) def ipv4mask2cidr(mask): if '.' not in mask: return mask return sum([bin(int(x)).count('1') for x in mask.split('.')]) def ipv6mask2cidr(mask): if ':' not in mask: return mask bitCount = [0, 0x8000, 0xc000, 0xe000, 0xf000, 0xf800, 0xfc00, 0xfe00, 0xff00, 0xff80, 0xffc0, 0xffe0, 0xfff0, 0xfff8, 0xfffc, 0xfffe, 0xffff] cidr = 0 for word in mask.split(':'): if not word or int(word, 16) == 0: break cidr += bitCount.index(int(word, 16)) return cidr def mask2cidr(mask): if ':' in mask: return ipv6mask2cidr(mask) elif '.' in mask: return ipv4mask2cidr(mask) else: return mask if __name__ == '__main__': import sys import random from curtin import net def load_config(nc): version = nc.get('version') config = nc.get('config') return (version, config) def test_parse(network_config): (version, config) = load_config(network_config) ns1 = NetworkState(version=version, config=config) ns1.parse_config() random.shuffle(config) ns2 = NetworkState(version=version, config=config) ns2.parse_config() print("----NS1-----") print(ns1.dump_network_state()) print() print("----NS2-----") print(ns2.dump_network_state()) print("NS1 == NS2 ?=> {}".format( ns1.network_state == ns2.network_state)) eni = net.render_interfaces(ns2.network_state) print(eni) udev_rules = net.render_persistent_net(ns2.network_state) print(udev_rules) def test_dump_and_load(network_config): print("Loading network_config into NetworkState") (version, config) = load_config(network_config) ns1 = NetworkState(version=version, config=config) ns1.parse_config() print("Dumping state to file") ns1_dump = ns1.dump() ns1_state = "/tmp/ns1.state" with open(ns1_state, "w+") as f: f.write(ns1_dump) print("Loading state from file") ns2 = from_state_file(ns1_state) print("NS1 == NS2 ?=> {}".format( ns1.network_state == ns2.network_state)) def test_output(network_config): (version, config) = load_config(network_config) ns1 = NetworkState(version=version, config=config) ns1.parse_config() random.shuffle(config) ns2 = NetworkState(version=version, config=config) ns2.parse_config() print("NS1 == NS2 ?=> {}".format( ns1.network_state == ns2.network_state)) eni_1 = net.render_interfaces(ns1.network_state) eni_2 = net.render_interfaces(ns2.network_state) print(eni_1) print(eni_2) print("eni_1 == eni_2 ?=> {}".format( eni_1 == eni_2)) y = curtin_config.load_config(sys.argv[1]) network_config = y.get('network') test_parse(network_config) test_dump_and_load(network_config) test_output(network_config) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/pack.py000066400000000000000000000162071326565350400172330ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
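# Self-extracting archive support: pack() bundles the curtin tree, its helpers
# and any extra files into a shell archive (via helpers/shell-archive) that
# re-executes curtin with the given command on the target, and pack_install()
# builds such an archive preconfigured to run 'curtin install' with the
# provided configs.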
import errno import os import shutil import tempfile from . import util from . import version CALL_ENTRY_POINT_SH_HEADER = """ #!/bin/sh PY3OR2_MAIN="%(ep_main)s" PY3OR2_MCHECK="%(ep_mcheck)s" PY3OR2_PYTHONS=${PY3OR2_PYTHONS:-"%(python_exe_list)s"} PYTHON=${PY3OR2_PYTHON} PY3OR2_DEBUG=${PY3OR2_DEBUG:-0} """.strip() CALL_ENTRY_POINT_SH_BODY = """ debug() { [ "${PY3OR2_DEBUG}" != "0" ] || return 0 echo "$@" 1>&2 } fail() { echo "$@" 1>&2; exit 1; } # if $0 is is bin/ and dirname($0)/../module exists, then prepend PYTHONPATH mydir=${0%/*} updir=${mydir%/*} if [ "${mydir#${updir}/}" = "bin" -a -d "$updir/${PY3OR2_MCHECK%%.*}" ]; then updir=$(cd "$mydir/.." && pwd) case "$PYTHONPATH" in *:$updir:*|$updir:*|*:$updir) :;; *) export PYTHONPATH="$updir${PYTHONPATH:+:$PYTHONPATH}" debug "adding '$updir' to PYTHONPATH" ;; esac fi if [ ! -n "$PYTHON" ]; then first_exe="" oifs="$IFS"; IFS=":" best=0 best_exe="" [ "${PY3OR2_DEBUG}" = "0" ] && _v="" || _v="-v" for p in $PY3OR2_PYTHONS; do command -v "$p" >/dev/null 2>&1 || { debug "$p: not in path"; continue; } [ -z "$PY3OR2_MCHECK" ] && PYTHON=$p && break out=$($p -m "$PY3OR2_MCHECK" $_v -- "$@" 2>&1) && PYTHON="$p" && { debug "$p is good [$p -m $PY3OR2_MCHECK $_v -- $*]"; break; } ret=$? debug "$p [$ret]: $out" # exit code of 1 is unuseable [ $ret -eq 1 ] && continue [ -n "$first_exe" ] || first_exe="$p" # higher non-zero exit values indicate more plausible usability [ $best -lt $ret ] && best_exe="$p" && best=$ret && debug "current best: $best_exe" done IFS="$oifs" [ -z "$best_exe" -a -n "$first_exe" ] && best_exe="$first_exe" [ -n "$PYTHON" ] || PYTHON="$best_exe" [ -n "$PYTHON" ] || fail "no availble python? [PY3OR2_DEBUG=1 for more info]" fi debug "executing: $PYTHON -m \\"$PY3OR2_MAIN\\" $*" exec $PYTHON -m "$PY3OR2_MAIN" "$@" """ def write_exe_wrapper(entrypoint, path=None, interpreter=None, deps_check_entry=None, mode=0o755): if not interpreter: interpreter = "python3:python" subs = { 'ep_main': entrypoint, 'ep_mcheck': deps_check_entry if deps_check_entry else "", 'python_exe_list': interpreter, } content = '\n'.join( (CALL_ENTRY_POINT_SH_HEADER % subs, CALL_ENTRY_POINT_SH_BODY)) if path is not None: with open(path, "w") as fp: fp.write(content) if mode is not None: os.chmod(path, mode) else: return content def pack(fdout=None, command=None, paths=None, copy_files=None, add_files=None): # write to 'fdout' a self extracting file to execute 'command' # if fdout is None, return content that would be written to fdout. # add_files is a list of (archive_path, file_content) tuples. # copy_files is a list of (archive_path, file_path) tuples. 
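    # As an illustration only (hypothetical values), a caller might pass:
    #   add_files=[('configs/config-000.cfg', 'showtrace: true\n')]
    #   copy_files=[('scripts/late.sh', '/tmp/late.sh')]
    # so that the given content or copied file ends up at the named
    # archive-relative path inside the packed tree.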
if paths is None: paths = util.get_paths() if add_files is None: add_files = [] if copy_files is None: copy_files = [] tmpd = None try: tmpd = tempfile.mkdtemp() exdir = os.path.join(tmpd, 'curtin') os.mkdir(exdir) bindir = os.path.join(exdir, 'bin') os.mkdir(bindir) def not_dot_py(input_d, flist): # include .py files and directories other than __pycache__ return [f for f in flist if not (f.endswith(".py") or (f != "__pycache__" and os.path.isdir(os.path.join(input_d, f))))] shutil.copytree(paths['helpers'], os.path.join(exdir, "helpers")) shutil.copytree(paths['lib'], os.path.join(exdir, "curtin"), ignore=not_dot_py) write_exe_wrapper(entrypoint='curtin.commands.main', path=os.path.join(bindir, 'curtin'), deps_check_entry="curtin.deps.check") packed_version = version.version_string() ver_file = os.path.join(exdir, 'curtin', 'version.py') util.write_file( ver_file, util.load_file(ver_file).replace("@@PACKED_VERSION@@", packed_version)) for archpath, filepath in copy_files: target = os.path.abspath(os.path.join(exdir, archpath)) if not target.startswith(exdir + os.path.sep): raise ValueError("'%s' resulted in path outside archive" % archpath) try: os.mkdir(os.path.dirname(target)) except OSError as e: if e.errno == errno.EEXIST: pass if os.path.isfile(filepath): shutil.copy(filepath, target) else: shutil.copytree(filepath, target) for archpath, content in add_files: target = os.path.abspath(os.path.join(exdir, archpath)) if not target.startswith(exdir + os.path.sep): raise ValueError("'%s' resulted in path outside archive" % archpath) try: os.mkdir(os.path.dirname(target)) except OSError as e: if e.errno == errno.EEXIST: pass with open(target, "w") as fp: fp.write(content) archcmd = os.path.join(paths['helpers'], 'shell-archive') archout = None args = [archcmd] if fdout is not None: archout = os.path.join(tmpd, 'output') args.append("--output=%s" % archout) args.extend(["--bin-path=_pwd_/bin", "--python-path=_pwd_", exdir, "curtin", "--"]) if command is not None: args.extend(command) (out, _err) = util.subp(args, capture=True) if fdout is None: if isinstance(out, bytes): out = out.decode() return out else: with open(archout, "r") as fp: while True: buf = fp.read(4096) fdout.write(buf) if len(buf) != 4096: break finally: if tmpd: shutil.rmtree(tmpd) def pack_install(fdout=None, configs=None, paths=None, add_files=None, copy_files=None, args=None, install_deps=True): if configs is None: configs = [] if add_files is None: add_files = [] if args is None: args = [] if install_deps: dep_flags = ["--install-deps"] else: dep_flags = [] command = ["curtin"] + dep_flags + ["install"] my_files = [] for n, config in enumerate(configs): apath = "configs/config-%03d.cfg" % n my_files.append((apath, config),) command.append("--config=%s" % apath) command += args return pack(fdout=fdout, command=command, paths=paths, add_files=add_files + my_files, copy_files=copy_files) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/reporter/000077500000000000000000000000001326565350400175775ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/curtin/reporter/__init__.py000066400000000000000000000022301326565350400217050ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """Reporter Abstract Base Class.""" from .registry import DictRegistry from .handlers import available_handlers DEFAULT_CONFIG = { 'logging': {'type': 'log'}, } def update_configuration(config): """Update the instanciated_handler_registry. 
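    As an illustration (hypothetical handler name and endpoint), a config of
    ``{'webhook': {'type': 'webhook', 'endpoint': 'http://host/report'}}``
    instantiates a WebHookHandler and registers it under the name 'webhook'.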
:param config: The dictionary containing changes to apply. If a key is given with a False-ish value, the registered handler matching that name will be unregistered. """ for handler_name, handler_config in config.items(): if not handler_config: instantiated_handler_registry.unregister_item( handler_name, force=True) continue handler_config = handler_config.copy() cls = available_handlers.registered_items[handler_config.pop('type')] instantiated_handler_registry.unregister_item(handler_name) instance = cls(**handler_config) instantiated_handler_registry.register_item(handler_name, instance) instantiated_handler_registry = DictRegistry() update_configuration(DEFAULT_CONFIG) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/reporter/events.py000066400000000000000000000205101326565350400214530ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ cloud-init reporting framework The reporting framework is intended to allow all parts of cloud-init to report events in a structured manner. """ import base64 import os.path import time from . import instantiated_handler_registry FINISH_EVENT_TYPE = 'finish' START_EVENT_TYPE = 'start' RESULT_EVENT_TYPE = 'result' DEFAULT_EVENT_ORIGIN = 'curtin' class _nameset(set): def __getattr__(self, name): if name in self: return name raise AttributeError("%s not a valid value" % name) status = _nameset(("SUCCESS", "WARN", "FAIL")) class ReportingEvent(object): """Encapsulation of event formatting.""" def __init__(self, event_type, name, description, origin=DEFAULT_EVENT_ORIGIN, timestamp=None, level=None): self.event_type = event_type self.name = name self.description = description self.origin = origin if timestamp is None: timestamp = time.time() self.timestamp = timestamp if level is None: level = "INFO" self.level = level def as_string(self): """The event represented as a string.""" return '{0}: {1}: {2}'.format( self.event_type, self.name, self.description) def as_dict(self): """The event represented as a dictionary.""" return {'name': self.name, 'description': self.description, 'event_type': self.event_type, 'origin': self.origin, 'timestamp': self.timestamp, 'level': self.level} class FinishReportingEvent(ReportingEvent): def __init__(self, name, description, result=status.SUCCESS, post_files=None, level=None): super(FinishReportingEvent, self).__init__( FINISH_EVENT_TYPE, name, description, level=level) self.result = result if post_files is None: post_files = [] self.post_files = post_files if result not in status: raise ValueError("Invalid result: %s" % result) if self.result == status.WARN: self.level = "WARN" elif self.result == status.FAIL: self.level = "ERROR" def as_string(self): return '{0}: {1}: {2}: {3}'.format( self.event_type, self.name, self.result, self.description) def as_dict(self): """The event represented as json friendly.""" data = super(FinishReportingEvent, self).as_dict() data['result'] = self.result if self.post_files: data['files'] = _collect_file_info(self.post_files) return data def report_event(event): """Report an event to all registered event handlers. This should generally be called via one of the other functions in the reporting module. :param event_type: The type of the event; this should be a constant from the reporting module. 
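    As an illustration, the convenience wrappers in this module build the
    event and call this function, e.g.
    ``report_start_event('cmd-install', 'installing')`` (hypothetical event
    name and description).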
""" for _, handler in instantiated_handler_registry.registered_items.items(): handler.publish_event(event) def report_finish_event(event_name, event_description, result=status.SUCCESS, post_files=None, level=None): """Report a "finish" event. See :py:func:`.report_event` for parameter details. """ event = FinishReportingEvent(event_name, event_description, result, post_files=post_files, level=level) return report_event(event) def report_start_event(event_name, event_description, level=None): """Report a "start" event. :param event_name: The name of the event; this should be a topic which events would share (e.g. it will be the same for start and finish events). :param event_description: A human-readable description of the event that has occurred. """ event = ReportingEvent(START_EVENT_TYPE, event_name, event_description, level=level) return report_event(event) class ReportEventStack(object): """Context Manager for using :py:func:`report_event` This enables calling :py:func:`report_start_event` and :py:func:`report_finish_event` through a context manager. :param name: the name of the event :param description: the event's description, passed on to :py:func:`report_start_event` :param message: the description to use for the finish event. defaults to :param:description. :param parent: :type parent: :py:class:ReportEventStack or None The parent of this event. The parent is populated with results of all its children. The name used in reporting is / :param reporting_enabled: Indicates if reporting events should be generated. If not provided, defaults to the parent's value, or True if no parent is provided. :param result_on_exception: The result value to set if an exception is caught. default value is FAIL. :param level: The priority level of the enter and exit messages sent. Default value is INFO. 
""" def __init__(self, name, description, message=None, parent=None, reporting_enabled=None, result_on_exception=status.FAIL, post_files=None, level="INFO"): self.parent = parent self.name = name self.description = description self.message = message self.result_on_exception = result_on_exception self.result = status.SUCCESS self.level = level if post_files is None: post_files = [] self.post_files = post_files # use parents reporting value if not provided if reporting_enabled is None: if parent: reporting_enabled = parent.reporting_enabled else: reporting_enabled = True self.reporting_enabled = reporting_enabled if parent: self.fullname = '/'.join((parent.fullname, name,)) else: self.fullname = self.name self.children = {} def __repr__(self): return ("ReportEventStack(%s, %s, reporting_enabled=%s)" % (self.name, self.description, self.reporting_enabled)) def __enter__(self): self.result = status.SUCCESS if self.reporting_enabled: report_start_event(self.fullname, self.description, level=self.level) if self.parent: self.parent.children[self.name] = (None, None) return self def _childrens_finish_info(self): for cand_result in (status.FAIL, status.WARN): for name, (value, msg) in self.children.items(): if value == cand_result: return (value, self.message) return (self.result, self.message) @property def result(self): return self._result @result.setter def result(self, value): if value not in status: raise ValueError("'%s' not a valid result" % value) self._result = value @property def message(self): if self._message is not None: return self._message return self.description @message.setter def message(self, value): self._message = value def _finish_info(self, exc): # return tuple of description, and value # explicitly handle sys.exit(0) as not an error if exc and not(isinstance(exc, SystemExit) and exc.code == 0): return (self.result_on_exception, self.message) return self._childrens_finish_info() def __exit__(self, exc_type, exc_value, traceback): (result, msg) = self._finish_info(exc_value) if self.parent: self.parent.children[self.name] = (result, msg) if self.reporting_enabled: report_finish_event(self.fullname, msg, result, post_files=self.post_files, level=self.level) def _collect_file_info(files): if not files: return None ret = [] for fname in files: if not os.path.isfile(fname): content = None else: with open(fname, "rb") as fp: content = base64.b64encode(fp.read()).decode() ret.append({'path': fname, 'content': content, 'encoding': 'base64'}) return ret # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/reporter/handlers.py000066400000000000000000000102461326565350400217540ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import abc from .registry import DictRegistry from .. import url_helper from .. import log as logging LOG = logging.getLogger(__name__) class ReportingHandler(object): """Base class for report handlers. Implement :meth:`~publish_event` for controlling what the handler does with an event. 
""" @abc.abstractmethod def publish_event(self, event): """Publish an event to the ``INFO`` log level.""" class LogHandler(ReportingHandler): """Publishes events to the curtin log at the ``DEBUG`` log level.""" def __init__(self, level="DEBUG"): super(LogHandler, self).__init__() if isinstance(level, int): pass else: input_level = level try: level = getattr(logging, level.upper()) except Exception: LOG.warn("invalid level '%s', using WARN", input_level) level = logging.WARN self.level = level def publish_event(self, event): """Publish an event to the ``DEBUG`` log level.""" logger = logging.getLogger( '.'.join(['curtin', 'reporting', event.event_type, event.name])) logger.log(self.level, event.as_string()) class PrintHandler(ReportingHandler): """Print the event as a string.""" def publish_event(self, event): print(event.as_string()) class WebHookHandler(ReportingHandler): def __init__(self, endpoint, consumer_key=None, token_key=None, token_secret=None, consumer_secret=None, timeout=None, retries=None, level="DEBUG"): super(WebHookHandler, self).__init__() self.oauth_helper = url_helper.OauthUrlHelper( consumer_key=consumer_key, token_key=token_key, token_secret=token_secret, consumer_secret=consumer_secret) self.endpoint = endpoint self.timeout = timeout self.retries = retries try: self.level = getattr(logging, level.upper()) except Exception: LOG.warn("invalid level '%s', using WARN", level) self.level = logging.WARN self.headers = {'Content-Type': 'application/json'} def publish_event(self, event): try: return self.oauth_helper.geturl( url=self.endpoint, data=event.as_dict(), headers=self.headers, retries=self.retries) except Exception as e: LOG.warn("failed posting event: %s [%s]" % (event.as_string(), e)) class JournaldHandler(ReportingHandler): def __init__(self, level="DEBUG", identifier="curtin_event"): super(JournaldHandler, self).__init__() if isinstance(level, int): pass else: input_level = level try: level = getattr(logging, level.upper()) except Exception: LOG.warn("invalid level '%s', using WARN", input_level) level = logging.WARN self.level = level self.identifier = identifier def publish_event(self, event): # Ubuntu older than precise will not have python-systemd installed. try: from systemd import journal except ImportError: raise level = str(getattr(journal, "LOG_" + event.level, journal.LOG_DEBUG)) extra = {} if hasattr(event, 'result'): extra['CURTIN_RESULT'] = event.result journal.send( event.as_string(), PRIORITY=level, SYSLOG_IDENTIFIER=self.identifier, CURTIN_EVENT_TYPE=event.event_type, CURTIN_MESSAGE=event.description, CURTIN_NAME=event.name, **extra ) available_handlers = DictRegistry() available_handlers.register_item('log', LogHandler) available_handlers.register_item('print', PrintHandler) available_handlers.register_item('webhook', WebHookHandler) # only add journald handler on systemd systems try: available_handlers.register_item('journald', JournaldHandler) except ImportError: print('journald report handler not supported; no systemd module') # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/reporter/legacy/000077500000000000000000000000001326565350400210435ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/curtin/reporter/legacy/__init__.py000066400000000000000000000027241326565350400231610ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
from curtin.util import ( try_import_module, ) from abc import ( ABCMeta, abstractmethod, ) from curtin.log import LOG class BaseReporter: """Skeleton for a report.""" __metaclass__ = ABCMeta @abstractmethod def report_success(self): """Report installation success.""" @abstractmethod def report_failure(self, failure): """Report installation failure.""" class EmptyReporter(BaseReporter): def report_success(self): """Empty.""" def report_failure(self, failure): """Empty.""" class LoadReporterException(Exception): """Raise exception if desired reporter not loaded.""" pass def load_reporter(config): """Loads and returns reporter instance stored in config file.""" reporter = config.get('reporter') if reporter is None: LOG.info("'reporter' not found in config file.") return EmptyReporter() name, options = reporter.popitem() module = try_import_module('curtin.reporter.legacy.%s' % name) if module is None: LOG.error( "Module for %s reporter could not load." % name) return EmptyReporter() try: return module.load_factory(options) except LoadReporterException: LOG.error( "Failed loading %s reporter with %s" % (name, options)) return EmptyReporter() # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/reporter/legacy/maas.py000066400000000000000000000101461326565350400223400ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from curtin import url_helper from . import (BaseReporter, LoadReporterException) import mimetypes import os.path import random import string import sys class MAASReporter(BaseReporter): def __init__(self, config): """Load config dictionary and initialize object.""" self.url = config['url'] self.urlhelper = url_helper.OauthUrlHelper( consumer_key=config.get('consumer_key'), token_key=config.get('token_key'), token_secret=config.get('token_secret'), consumer_secret='', skew_data_file="/run/oauth_skew.json") self.files = [] self.retries = config.get('retries', [1, 1, 2, 4, 8, 16, 32]) def report_success(self): """Report installation success.""" status = "OK" message = "Installation succeeded." self.report(status, message, files=self.files) def report_failure(self, message): """Report installation failure.""" status = "FAILED" self.report(status, message, files=self.files) def encode_multipart_data(self, data, files): """Create a MIME multipart payload from L{data} and L{files}. @param data: A mapping of names (ASCII strings) to data (byte string). @param files: A mapping of names (ASCII strings) to file objects ready to be read. @return: A 2-tuple of C{(body, headers)}, where C{body} is a a byte string and C{headers} is a dict of headers to add to the enclosing request in which this payload will travel. 
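        An illustrative call (hypothetical file object)::

            data, headers = self.encode_multipart_data(
                {'status': 'OK'}, {'install.log': open('install.log')})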
""" boundary = self._random_string(30) lines = [] for name in data: lines.extend(self._encode_field(name, data[name], boundary)) for name in files: lines.extend(self._encode_file(name, files[name], boundary)) lines.extend(('--%s--' % boundary, '')) body = '\r\n'.join(lines) headers = { 'content-type': 'multipart/form-data; boundary=' + boundary, 'content-length': "%d" % len(body), } return body, headers def report(self, status, message=None, files=None): """Send the report.""" params = {} params['status'] = status if message is not None: params['error'] = message if files is None: files = [] install_files = {} for fpath in files: install_files[os.path.basename(fpath)] = open(fpath, "r") data, headers = self.encode_multipart_data(params, install_files) msg = "" if not isinstance(data, bytes): data = data.encode() try: payload = self.urlhelper.geturl( self.url, data=data, headers=headers, retries=self.retries) if payload != b'OK': raise TypeError("Unexpected result from call: %s" % payload) else: msg = "Success" except url_helper.UrlError as exc: msg = str(exc) except Exception as exc: raise exc sys.stderr.write("%s\n" % msg) def _encode_field(self, field_name, data, boundary): return ( '--' + boundary, 'Content-Disposition: form-data; name="%s"' % field_name, '', str(data), ) def _encode_file(self, name, fileObj, boundary): return ( '--' + boundary, 'Content-Disposition: form-data; name="%s"; filename="%s"' % (name, name), 'Content-Type: %s' % self._get_content_type(name), '', fileObj.read(), ) def _random_string(self, length): return ''.join(random.choice(string.ascii_letters) for ii in range(length + 1)) def _get_content_type(self, filename): return mimetypes.guess_type(filename)[0] or 'application/octet-stream' def load_factory(options): try: return MAASReporter(options) except Exception: raise LoadReporterException # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/reporter/registry.py000066400000000000000000000017711326565350400220270ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import copy class DictRegistry(object): """A simple registry for a mapping of objects.""" def __init__(self): self.reset() def reset(self): self._items = {} def register_item(self, key, item): """Add item to the registry.""" if key in self._items: raise ValueError( 'Item already registered with key {0}'.format(key)) self._items[key] = item def unregister_item(self, key, force=True): """Remove item from the registry.""" if key in self._items: del self._items[key] elif not force: raise KeyError("%s: key not present to unregister" % key) @property def registered_items(self): """All the items that have been registered. This cannot be used to modify the contents of the registry. """ return copy.copy(self._items) # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/swap.py000066400000000000000000000063701326565350400172670ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import resource from .log import LOG from . import util def suggested_swapsize(memsize=None, maxsize=None, fsys=None): # make a suggestion on the size of swap for this system. 
if memsize is None: memsize = util.get_meminfo()['total'] GB = 2 ** 30 sugg_max = 8 * GB if fsys is None and maxsize is None: # set max to 8GB default if no filesystem given maxsize = sugg_max elif fsys: avail = util.get_fs_use_info(fsys)[1] if maxsize is None: # set to 25% of filesystem space maxsize = min(int(avail / 4), sugg_max) elif maxsize > ((avail * .9)): # set to 90% of available disk space maxsize = int(avail * .9) formulas = [ # < 1G: swap = double memory (1 * GB, lambda x: x * 2), # < 2G: swap = 2G (2 * GB, lambda x: 2 * GB), # < 4G: swap = memory (4 * GB, lambda x: x), # < 16G: 4G (16 * GB, lambda x: 4 * GB), # < 64G: 1/2 M up to max (64 * GB, lambda x: x / 2), ] size = None for top, func in formulas: if memsize <= top: size = min(func(memsize), maxsize) if size < (memsize / 2) and size < 4 * GB: return 0 return size return maxsize def setup_swapfile(target, fstab=None, swapfile=None, size=None, maxsize=None): if size is None: size = suggested_swapsize(fsys=target, maxsize=maxsize) if size == 0: LOG.debug("Not creating swap: suggested size was 0") return if swapfile is None: swapfile = "/swap.img" if not swapfile.startswith("/"): swapfile = "/" + swapfile mbsize = str(int(size / (2 ** 20))) msg = "creating swap file '%s' of %sMB" % (swapfile, mbsize) fpath = os.path.sep.join([target, swapfile]) try: util.ensure_dir(os.path.dirname(fpath)) with util.LogTimer(LOG.debug, msg): util.subp( ['sh', '-c', ('rm -f "$1" && umask 0066 && ' '{ fallocate -l "${2}M" "$1" || ' ' dd if=/dev/zero "of=$1" bs=1M "count=$2"; } && ' 'mkswap "$1" || { r=$?; rm -f "$1"; exit $r; }'), 'setup_swap', fpath, mbsize]) except Exception: LOG.warn("failed %s" % msg) raise if fstab is None: return try: line = '\t'.join([swapfile, 'none', 'swap', 'sw', '0', '0']) with open(fstab, "a") as fp: fp.write(line + "\n") except Exception: os.unlink(fpath) raise def is_swap_device(path): """ Determine if specified device is a swap device. Linux swap devices write a magic header value on kernel PAGESIZE - 10. https://github.com/torvalds/linux/blob/master/include/linux/swap.h#L111 """ LOG.debug('Checking if %s is a swap device', path) swap_magic_offset = resource.getpagesize() - 10 magic = util.load_file(path, read_len=10, offset=swap_magic_offset, decode=False) LOG.debug('Found swap magic: %s' % magic) return magic in [b'SWAPSPACE2', b'SWAP-SPACE'] # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/udev.py000066400000000000000000000034611326565350400172560ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os from curtin import util def compose_udev_equality(key, value): """Return a udev comparison clause, like `ACTION=="add"`.""" assert key == key.upper() return '%s=="%s"' % (key, value) def compose_udev_attr_equality(attribute, value): """Return a udev attribute comparison clause, like `ATTR{type}=="1"`.""" assert attribute == attribute.lower() return 'ATTR{%s}=="%s"' % (attribute, value) def compose_udev_setting(key, value): """Return a udev assignment clause, like `NAME="eth0"`.""" assert key == key.upper() return '%s="%s"' % (key, value) def generate_udev_rule(interface, mac): """Return a udev rule to set the name of network interface with `mac`. 
The rule ends up as a single line looking something like: SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}="ff:ee:dd:cc:bb:aa", NAME="eth0" """ rule = ', '.join([ compose_udev_equality('SUBSYSTEM', 'net'), compose_udev_equality('ACTION', 'add'), compose_udev_equality('DRIVERS', '?*'), compose_udev_attr_equality('address', mac), compose_udev_setting('NAME', interface), ]) return '%s\n' % rule def udevadm_settle(exists=None, timeout=None): settle_cmd = ["udevadm", "settle"] if exists: # skip the settle if the requested path already exists if os.path.exists(exists): return settle_cmd.extend(['--exit-if-exists=%s' % exists]) if timeout: settle_cmd.extend(['--timeout=%s' % timeout]) util.subp(settle_cmd) def udevadm_trigger(devices): if devices is None: devices = [] util.subp(['udevadm', 'trigger'] + list(devices)) udevadm_settle() # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/url_helper.py000066400000000000000000000356351326565350400204640ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from email.utils import parsedate import json import os import socket import sys import time import uuid from functools import partial from curtin import version try: from urllib import request as _u_re # pylint: disable=no-name-in-module from urllib import error as _u_e # pylint: disable=no-name-in-module from urllib.parse import urlparse # pylint: disable=no-name-in-module urllib_request = _u_re urllib_error = _u_e except ImportError: # python2 import urllib2 as urllib_request import urllib2 as urllib_error from urlparse import urlparse # pylint: disable=import-error from .log import LOG error = urllib_error DEFAULT_HEADERS = {'User-Agent': 'Curtin/' + version.version_string()} class _ReRaisedException(Exception): exc = None """this exists only as an exception type that was re-raised by an exception_cb, so code can know to handle it specially""" def __init__(self, exc): self.exc = exc class UrlReader(object): fp = None def __init__(self, url, headers=None, data=None): headers = _get_headers(headers) self.url = url try: req = urllib_request.Request(url=url, data=data, headers=headers) self.fp = urllib_request.urlopen(req) except urllib_error.HTTPError as exc: raise UrlError(exc, code=exc.code, headers=exc.headers, url=url, reason=exc.reason) except Exception as exc: raise UrlError(exc, code=None, headers=None, url=url, reason="unknown") self.info = self.fp.info() self.size = self.info.get('content-length', -1) def read(self, buflen): try: return self.fp.read(buflen) except urllib_error.HTTPError as exc: raise UrlError(exc, code=exc.code, headers=exc.headers, url=self.url, reason=exc.reason) except Exception as exc: raise UrlError(exc, code=None, headers=None, url=self.url, reason="unknown") def close(self): if not self.fp: return try: self.fp.close() finally: self.fp = None def __enter__(self): return self def __exit__(self, etype, value, trace): self.close() def download(url, path, reporthook=None, data=None): """Download url to path. reporthook is compatible with py3 urllib.request.urlretrieve. 
urlretrieve does not exist in py2.""" buflen = 8192 wfp = open(path, "wb") try: buf = None blocknum = 0 fsize = 0 start = time.time() with UrlReader(url) as rfp: if reporthook: reporthook(blocknum, buflen, rfp.size) while True: buf = rfp.read(buflen) if not buf: break blocknum += 1 if reporthook: reporthook(blocknum, buflen, rfp.size) wfp.write(buf) fsize += len(buf) timedelta = time.time() - start LOG.debug("Downloaded %d bytes from %s to %s in %.2fs (%.2fMbps)", fsize, url, path, timedelta, fsize / timedelta / 1024 / 1024) return path, rfp.info finally: wfp.close() def get_maas_version(endpoint): """ Attempt to return the MAAS version via api calls to the specified endpoint. MAAS endpoint url looks like this: http://10.245.168.2/MAAS/metadata/status/node-f0462064-20f6-11e5-990a-d4bed9a84493 We need the MAAS_URL, which is http://10.245.168.2 Returns a maas version dictionary: {'subversion': '16.04.1', 'capabilities': ['networks-management', 'static-ipaddresses', 'ipv6-deployment-ubuntu', 'devices-management', 'storage-deployment-ubuntu', 'network-deployment-ubuntu', 'bridging-interface-ubuntu', 'bridging-automatic-ubuntu'], 'version': '2.1.5+bzr5596-0ubuntu1' } """ # https://docs.ubuntu.com/maas/devel/en/api indicates that # we leave 1.0 in here for maas 1.9 endpoints MAAS_API_SUPPORTED_VERSIONS = ["1.0", "2.0"] try: parsed = urlparse(endpoint) except AttributeError as e: LOG.warn('Failed to parse endpoint URL: %s', e) return None maas_host = "%s://%s" % (parsed.scheme, parsed.netloc) maas_api_version_url = "%s/MAAS/api/version/" % (maas_host) try: result = geturl(maas_api_version_url) except UrlError as e: LOG.warn('Failed to query MAAS API version URL: %s', e) return None api_version = result.decode('utf-8') if api_version not in MAAS_API_SUPPORTED_VERSIONS: LOG.warn('Endpoint "%s" API version "%s" not in MAAS supported' 'versions: "%s"', endpoint, api_version, MAAS_API_SUPPORTED_VERSIONS) return None maas_version_url = "%s/MAAS/api/%s/version/" % (maas_host, api_version) maas_version = None try: result = geturl(maas_version_url) maas_version = json.loads(result.decode('utf-8')) except UrlError as e: LOG.warn('Failed to query MAAS version via URL: %s', e) except (ValueError, TypeError): LOG.warn('Failed to load MAAS version result: %s', result) return maas_version def _get_headers(headers=None): allheaders = DEFAULT_HEADERS.copy() if headers is not None: allheaders.update(headers) return allheaders def _geturl(url, headers=None, headers_cb=None, exception_cb=None, data=None): headers = _get_headers(headers) if headers_cb: headers.update(headers_cb(url)) if data and isinstance(data, dict): data = json.dumps(data).encode() try: req = urllib_request.Request(url=url, data=data, headers=headers) r = urllib_request.urlopen(req).read() # python2, we want to return bytes, which is what python3 does if isinstance(r, str): return r.decode() return r except urllib_error.HTTPError as exc: myexc = UrlError(exc, code=exc.code, headers=exc.headers, url=url, reason=exc.reason) except Exception as exc: myexc = UrlError(exc, code=None, headers=None, url=url, reason="unknown") if exception_cb: try: exception_cb(myexc) except Exception as e: myexc = _ReRaisedException(e) raise myexc def geturl(url, headers=None, headers_cb=None, exception_cb=None, data=None, retries=None, log=LOG.warn): """return the content of the url in binary_type. 
(py3: bytes, py2: str)""" if retries is None: retries = [] curexc = None for trynum, naptime in enumerate(retries): try: return _geturl(url=url, headers=headers, headers_cb=headers_cb, exception_cb=exception_cb, data=data) except _ReRaisedException as e: raise curexc.exc except Exception as e: curexc = e if log: msg = ("try %d of request to %s failed. sleeping %d: %s" % (naptime, url, naptime, curexc)) log(msg) time.sleep(naptime) try: return _geturl(url=url, headers=headers, headers_cb=headers_cb, exception_cb=exception_cb, data=data) except _ReRaisedException as e: raise e.exc class UrlError(IOError): def __init__(self, cause, code=None, headers=None, url=None, reason=None): IOError.__init__(self, str(cause)) self.cause = cause self.code = code self.headers = headers if self.headers is None: self.headers = {} self.url = url self.reason = reason def __str__(self): if isinstance(self.cause, urllib_error.HTTPError): msg = "http error: %s" % self.cause.code elif isinstance(self.cause, urllib_error.URLError): msg = "url error: %s" % self.cause.reason elif isinstance(self.cause, socket.timeout): msg = "socket timeout: %s" % self.cause else: msg = "Unknown Exception: %s" % self.cause return "[%s] " % self.url + msg class OauthUrlHelper(object): def __init__(self, consumer_key=None, token_key=None, token_secret=None, consumer_secret=None, skew_data_file="/run/oauth_skew.json"): self.consumer_key = consumer_key self.consumer_secret = consumer_secret or "" self.token_key = token_key self.token_secret = token_secret self.skew_data_file = skew_data_file self._do_oauth = True self.skew_change_limit = 5 required = (self.token_key, self.token_secret, self.consumer_key) if not any(required): self._do_oauth = False elif not all(required): raise ValueError("all or none of token_key, token_secret, or " "consumer_key can be set") old = self.read_skew_file() self.skew_data = old or {} def __str__(self): fields = ['consumer_key', 'consumer_secret', 'token_key', 'token_secret'] masked = fields def r(name): if not hasattr(self, name): rval = "_unset" else: val = getattr(self, name) if val is None: rval = "None" elif name in masked: rval = '"%s"' % ("*" * len(val)) else: rval = '"%s"' % val return '%s=%s' % (name, rval) return ("OauthUrlHelper(" + ','.join([r(f) for f in fields]) + ")") def read_skew_file(self): if self.skew_data_file and os.path.isfile(self.skew_data_file): with open(self.skew_data_file, mode="r") as fp: return json.load(fp) return None def update_skew_file(self, host, value): # this is not atomic if not self.skew_data_file: return cur = self.read_skew_file() if cur is None: cur = {} cur[host] = value with open(self.skew_data_file, mode="w") as fp: fp.write(json.dumps(cur)) def exception_cb(self, exception): if not (isinstance(exception, UrlError) and (exception.code == 403 or exception.code == 401)): return if 'date' not in exception.headers: LOG.warn("Missing header 'date' in %s response", exception.code) return date = exception.headers['date'] try: remote_time = time.mktime(parsedate(date)) except Exception as e: LOG.warn("Failed to convert datetime '%s': %s", date, e) return skew = int(remote_time - time.time()) host = urlparse(exception.url).netloc old_skew = self.skew_data.get(host, 0) if abs(old_skew - skew) > self.skew_change_limit: self.update_skew_file(host, skew) LOG.warn("Setting oauth clockskew for %s to %d", host, skew) self.skew_data[host] = skew return def headers_cb(self, url): if not self._do_oauth: return {} host = urlparse(url).netloc clockskew = None if self.skew_data and 
host in self.skew_data: clockskew = self.skew_data[host] return oauth_headers( url=url, consumer_key=self.consumer_key, token_key=self.token_key, token_secret=self.token_secret, consumer_secret=self.consumer_secret, clockskew=clockskew) def _wrapped(self, wrapped_func, args, kwargs): kwargs['headers_cb'] = partial( self._headers_cb, kwargs.get('headers_cb')) kwargs['exception_cb'] = partial( self._exception_cb, kwargs.get('exception_cb')) return wrapped_func(*args, **kwargs) def geturl(self, *args, **kwargs): return self._wrapped(geturl, args, kwargs) def _exception_cb(self, extra_exception_cb, exception): ret = None try: if extra_exception_cb: ret = extra_exception_cb(exception) finally: self.exception_cb(exception) return ret def _headers_cb(self, extra_headers_cb, url): headers = {} if extra_headers_cb: headers = extra_headers_cb(url) headers.update(self.headers_cb(url)) return headers def _oauth_headers_none(url, consumer_key, token_key, token_secret, consumer_secret, clockskew=0): """oauth_headers implementation when no oauth is available""" if not any([token_key, token_secret, consumer_key]): return {} pkg = "'python3-oauthlib'" if sys.version_info[0] == 2: pkg = "'python-oauthlib' or 'python-oauth'" raise ValueError( "Oauth was necessary but no oauth library is available. " "Please install package " + pkg + ".") def _oauth_headers_oauth(url, consumer_key, token_key, token_secret, consumer_secret, clockskew=0): """Build OAuth headers with oauth using given credentials.""" consumer = oauth.OAuthConsumer(consumer_key, consumer_secret) token = oauth.OAuthToken(token_key, token_secret) if clockskew is None: clockskew = 0 timestamp = int(time.time()) + clockskew params = { 'oauth_version': "1.0", 'oauth_nonce': uuid.uuid4().hex, 'oauth_timestamp': timestamp, 'oauth_token': token.key, 'oauth_consumer_key': consumer.key, } req = oauth.OAuthRequest(http_url=url, parameters=params) req.sign_request( oauth.OAuthSignatureMethod_PLAINTEXT(), consumer, token) return(req.to_header()) def _oauth_headers_oauthlib(url, consumer_key, token_key, token_secret, consumer_secret, clockskew=0): """Build OAuth headers with oauthlib using given credentials.""" if clockskew is None: clockskew = 0 timestamp = int(time.time()) + clockskew client = oauth1.Client( consumer_key, client_secret=consumer_secret, resource_owner_key=token_key, resource_owner_secret=token_secret, signature_method=oauth1.SIGNATURE_PLAINTEXT, timestamp=str(timestamp)) uri, signed_headers, body = client.sign(url) return signed_headers oauth_headers = _oauth_headers_none try: # prefer to use oauthlib. (python-oauthlib) import oauthlib.oauth1 as oauth1 oauth_headers = _oauth_headers_oauthlib except ImportError: # no oauthlib was present, try using oauth (python-oauth) try: import oauth.oauth as oauth oauth_headers = _oauth_headers_oauth except ImportError: # we have no oauth libraries available, use oauth_headers_none pass # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/util.py000066400000000000000000001315761326565350400173010ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
import argparse import collections from contextlib import contextmanager import errno import glob import json import os import platform import re import shlex import shutil import socket import subprocess import stat import sys import tempfile import time # avoid the dependency to python3-six as used in cloud-init try: from urlparse import urlparse except ImportError: # python3 # avoid triggering pylint, https://github.com/PyCQA/pylint/issues/769 # pylint:disable=import-error,no-name-in-module from urllib.parse import urlparse try: string_types = (basestring,) except NameError: string_types = (str,) try: numeric_types = (int, float, long) except NameError: # python3 does not have a long type. numeric_types = (int, float) from .log import LOG _INSTALLED_HELPERS_PATH = 'usr/lib/curtin/helpers' _INSTALLED_MAIN = 'usr/bin/curtin' _LSB_RELEASE = {} _USES_SYSTEMD = None _HAS_UNSHARE_PID = None _DNS_REDIRECT_IP = None # matcher used in template rendering functions BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)') def _subp(args, data=None, rcs=None, env=None, capture=False, combine_capture=False, shell=False, logstring=False, decode="replace", target=None, cwd=None, log_captured=False, unshare_pid=None): if rcs is None: rcs = [0] devnull_fp = None tpath = target_path(target) chroot_args = [] if tpath == "/" else ['chroot', target] sh_args = ['sh', '-c'] if shell else [] if isinstance(args, string_types): args = [args] try: unshare_args = _get_unshare_pid_args(unshare_pid, tpath) except RuntimeError as e: raise RuntimeError("Unable to unshare pid (cmd=%s): %s" % (args, e)) args = unshare_args + chroot_args + sh_args + list(args) if not logstring: LOG.debug( "Running command %s with allowed return codes %s (capture=%s)", args, rcs, 'combine' if combine_capture else capture) else: LOG.debug(("Running hidden command to protect sensitive " "input/output logstring: %s"), logstring) try: stdin = None stdout = None stderr = None if capture: stdout = subprocess.PIPE stderr = subprocess.PIPE if combine_capture: stdout = subprocess.PIPE stderr = subprocess.STDOUT if data is None: devnull_fp = open(os.devnull) stdin = devnull_fp else: stdin = subprocess.PIPE sp = subprocess.Popen(args, stdout=stdout, stderr=stderr, stdin=stdin, env=env, shell=False, cwd=cwd) # communicate in python2 returns str, python3 returns bytes (out, err) = sp.communicate(data) # Just ensure blank instead of none. if not out and capture: out = b'' if not err and capture: err = b'' if decode: def ldecode(data, m='utf-8'): if not isinstance(data, bytes): return data return data.decode(m, errors=decode) out = ldecode(out) err = ldecode(err) except OSError as e: raise ProcessExecutionError(cmd=args, reason=e) finally: if devnull_fp: devnull_fp.close() if capture and log_captured: LOG.debug("Command returned stdout=%s, stderr=%s", out, err) rc = sp.returncode # pylint: disable=E1101 if rc not in rcs: raise ProcessExecutionError(stdout=out, stderr=err, exit_code=rc, cmd=args) return (out, err) def _has_unshare_pid(): global _HAS_UNSHARE_PID if _HAS_UNSHARE_PID is not None: return _HAS_UNSHARE_PID if not which('unshare'): _HAS_UNSHARE_PID = False return False out, err = subp(["unshare", "--help"], capture=True, decode=False, unshare_pid=False) joined = b'\n'.join([out, err]) _HAS_UNSHARE_PID = b'--fork' in joined and b'--pid' in joined return _HAS_UNSHARE_PID def _get_unshare_pid_args(unshare_pid=None, target=None, euid=None): """Get args for calling unshare for a pid. If unshare_pid is False, return empty list. 
If unshare_pid is True, check if it is usable. If not, raise exception. if unshare_pid is None, then unshare if * euid is 0 * 'unshare' with '--fork' and '--pid' is available. * target != / """ if unshare_pid is not None and not unshare_pid: # given a false-ish other than None means no. return [] if euid is None: euid = os.geteuid() tpath = target_path(target) unshare_pid_in = unshare_pid if unshare_pid is None: unshare_pid = False if tpath != "/" and euid == 0: if _has_unshare_pid(): unshare_pid = True if not unshare_pid: return [] # either unshare was passed in as True, or None and turned to True. if euid != 0: raise RuntimeError( "given unshare_pid=%s but euid (%s) != 0." % (unshare_pid_in, euid)) if not _has_unshare_pid(): raise RuntimeError( "given unshare_pid=%s but no unshare command." % unshare_pid_in) return ['unshare', '--fork', '--pid', '--'] def subp(*args, **kwargs): """Run a subprocess. :param args: command to run in a list. [cmd, arg1, arg2...] :param data: input to the command, made available on its stdin. :param rcs: a list of allowed return codes. If subprocess exits with a value not in this list, a ProcessExecutionError will be raised. By default, data is returned as a string. See 'decode' parameter. :param env: a dictionary for the command's environment. :param capture: boolean indicating if output should be captured. If True, then stderr and stdout will be returned. If False, they will not be redirected. :param combine_capture: boolean indicating if stderr should be redirected to stdout. When True, interleaved stderr and stdout will be returned as the first element of a tuple. :param log_captured: boolean indicating if output should be logged on capture. If True, then stderr and stdout will be logged at DEBUG level. If False, they will not be logged. :param shell: boolean indicating if this should be run with a shell. :param logstring: the command will be logged to DEBUG. If it contains info that should not be logged, then logstring will be logged instead. :param decode: if False, no decoding will be done and returned stdout and stderr will be bytes. Other allowed values are 'strict', 'ignore', and 'replace'. These values are passed through to bytes().decode() as the 'errors' parameter. There is no support for decoding to other than utf-8. :param retries: a list of times to sleep in between retries. After each failure subp will sleep for N seconds and then try again. A value of [1, 3] means to run, sleep 1, run, sleep 3, run and then return exit code. :param target: run the command as 'chroot target ' :param unshare_pid: unshare the pid namespace. default value (None) is to unshare pid namespace if possible and target != / :return if not capturing, return is (None, None) if capturing, stdout and stderr are returned. if decode: python2 unicode or python3 string if not decode: python2 string or python3 bytes """ retries = [] if "retries" in kwargs: retries = kwargs.pop("retries") if not retries: # allow retries=None retries = [] if args: cmd = args[0] if 'args' in kwargs: cmd = kwargs['args'] # Retry with waits between the retried command. for num, wait in enumerate(retries): try: return _subp(*args, **kwargs) except ProcessExecutionError as e: LOG.debug("try %s: command %s failed, rc: %s", num, cmd, e.exit_code) time.sleep(wait) # Final try without needing to wait or catch the error. If this # errors here then it will be raised to the caller. 
return _subp(*args, **kwargs) def wait_for_removal(path, retries=[1, 3, 5, 7]): if not path: raise ValueError('wait_for_removal: missing path parameter') # Retry with waits between checking for existence LOG.debug('waiting for %s to be removed', path) for num, wait in enumerate(retries): if not os.path.exists(path): LOG.debug('%s has been removed', path) return LOG.debug('sleeping %s', wait) time.sleep(wait) # final check if not os.path.exists(path): LOG.debug('%s has been removed', path) return raise OSError('Timeout exceeded for removal of %s', path) def load_command_environment(env=os.environ, strict=False): mapping = {'scratch': 'WORKING_DIR', 'fstab': 'OUTPUT_FSTAB', 'interfaces': 'OUTPUT_INTERFACES', 'config': 'CONFIG', 'target': 'TARGET_MOUNT_POINT', 'network_state': 'OUTPUT_NETWORK_STATE', 'network_config': 'OUTPUT_NETWORK_CONFIG', 'report_stack_prefix': 'CURTIN_REPORTSTACK'} if strict: missing = [k for k in mapping.values() if k not in env] if len(missing): raise KeyError("missing environment vars: %s" % missing) return {k: env.get(v) for k, v in mapping.items()} def is_kmod_loaded(module): """Test if kernel module 'module' is current loaded by checking sysfs""" if not module: raise ValueError('is_kmod_loaded: invalid module: "%s"', module) return os.path.isdir('/sys/module/%s' % module) def load_kernel_module(module, check_loaded=True): """Install kernel module via modprobe. Optionally check if it's already loaded . """ if not module: raise ValueError('load_kernel_module: invalid module: "%s"', module) if check_loaded: if is_kmod_loaded(module): LOG.debug('Skipping kernel module load, %s already loaded', module) return LOG.debug('Loading kernel module %s via modprobe', module) subp(['modprobe', '--use-blacklist', module]) class BadUsage(Exception): pass class ProcessExecutionError(IOError): MESSAGE_TMPL = ('%(description)s\n' 'Command: %(cmd)s\n' 'Exit code: %(exit_code)s\n' 'Reason: %(reason)s\n' 'Stdout: %(stdout)s\n' 'Stderr: %(stderr)s') stdout_indent_level = 8 def __init__(self, stdout=None, stderr=None, exit_code=None, cmd=None, description=None, reason=None): if not cmd: self.cmd = '-' else: self.cmd = cmd if not description: self.description = 'Unexpected error while running command.' 
else: self.description = description if not isinstance(exit_code, int): self.exit_code = '-' else: self.exit_code = exit_code if not stderr: self.stderr = "''" else: self.stderr = self._indent_text(stderr) if not stdout: self.stdout = "''" else: self.stdout = self._indent_text(stdout) if reason: self.reason = reason else: self.reason = '-' message = self.MESSAGE_TMPL % { 'description': self.description, 'cmd': self.cmd, 'exit_code': self.exit_code, 'stdout': self.stdout, 'stderr': self.stderr, 'reason': self.reason, } IOError.__init__(self, message) def _indent_text(self, text): if type(text) == bytes: text = text.decode() return text.replace('\n', '\n' + ' ' * self.stdout_indent_level) class LogTimer(object): def __init__(self, logfunc, msg): self.logfunc = logfunc self.msg = msg def __enter__(self): self.start = time.time() return self def __exit__(self, etype, value, trace): self.logfunc("%s took %0.3f seconds" % (self.msg, time.time() - self.start)) def is_mounted(target, src=None, opts=None): # return whether or not src is mounted on target mounts = "" with open("/proc/mounts", "r") as fp: mounts = fp.read() for line in mounts.splitlines(): if line.split()[1] == os.path.abspath(target): return True return False def list_device_mounts(device): # return mount entry if device is in /proc/mounts mounts = "" with open("/proc/mounts", "r") as fp: mounts = fp.read() dev_mounts = [] for line in mounts.splitlines(): if line.split()[0] == device: dev_mounts.append(line) return dev_mounts def fuser_mount(path): """ Execute fuser to determine open file handles from mountpoint path Use verbose mode and then combine stdout, stderr from fuser into a dictionary: {pid: "fuser-details"} path may also be a kernel devpath (e.g. /dev/sda) """ fuser_output = {} try: stdout, stderr = subp(['fuser', '--verbose', '--mount', path], capture=True) except ProcessExecutionError as e: LOG.debug('fuser returned non-zero: %s', e.stderr) return None pidlist = stdout.split() """ fuser writes a header in verbose mode, we'll ignore that but the order if the input is note that is not present in stderr, it's only in stdout. Also only the entry with pid=kernel entry will contain the mountpoint # Combined stdout and stderr look like: # USER PID ACCESS COMMAND # /home: root kernel mount / # root 1 .rce. systemd # # This would return # { 'kernel': ['/home', 'root', 'mount', '/'], '1': ['root', '1', '.rce.', 'systemd'], } """ # Note that fuser only writes PIDS to stdout. Each PID value is # 'kernel' or an integer and indicates a process which has an open # file handle against the path specified path. All other output # is sent to stderr. This code below will merge the two as needed. for (pid, status) in zip(pidlist, stderr.splitlines()[1:]): fuser_output[pid] = status.split() return fuser_output @contextmanager def chdir(dirname): curdir = os.getcwd() try: os.chdir(dirname) yield dirname finally: os.chdir(curdir) def do_mount(src, target, opts=None): # mount src at target with opts and return True # if already mounted, return False if opts is None: opts = [] if isinstance(opts, str): opts = [opts] if is_mounted(target, src, opts): return False ensure_dir(target) cmd = ['mount'] + opts + [src, target] subp(cmd) return True def do_umount(mountpoint, recursive=False): # unmount mountpoint. if recursive, unmount all mounts under it. # return boolean indicating if mountpoint was previously mounted. 
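    # Illustrative use (hypothetical mountpoint): do_umount('/tmp/tgt',
    # recursive=True) unmounts everything below /tmp/tgt before /tmp/tgt
    # itself, and returns True only if /tmp/tgt itself was mounted.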
mp = os.path.abspath(mountpoint) ret = False for line in reversed(load_file("/proc/mounts", decode=True).splitlines()): curmp = line.split()[1] if curmp == mp or (recursive and curmp.startswith(mp + os.path.sep)): subp(['umount', curmp]) if curmp == mp: ret = True return ret def ensure_dir(path, mode=None): try: os.makedirs(path) except OSError as e: if e.errno != errno.EEXIST: raise if mode is not None: os.chmod(path, mode) def write_file(filename, content, mode=0o644, omode="w"): """ write 'content' to file at 'filename' using python open mode 'omode'. if mode is not set, then chmod file to mode. mode is 644 by default """ ensure_dir(os.path.dirname(filename)) with open(filename, omode) as fp: fp.write(content) if mode: os.chmod(filename, mode) def load_file(path, read_len=None, offset=0, decode=True): with open(path, "rb") as fp: if offset: fp.seek(offset) contents = fp.read(read_len) if read_len else fp.read() if decode: return decode_binary(contents) else: return contents def decode_binary(blob, encoding='utf-8', errors='replace'): # Converts a binary type into a text type using given encoding. return blob.decode(encoding, errors=errors) def file_size(path): """get the size of a file""" with open(path, 'rb') as fp: fp.seek(0, 2) return fp.tell() def del_file(path): try: os.unlink(path) LOG.debug("del_file: removed %s", path) except OSError as e: LOG.exception("del_file: %s did not exist.", path) if e.errno != errno.ENOENT: raise e def disable_daemons_in_root(target): contents = "\n".join( ['#!/bin/sh', '# see invoke-rc.d for exit codes. 101 is "do not run"', 'while true; do', ' case "$1" in', ' -*) shift;;', ' makedev|x11-common) exit 0;;', ' *) exit 101;;', ' esac', 'done', '']) fpath = target_path(target, "/usr/sbin/policy-rc.d") if os.path.isfile(fpath): return False write_file(fpath, mode=0o755, content=contents) return True def undisable_daemons_in_root(target): try: os.unlink(target_path(target, "/usr/sbin/policy-rc.d")) except OSError as e: if e.errno != errno.ENOENT: raise return False return True class ChrootableTarget(object): def __init__(self, target, allow_daemons=False, sys_resolvconf=True): if target is None: target = "/" self.target = target_path(target) self.mounts = ["/dev", "/proc", "/sys"] self.umounts = [] self.disabled_daemons = False self.allow_daemons = allow_daemons self.sys_resolvconf = sys_resolvconf self.rconf_d = None def __enter__(self): for p in self.mounts: tpath = target_path(self.target, p) if do_mount(p, tpath, opts='--bind'): self.umounts.append(tpath) if not self.allow_daemons: self.disabled_daemons = disable_daemons_in_root(self.target) rconf = target_path(self.target, "/etc/resolv.conf") target_etc = os.path.dirname(rconf) if self.target != "/" and os.path.isdir(target_etc): # never muck with resolv.conf on / rconf = os.path.join(target_etc, "resolv.conf") rtd = None try: rtd = tempfile.mkdtemp(dir=target_etc) tmp = os.path.join(rtd, "resolv.conf") os.rename(rconf, tmp) self.rconf_d = rtd shutil.copy("/etc/resolv.conf", rconf) except Exception: if rtd: shutil.rmtree(rtd) self.rconf_d = None raise return self def __exit__(self, etype, value, trace): if self.disabled_daemons: undisable_daemons_in_root(self.target) # if /dev is to be unmounted, udevadm settle (LP: #1462139) if target_path(self.target, "/dev") in self.umounts: subp(['udevadm', 'settle']) for p in reversed(self.umounts): do_umount(p) rconf = target_path(self.target, "/etc/resolv.conf") if self.sys_resolvconf and self.rconf_d: os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf) 
shutil.rmtree(self.rconf_d) def subp(self, *args, **kwargs): kwargs['target'] = self.target return subp(*args, **kwargs) def path(self, path): return target_path(self.target, path) def is_exe(fpath): # Return path of program for execution if found in path return os.path.isfile(fpath) and os.access(fpath, os.X_OK) def which(program, search=None, target=None): target = target_path(target) if os.path.sep in program: # if program had a '/' in it, then do not search PATH # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls # so effectively we set cwd to / (or target) if is_exe(target_path(target, program)): return program if search is None: paths = [p.strip('"') for p in os.environ.get("PATH", "").split(os.pathsep)] if target == "/": search = paths else: search = [p for p in paths if p.startswith("/")] # normalize path input search = [os.path.abspath(p) for p in search] for path in search: ppath = os.path.sep.join((path, program)) if is_exe(target_path(target, ppath)): return ppath return None def _installed_file_path(path, check_file=None): # check the install root for the file 'path'. # if 'check_file', then path is a directory that contains file. # return absolute path or None. inst_pre = "/" if os.environ.get('SNAP'): inst_pre = os.path.abspath(os.environ['SNAP']) inst_path = os.path.join(inst_pre, path) if check_file: check_path = os.path.sep.join((inst_path, check_file)) else: check_path = inst_path if os.path.isfile(check_path): return os.path.abspath(inst_path) return None def get_paths(curtin_exe=None, lib=None, helpers=None): # return a dictionary with paths for 'curtin_exe', 'helpers' and 'lib' # that represent where 'curtin' executable lives, where the 'curtin' module # directory is (containing __init__.py) and where the 'helpers' directory. 
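    # An illustrative result for an installed curtin (hypothetical paths):
    #   {'curtin_exe': '/usr/bin/curtin',
    #    'lib': '/usr/lib/python3/dist-packages/curtin',
    #    'helpers': '/usr/lib/curtin/helpers'}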
mydir = os.path.realpath(os.path.dirname(__file__)) tld = os.path.realpath(mydir + os.path.sep + "..") if curtin_exe is None: if os.path.isfile(os.path.join(tld, "bin", "curtin")): curtin_exe = os.path.join(tld, "bin", "curtin") if (curtin_exe is None and (os.path.basename(sys.argv[0]).startswith("curtin") and os.path.isfile(sys.argv[0]))): curtin_exe = os.path.realpath(sys.argv[0]) if curtin_exe is None: found = which('curtin') if found: curtin_exe = found if curtin_exe is None: curtin_exe = _installed_file_path(_INSTALLED_MAIN) # "common" is a file in helpers cfile = "common" if (helpers is None and os.path.isfile(os.path.join(tld, "helpers", cfile))): helpers = os.path.join(tld, "helpers") if helpers is None: helpers = _installed_file_path(_INSTALLED_HELPERS_PATH, cfile) return({'curtin_exe': curtin_exe, 'lib': mydir, 'helpers': helpers}) def get_architecture(target=None): out, _ = subp(['dpkg', '--print-architecture'], capture=True, target=target) return out.strip() def has_pkg_available(pkg, target=None): out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target) for item in out.splitlines(): if pkg == item.strip(): return True return False def get_installed_packages(target=None): (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True) pkgs_inst = set() for line in out.splitlines(): try: (state, pkg, other) = line.split(None, 2) except ValueError: continue if state.startswith("hi") or state.startswith("ii"): pkgs_inst.add(re.sub(":.*", "", pkg)) return pkgs_inst def has_pkg_installed(pkg, target=None): try: out, _ = subp(['dpkg-query', '--show', '--showformat', '${db:Status-Abbrev}', pkg], capture=True, target=target) return out.rstrip() == "ii" except ProcessExecutionError: return False def parse_dpkg_version(raw, name=None, semx=None): """Parse a dpkg version string into various parts and calcualate a numerical value of the version for use in comparing package versions returns a dictionary with the results """ if semx is None: semx = (10000, 100, 1) upstream = raw.split('-')[0] toks = upstream.split(".", 2) if len(toks) == 3: major, minor, micro = toks elif len(toks) == 2: major, minor, micro = (toks[0], toks[1], 0) elif len(toks) == 1: major, minor, micro = (toks[0], 0, 0) version = { 'major': major, 'minor': minor, 'micro': micro, 'raw': raw, 'upstream': upstream, } if name: version['name'] = name if semx: try: version['semantic_version'] = int( int(major) * semx[0] + int(minor) * semx[1] + int(micro) * semx[2]) except (ValueError, IndexError): version['semantic_version'] = None return version def get_package_version(pkg, target=None, semx=None): """Use dpkg-query to extract package pkg's version string and parse the version string into a dictionary """ try: out, _ = subp(['dpkg-query', '--show', '--showformat', '${Version}', pkg], capture=True, target=target) raw = out.rstrip() return parse_dpkg_version(raw, name=pkg, semx=semx) except ProcessExecutionError: return None def find_newer(src, files): mtime = os.stat(src).st_mtime return [f for f in files if os.path.exists(f) and os.stat(f).st_mtime > mtime] def set_unexecutable(fname, strict=False): """set fname so it is not executable. if strict, raise an exception if the file does not exist. return the current mode, or None if no change is needed. 
""" if not os.path.exists(fname): if strict: raise ValueError('%s: file does not exist' % fname) return None cur = stat.S_IMODE(os.lstat(fname).st_mode) target = cur & (~stat.S_IEXEC & ~stat.S_IXGRP & ~stat.S_IXOTH) if cur == target: return None os.chmod(fname, target) return cur def apt_update(target=None, env=None, force=False, comment=None, retries=None): marker = "tmp/curtin.aptupdate" if target is None: target = "/" if env is None: env = os.environ.copy() if retries is None: # by default run apt-update up to 3 times to allow # for transient failures retries = (1, 2, 3) if comment is None: comment = "no comment provided" if comment.endswith("\n"): comment = comment[:-1] marker = target_path(target, marker) # if marker exists, check if there are files that would make it obsolete listfiles = [target_path(target, "/etc/apt/sources.list")] listfiles += glob.glob( target_path(target, "etc/apt/sources.list.d/*.list")) if os.path.exists(marker) and not force: if len(find_newer(marker, listfiles)) == 0: return restore_perms = [] abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp")) try: abs_slist = abs_tmpdir + "/sources.list" abs_slistd = abs_tmpdir + "/sources.list.d" ch_tmpdir = "/tmp/" + os.path.basename(abs_tmpdir) ch_slist = ch_tmpdir + "/sources.list" ch_slistd = ch_tmpdir + "/sources.list.d" # this file gets executed on apt-get update sometimes. (LP: #1527710) motd_update = target_path( target, "/usr/lib/update-notifier/update-motd-updates-available") pmode = set_unexecutable(motd_update) if pmode is not None: restore_perms.append((motd_update, pmode),) # create tmpdir/sources.list with all lines other than deb-src # avoid apt complaining by using existing and empty dir for sourceparts os.mkdir(abs_slistd) with open(abs_slist, "w") as sfp: for sfile in listfiles: with open(sfile, "r") as fp: contents = fp.read() for line in contents.splitlines(): line = line.lstrip() if not line.startswith("deb-src"): sfp.write(line + "\n") update_cmd = [ 'apt-get', '--quiet', '--option=Acquire::Languages=none', '--option=Dir::Etc::sourcelist=%s' % ch_slist, '--option=Dir::Etc::sourceparts=%s' % ch_slistd, 'update'] # do not using 'run_apt_command' so we can use 'retries' to subp with ChrootableTarget(target, allow_daemons=True) as inchroot: inchroot.subp(update_cmd, env=env, retries=retries) finally: for fname, perms in restore_perms: os.chmod(fname, perms) if abs_tmpdir: shutil.rmtree(abs_tmpdir) with open(marker, "w") as fp: fp.write(comment + "\n") def run_apt_command(mode, args=None, aptopts=None, env=None, target=None, execute=True, allow_daemons=False): opts = ['--quiet', '--assume-yes', '--option=Dpkg::options::=--force-unsafe-io', '--option=Dpkg::Options::=--force-confold'] if args is None: args = [] if aptopts is None: aptopts = [] if env is None: env = os.environ.copy() env['DEBIAN_FRONTEND'] = 'noninteractive' if which('eatmydata', target=target): emd = ['eatmydata'] else: emd = [] cmd = emd + ['apt-get'] + opts + aptopts + [mode] + args if not execute: return env, cmd apt_update(target, env=env, comment=' '.join(cmd)) with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot: return inchroot.subp(cmd, env=env) def system_upgrade(aptopts=None, target=None, env=None, allow_daemons=False): LOG.debug("Upgrading system in %s", target) for mode in ('dist-upgrade', 'autoremove'): ret = run_apt_command( mode, aptopts=aptopts, target=target, env=env, allow_daemons=allow_daemons) return ret def install_packages(pkglist, aptopts=None, target=None, env=None, allow_daemons=False): 
if isinstance(pkglist, str): pkglist = [pkglist] return run_apt_command( 'install', args=pkglist, aptopts=aptopts, target=target, env=env, allow_daemons=allow_daemons) def is_uefi_bootable(): return os.path.exists('/sys/firmware/efi') is True def get_efibootmgr(target): """Return mapping of EFI information. Calls `efibootmgr` inside the `target`. Example output: { 'current': '0000', 'timeout': '1 seconds', 'order': ['0000', '0001'], 'entries': { '0000': { 'name': 'ubuntu', 'path': ( 'HD(1,GPT,0,0x8,0x1)/File(\\EFI\\ubuntu\\shimx64.efi)'), }, '0001': { 'name': 'UEFI:Network Device', 'path': 'BBS(131,,0x0)', } } } """ efikey_to_dict_key = { 'BootCurrent': 'current', 'Timeout': 'timeout', 'BootOrder': 'order', } with ChrootableTarget(target) as in_chroot: stdout, _ = in_chroot.subp(['efibootmgr', '-v'], capture=True) output = {} for line in stdout.splitlines(): split = line.split(':') if len(split) == 2: key = split[0].strip() output_key = efikey_to_dict_key.get(key, None) if output_key: output[output_key] = split[1].strip() if output_key == 'order': output[output_key] = output[output_key].split(',') output['entries'] = { entry: { 'name': name.strip(), 'path': path.strip(), } for entry, name, path in re.findall( r"^Boot(?P[0-9a-fA-F]{4})\*?\s(?P.+)\t" r"(?P.*)$", stdout, re.MULTILINE) } return output def run_hook_if_exists(target, hook): """ Look for "hook" in "target" and run it """ target_hook = target_path(target, '/curtin/' + hook) if os.path.isfile(target_hook): LOG.debug("running %s" % target_hook) subp([target_hook]) return True return False def sanitize_source(source): """ Check the install source for type information If no type information is present or it is an invalid type, we default to the standard tgz format """ if type(source) is dict: # already sanitized? return source supported = ['tgz', 'dd-tgz', 'dd-tbz', 'dd-txz', 'dd-tar', 'dd-bz2', 'dd-gz', 'dd-xz', 'dd-raw', 'fsimage'] deftype = 'tgz' for i in supported: prefix = i + ":" if source.startswith(prefix): return {'type': i, 'uri': source[len(prefix):]} # translate squashfs: to fsimage type. 
if source.startswith("squashfs:"): return {'type': 'fsimage', 'uri': source[len("squashfs:")]} if source.endswith("squashfs") or source.endswith("squash"): return {'type': 'fsimage', 'uri': source} LOG.debug("unknown type for url '%s', assuming type '%s'", source, deftype) # default to tgz for unknown types return {'type': deftype, 'uri': source} def get_dd_images(sources): """ return all disk images in sources list """ src = [] if type(sources) is not dict: return src for i in sources: if type(sources[i]) is not dict: continue if sources[i]['type'].startswith('dd-'): src.append(sources[i]) return src def get_meminfo(meminfo="/proc/meminfo", raw=False): mpliers = {'kB': 2**10, 'mB': 2 ** 20, 'B': 1, 'gB': 2 ** 30} kmap = {'MemTotal:': 'total', 'MemFree:': 'free', 'MemAvailable:': 'available'} ret = {} with open(meminfo, "r") as fp: for line in fp: try: key, value, unit = line.split() except ValueError: key, value = line.split() unit = 'B' if raw: ret[key] = int(value) * mpliers[unit] elif key in kmap: ret[kmap[key]] = int(value) * mpliers[unit] return ret def get_fs_use_info(path): # return some filesystem usage info as tuple of (size_in_bytes, free_bytes) statvfs = os.statvfs(path) return (statvfs.f_frsize * statvfs.f_blocks, statvfs.f_frsize * statvfs.f_bfree) def human2bytes(size): # convert human 'size' to integer size_in = size if isinstance(size, int): return size elif isinstance(size, float): if int(size) != size: raise ValueError("'%s': resulted in non-integer (%s)" % (size_in, int(size))) return size elif not isinstance(size, str): raise TypeError("cannot convert type %s ('%s')." % (type(size), size)) if size.endswith("B"): size = size[:-1] mpliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40} num = size mplier = 'B' for m in mpliers: if size.endswith(m): mplier = m num = size[0:-len(m)] try: num = float(num) except ValueError: raise ValueError("'%s' is not valid input." % size_in) if num < 0: raise ValueError("'%s': cannot be negative" % size_in) val = num * mpliers[mplier] if int(val) != val: raise ValueError("'%s': resulted in non-integer (%s)" % (size_in, val)) return val def bytes2human(size): """convert size in bytes to human readable""" if not isinstance(size, numeric_types): raise ValueError('size must be a numeric value, not %s', type(size)) isize = int(size) if isize != size: raise ValueError('size "%s" is not a whole number.' % size) if isize < 0: raise ValueError('size "%d" < 0.' 
% isize) mpliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40} unit_order = sorted(mpliers, key=lambda x: -1 * mpliers[x]) unit = next((u for u in unit_order if (isize / mpliers[u]) >= 1), 'B') return str(int(isize / mpliers[unit])) + unit def import_module(import_str): """Import a module.""" __import__(import_str) return sys.modules[import_str] def try_import_module(import_str, default=None): """Try to import a module.""" try: return import_module(import_str) except ImportError: return default def is_file_not_found_exc(exc): return (isinstance(exc, (IOError, OSError)) and hasattr(exc, 'errno') and exc.errno in (errno.ENOENT, errno.EIO, errno.ENXIO)) def _lsb_release(target=None): fmap = {'Codename': 'codename', 'Description': 'description', 'Distributor ID': 'id', 'Release': 'release'} data = {} try: out, _ = subp(['lsb_release', '--all'], capture=True, target=target) for line in out.splitlines(): fname, _, val = line.partition(":") if fname in fmap: data[fmap[fname]] = val.strip() missing = [k for k in fmap.values() if k not in data] if len(missing): LOG.warn("Missing fields in lsb_release --all output: %s", ','.join(missing)) except ProcessExecutionError as err: LOG.warn("Unable to get lsb_release --all: %s", err) data = {v: "UNAVAILABLE" for v in fmap.values()} return data def lsb_release(target=None): if target_path(target) != "/": # do not use or update cache if target is provided return _lsb_release(target) global _LSB_RELEASE if not _LSB_RELEASE: data = _lsb_release() _LSB_RELEASE.update(data) return _LSB_RELEASE class MergedCmdAppend(argparse.Action): """This appends to a list in order of appearence both the option string and the value""" def __call__(self, parser, namespace, values, option_string=None): if getattr(namespace, self.dest, None) is None: setattr(namespace, self.dest, []) getattr(namespace, self.dest).append((option_string, values,)) def json_dumps(data): return json.dumps(data, indent=1, sort_keys=True, separators=(',', ': ')) def get_platform_arch(): platform2arch = { 'i586': 'i386', 'i686': 'i386', 'x86_64': 'amd64', 'ppc64le': 'ppc64el', 'aarch64': 'arm64', } return platform2arch.get(platform.machine(), platform.machine()) def basic_template_render(content, params): """This does simple replacement of bash variable like templates. It identifies patterns like ${a} or $a and can also identify patterns like ${a.b} or $a.b which will look for a key 'b' in the dictionary rooted by key 'a'. """ def replacer(match): """ replacer replacer used in regex match to replace content """ # Only 1 of the 2 groups will actually have a valid entry. 
name = match.group(1) if name is None: name = match.group(2) if name is None: raise RuntimeError("Match encountered but no valid group present") path = collections.deque(name.split(".")) selected_params = params while len(path) > 1: key = path.popleft() if not isinstance(selected_params, dict): raise TypeError("Can not traverse into" " non-dictionary '%s' of type %s while" " looking for subkey '%s'" % (selected_params, selected_params.__class__.__name__, key)) selected_params = selected_params[key] key = path.popleft() if not isinstance(selected_params, dict): raise TypeError("Can not extract key '%s' from non-dictionary" " '%s' of type %s" % (key, selected_params, selected_params.__class__.__name__)) return str(selected_params[key]) return BASIC_MATCHER.sub(replacer, content) def render_string(content, params): """ render_string render a string following replacement rules as defined in basic_template_render returning the string """ if not params: params = {} return basic_template_render(content, params) def is_resolvable(name): """determine if a url is resolvable, return a boolean This also attempts to be resilent against dns redirection. Note, that normal nsswitch resolution is used here. So in order to avoid any utilization of 'search' entries in /etc/resolv.conf we have to append '.'. The top level 'invalid' domain is invalid per RFC. And example.com should also not exist. The random entry will be resolved inside the search list. """ global _DNS_REDIRECT_IP if _DNS_REDIRECT_IP is None: badips = set() badnames = ("does-not-exist.example.com.", "example.invalid.") badresults = {} for iname in badnames: try: result = socket.getaddrinfo(iname, None, 0, 0, socket.SOCK_STREAM, socket.AI_CANONNAME) badresults[iname] = [] for (_, _, _, cname, sockaddr) in result: badresults[iname].append("%s: %s" % (cname, sockaddr[0])) badips.add(sockaddr[0]) except (socket.gaierror, socket.error): pass _DNS_REDIRECT_IP = badips if badresults: LOG.debug("detected dns redirection: %s", badresults) try: result = socket.getaddrinfo(name, None) # check first result's sockaddr field addr = result[0][4][0] if addr in _DNS_REDIRECT_IP: LOG.debug("dns %s in _DNS_REDIRECT_IP", name) return False LOG.debug("dns %s resolved to '%s'", name, result) return True except (socket.gaierror, socket.error): LOG.debug("dns %s failed to resolve", name) return False def is_valid_ipv6_address(addr): try: socket.inet_pton(socket.AF_INET6, addr) except socket.error: return False return True def is_resolvable_url(url): """determine if this url is resolvable (existing or ip).""" return is_resolvable(urlparse(url).hostname) def target_path(target, path=None): # return 'path' inside target, accepting target as None if target in (None, ""): target = "/" elif not isinstance(target, string_types): raise ValueError("Unexpected input for target: %s" % target) else: target = os.path.abspath(target) # abspath("//") returns "//" specifically for 2 slashes. if target.startswith("//"): target = target[1:] if not path: return target if not isinstance(path, string_types): raise ValueError("Unexpected input for path: %s" % path) # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /. while len(path) and path[0] == "/": path = path[1:] return os.path.join(target, path) class RunInChroot(ChrootableTarget): """Backwards compatibility for RunInChroot (LP: #1617375). 
It needs to work like: with RunInChroot("/target") as in_chroot: in_chroot(["your", "chrooted", "command"])""" __call__ = ChrootableTarget.subp def shlex_split(str_in): # shlex.split takes a string # but in python2 if input here is a unicode, encode it to a string. # http://stackoverflow.com/questions/2365411/ # python-convert-unicode-to-ascii-without-errors if sys.version_info.major == 2: try: if isinstance(str_in, unicode): str_in = str_in.encode('utf-8') except NameError: pass return shlex.split(str_in) else: return shlex.split(str_in) def load_shell_content(content, add_empty=False, empty_val=None): """Given shell like syntax (key=value\nkey2=value2\n) in content return the data in dictionary form. If 'add_empty' is True then add entries in to the returned dictionary for 'VAR=' variables. Set their value to empty_val.""" data = {} for line in shlex_split(content): key, value = line.split("=", 1) if not value: value = empty_val if add_empty or value: data[key] = value return data def uses_systemd(): """ Check if current enviroment uses systemd by testing if /run/systemd/system is a directory; only present if systemd is available on running system. """ global _USES_SYSTEMD if _USES_SYSTEMD is None: _USES_SYSTEMD = os.path.isdir('/run/systemd/system') return _USES_SYSTEMD # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/curtin/version.py000066400000000000000000000017411326565350400177770ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from curtin import __version__ as old_version import os import subprocess _PACKAGED_VERSION = '@@PACKAGED_VERSION@@' _PACKED_VERSION = '@@PACKED_VERSION@@' def version_string(): """ Extract a version string from curtin source or version file""" if not _PACKAGED_VERSION.startswith('@@'): return _PACKAGED_VERSION if not _PACKED_VERSION.startswith('@@'): return _PACKED_VERSION version = old_version gitdir = os.path.abspath(os.path.join(__file__, '..', '..', '.git')) if os.path.exists(gitdir): try: out = subprocess.check_output( ['git', 'describe', '--long', '--abbrev=8', "--match=[0-9][0-9]*"], cwd=os.path.dirname(gitdir)) version = out.decode('utf-8').strip() except subprocess.CalledProcessError: pass return version # vi: ts=4 expandtab syntax=python curtin-18.1-5-g572ae5d6/debian/000077500000000000000000000000001326565350400156535ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/debian/changelog.trunk000066400000000000000000000002221326565350400206630ustar00rootroot00000000000000curtin (UPSTREAM_VER-0ubuntu1) UNRELEASED; urgency=low * Initial release -- Scott Moser Mon, 29 Jul 2013 16:12:09 -0400 curtin-18.1-5-g572ae5d6/debian/compat000066400000000000000000000000021326565350400170510ustar00rootroot000000000000007 curtin-18.1-5-g572ae5d6/debian/control000066400000000000000000000041671326565350400172660ustar00rootroot00000000000000Source: curtin Section: admin Priority: extra Standards-Version: 3.9.6 Maintainer: Ubuntu Developers Build-Depends: debhelper (>= 7), dh-python, pep8, pyflakes, python-all, python-coverage, python-mock, python-nose, python-oauthlib, python-setuptools, python-yaml, python3, python3-coverage, python3-mock, python3-nose, python3-oauthlib, python3-pyflakes | pyflakes (<< 1.1.0-2), python3-setuptools, python3-yaml Homepage: http://launchpad.net/curtin X-Python3-Version: >= 3.2 Package: curtin Architecture: all Priority: extra Depends: bcache-tools, btrfs-tools, dosfstools, file, gdisk, lvm2, mdadm, parted, python3-curtin (= ${binary:Version}), udev, xfsprogs, 
${misc:Depends} Description: Library and tools for the curtin installer This package provides the curtin installer. . Curtin is an installer that is blunt, brief, snappish, snippety and unceremonious. Package: curtin-common Architecture: all Priority: extra Depends: ${misc:Depends} Description: Library and tools for curtin installer This package contains utilities for the curtin installer. Package: python-curtin Section: python Architecture: all Priority: extra Depends: curtin-common (= ${binary:Version}), python-oauthlib, python-yaml, wget, ${misc:Depends}, ${python:Depends} Description: Library and tools for curtin installer This package provides python library for use by curtin. Package: python3-curtin Section: python Architecture: all Priority: extra Depends: curtin-common (= ${binary:Version}), python3-oauthlib, python3-yaml, wget, ${misc:Depends}, ${python3:Depends} Description: Library and tools for curtin installer This package provides python3 library for use by curtin. curtin-18.1-5-g572ae5d6/debian/copyright000066400000000000000000000011401326565350400176020ustar00rootroot00000000000000Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: curtin Upstream-Contact: Scott Moser Source: https://launchpad.net/curtin Files: * Copyright: 2013, Canonical Ltd. License: AGPL-3 GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007 . Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. . The complete text of the AGPL version 3 can be seen in http://www.gnu.org/licenses/agpl-3.0.html curtin-18.1-5-g572ae5d6/debian/curtin-common.install000066400000000000000000000000311326565350400220270ustar00rootroot00000000000000usr/lib/curtin/helpers/* curtin-18.1-5-g572ae5d6/debian/curtin.install000066400000000000000000000000121326565350400205400ustar00rootroot00000000000000usr/bin/* curtin-18.1-5-g572ae5d6/debian/python-curtin.install000066400000000000000000000000451326565350400220650ustar00rootroot00000000000000usr/lib/python2*/*-packages/curtin/* curtin-18.1-5-g572ae5d6/debian/python3-curtin.install000066400000000000000000000000451326565350400221500ustar00rootroot00000000000000usr/lib/python3*/*-packages/curtin/* curtin-18.1-5-g572ae5d6/debian/rules000077500000000000000000000014441326565350400167360ustar00rootroot00000000000000#!/usr/bin/make -f PYVERS := $(shell pyversions -r) PY3VERS := $(shell py3versions -r) DEB_VERSION := $(shell dpkg-parsechangelog --show-field=Version) UPSTREAM_VERSION := $(shell x="$(DEB_VERSION)"; echo "$${x%-*}") PKG_VERSION := $(shell x="$(DEB_VERSION)"; echo "$${x\#\#*-}") %: dh $@ --with=python2,python3 override_dh_auto_install: dh_auto_install set -ex; for python in $(PY3VERS) $(PYVERS); do \ $$python setup.py build --executable=/usr/bin/python && \ $$python setup.py install --root=$(CURDIR)/debian/tmp --install-layout=deb; \ done chmod 755 $(CURDIR)/debian/tmp/usr/lib/curtin/helpers/* find $(CURDIR)/debian/tmp for f in $$(find $(CURDIR)/debian/tmp/usr/lib -type f -name version.py); do [ -f "$$f" ] || continue; sed -i 's,@@PACKAGED_VERSION@@,$(DEB_VERSION),' "$$f"; done curtin-18.1-5-g572ae5d6/debian/source/000077500000000000000000000000001326565350400171535ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/debian/source/format000066400000000000000000000000141326565350400203610ustar00rootroot000000000000003.0 (quilt) 
curtin-18.1-5-g572ae5d6/doc/000077500000000000000000000000001326565350400151765ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/doc/Makefile000066400000000000000000000126741326565350400166500ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/curtin.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/curtin.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/curtin" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/curtin" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. 
The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." curtin-18.1-5-g572ae5d6/doc/conf.py000066400000000000000000000202601326565350400164750ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # curtin documentation build configuration file, created by # sphinx-quickstart on Thu May 30 16:03:34 2013. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # Fix path so we can import curtin.__version__ sys.path.insert(1, os.path.realpath(os.path.join( os.path.dirname(__file__), '..'))) import curtin # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [] # Add any paths that contain templates here, relative to this directory. templates_path = ['templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. 
master_doc = 'index' # General information about the project. project = u'curtin' copyright = u'2016, Scott Moser, Ryan Harper' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = curtin.__version__ # The full version, including alpha/beta/rc tags. release = version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'classic' # on_rtd is whether we are on readthedocs.org, this line of code grabbed from # docs.readthedocs.org on_rtd = os.environ.get('READTHEDOCS', None) == 'True' if not on_rtd: # only import and set the theme if we're building docs locally import sphinx_rtd_theme html_theme = 'sphinx_rtd_theme' html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] # otherwise, readthedocs.org uses their theme by default, so no need to specify # it # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. 
#html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'curtindoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'curtin.tex', u'curtin Documentation', u'Scott Moser', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'curtin', u'curtin Documentation', [u'Scott Moser'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'curtin', u'curtin Documentation', u'Scott Moser', 'curtin', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. 
#texinfo_show_urls = 'footnote' curtin-18.1-5-g572ae5d6/doc/devel/000077500000000000000000000000001326565350400162755ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/doc/devel/clear_holders_doc.txt000066400000000000000000000077741326565350400225040ustar00rootroot00000000000000The new version of clear_holders is based around a data structure called a holder_tree which represents the current storage hierarchy above a specified starting device. Each node in a holders tree contains data about the node and a key 'holders' which contains a list of all nodes that depend on it. The keys in a holders_tree node are: - device: the path to the device in /sys/class/block - dev_type: what type of storage layer the device is. possible values: - disk - lvm - crypt - raid - bcache - disk - name: the kname of the device (used for display) - holders: holders_trees for devices depending on the current device A holders tree can be generated for a device using the function clear_holders.gen_holders_tree. The device can be specified either as a path in /sys/class/block or as a path in /dev. The new implementation of block.clear_holders shuts down storage devices in a holders tree starting from the leaves of the tree and ascending towards the root. The old implementation of clear_holders ascended up each path of the tree separately, in a pattern similar to depth first search. The problem with the old implementation is that in some cases it would either attempt to remove one storage device while other devices still depended on it, or attempt to shut down the same storage device several times. In order to cope with this the old version of clear_holders had logic to handle expected failures and hope for the best moving forward. The new version of clear_holders is able to run without many anticipated failures. The logic to plan what order to shut down storage layers in is in clear_holders.plan_shutdown_holders_trees. This function accepts either a single holders tree or a list of holders trees. When run with a list of holders trees, it assumes that all of these trees start at basically the same layer in the overall storage hierarchy for the system (i.e. a list of holders trees starting from all of the target installation disks). This function returns a list of dictionaries, with each dictionary containing the keys: - device: the path to the device in /sys/class/block - dev_type: what type of storage layer the device is. possible values: - disk - lvm - crypt - raid - bcache - disk - level: the level of the device in the current storage hierarchy (starting from 0) The items in the list returned by clear_holders.plan_shutdown_holders_trees should be processed in order to make sure the holders trees are shut down fully. The main interface for clear_holders is the function clear_holders.clear_holders. If the system has just been booted it could be beneficial to run the function clear_holders.start_clear_holders_deps before using clear_holders.clear_holders. This ensures clear_holders will be able to properly shut down storage devices. The function clear_holders.clear_holders can be passed either a single device or a list of devices and will shut down all storage devices above the device(s). The devices can be specified either by path in /dev or by path in /sys/class/block.
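A minimal usage sketch of the interfaces described above (this example assumes the module is importable as curtin.block.clear_holders and uses a hypothetical /dev/sda as the starting disk; both details are illustrative assumptions, not part of this document):

    from curtin.block import clear_holders

    # prepare a freshly booted system (load modules, start services) so
    # that the storage layers above the disk can actually be shut down
    clear_holders.start_clear_holders_deps()

    # build the holders tree above the disk for inspection
    tree = clear_holders.gen_holders_tree('/dev/sda')

    # shut down everything layered on top of the disk; a list of several
    # disks may be passed instead of a single device
    clear_holders.clear_holders('/dev/sda')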
In order to test if a device or devices are free to be partitioned/formatted, the function clear_holders.assert_clear can be passed either a single device or a list of devices, with devices specified either by path in /dev or by path in /sys/class/block. If there are any storage devices that depend on one of the devices passed to clear_holders.assert_clear, then an OSError will be raised. If clear_holders.assert_clear does not raise any errors, then the devices specified should be ready for partitioning. It is possible to query further information about storage devices using clear_holders. Holders for an individual device can be queried using clear_holders.get_holders. Results are returned as a list of knames for the holding devices. A holders tree can be printed in a human readable format using clear_holders.format_holders_tree(). Example output: sda |-- sda1 |-- sda2 `-- sda5 `-- dm-0 |-- dm-1 `-- dm-2 `-- dm-3 curtin-18.1-5-g572ae5d6/doc/index.rst000066400000000000000000000014031326565350400170360ustar00rootroot00000000000000.. curtin documentation master file, created by sphinx-quickstart on Thu May 30 16:03:34 2013. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to curtin's documentation! ================================== This is 'curtin', the curt installer. It is blunt, brief, snappish, snippety and unceremonious. Its goal is to install an operating system as quickly as possible. Contents: .. toctree:: :maxdepth: 2 topics/overview topics/config topics/apt_source topics/networking topics/storage topics/curthooks topics/reporting topics/hacking topics/integration-testing Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` curtin-18.1-5-g572ae5d6/doc/topics/000077500000000000000000000000001326565350400164775ustar00rootroot00000000000000curtin-18.1-5-g572ae5d6/doc/topics/apt_source.rst000066400000000000000000000136651326565350400214060ustar00rootroot00000000000000========== APT Source ========== This part of curtin is meant to allow influencing the apt behaviour and configuration. By default - if no apt config is provided - it does nothing. That keeps behavior compatible on upgrades. The feature has an optional target argument which - by default - is used to modify the environment that curtin currently installs (@TARGET_MOUNT_POINT). Features ~~~~~~~~ * Add PGP keys to the APT trusted keyring - add via short keyid - add via long key fingerprint - specify a custom keyserver to pull from - add raw keys (which makes you independent of keyservers) * Influence global apt configuration - adding ppa's - replacing mirror, security mirror and release in sources.list - able to provide a fully custom template for sources.list - add arbitrary apt.conf settings - provide debconf configurations - disabling suites (=pockets) - per architecture mirror definition Configuration ~~~~~~~~~~~~~ The general configuration of the apt feature is under an element called ``apt``. This can have various "global" subelements as listed in the examples below. The file ``apt-source.yaml`` holds more examples. These global configurations are valid throughout all of the apt feature. So for example a global specification of a ``primary`` mirror will apply to all rendered sources entries. Then there is a section ``sources`` which can hold any number of source subelements itself. The key is the filename and will be prepended by /etc/apt/sources.list.d/ if it doesn't start with a ``/``.
There are certain cases - where no content is written into a source.list file and the filename will be ignored - yet it can still be used as an index for merging. The values inside the entries consist of the following optional entries: * ``source``: a sources.list entry (some variable replacements apply) * ``keyid``: providing a key to import via shortid or fingerprint * ``key``: providing a raw PGP key * ``keyserver``: specify an alternate keyserver to pull keys from that were specified by keyid The section "sources" is a dictionary (unlike most block/net configs which are lists). This format allows better merging between multiple input files than a list would :: sources: s1: {'key': 'key1', 'source': 'source1'} sources: s2: {'key': 'key2'} s1: {'keyserver': 'foo'} This would be merged into s1: {'key': 'key1', 'source': 'source1', keyserver: 'foo'} s2: {'key': 'key2'} Here is just one of the most common examples for this feature: install with curtin in an isolated environment (derived repository): For that we need to: * insert the PGP key of the local repository to be trusted - since you are locked down you can't pull from keyserver.ubuntu.com - if you have an internal keyserver you could pull from there, but let us assume you don't even have that; so you have to provide the raw key - in the example I'll use the key of the "Ubuntu CD Image Automatic Signing Key" which makes no sense as it is in the trusted keyring anyway, but it is a good example. (Also the key is shortened to stay readable) :: -----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v1 mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1 Twx6DKLF+3rF5nf1F3Q= =PBAe -----END PGP PUBLIC KEY BLOCK----- * replace the mirrors used to some mirrors available inside the isolated environment for apt to pull repository data from. - let's consider we have a local mirror at ``mymirror.local`` but otherwise following the usual paths - make an example with a partial mirror that doesn't mirror the backports suite, so backports have to be disabled That would be specified as :: apt: primary: - arches: [default] uri: http://mymirror.local/ubuntu/ disable_suites: [backports] sources: localrepokey: key: | # full key as block -----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v1 mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1 Twx6DKLF+3rF5nf1F3Q= =PBAe -----END PGP PUBLIC KEY BLOCK----- The file examples/apt-source.yaml holds various further examples that can be configured with this feature. Common snippets ~~~~~~~~~~~~~~~ This is a collection of additional ideas for how people can use the feature to customize their to-be-installed system. * enable proposed on installing :: apt: sources: proposed.list: source: | deb $MIRROR $RELEASE-proposed main restricted universe multiverse * Make debug symbols available :: apt: sources: ddebs.list: source: | deb http://ddebs.ubuntu.com $RELEASE main restricted universe multiverse deb http://ddebs.ubuntu.com $RELEASE-updates main restricted universe multiverse deb http://ddebs.ubuntu.com $RELEASE-security main restricted universe multiverse deb http://ddebs.ubuntu.com $RELEASE-proposed main restricted universe multiverse Timing ~~~~~~ The feature is implemented at the stage of curthooks_commands, which runs just after curtin has extracted the image to the target. Additionally it can be run as a standalone command "curtin -v --config apt-config".
This will pick up the target from the environment variable that is set by curtin, if you want to use it to a different target or outside of usual curtin handling you can add ``--target `` to it to overwrite the target path. This target should have at least a minimal system with apt, apt-add-repository and dpkg being installed for the functionality to work. Dependencies ~~~~~~~~~~~~ Cloud-init might need to resolve dependencies and install packages in the ephemeral environment to run curtin. Therefore it is recommended to not only provide an apt configuration to curtin for the target, but also one to the install environment via cloud-init. curtin-18.1-5-g572ae5d6/doc/topics/config.rst000066400000000000000000000373541326565350400205120ustar00rootroot00000000000000==================== Curtin Configuration ==================== Curtin exposes a number of configuration options for controlling Curtin behavior during installation. Configuration options --------------------- Curtin's top level config keys are as follows: - apt_mirrors (``apt_mirrors``) - apt_proxy (``apt_proxy``) - block-meta (``block``) - debconf_selections (``debconf_selections``) - disable_overlayroot (``disable_overlayroot``) - grub (``grub``) - http_proxy (``http_proxy``) - install (``install``) - kernel (``kernel``) - kexec (``kexec``) - multipath (``multipath``) - network (``network``) - pollinate (``pollinate``) - power_state (``power_state``) - proxy (``proxy``) - reporting (``reporting``) - restore_dist_interfaces: (``restore_dist_interfaces``) - sources (``sources``) - stages (``stages``) - storage (``storage``) - swap (``swap``) - system_upgrade (``system_upgrade``) - write_files (``write_files``) apt_mirrors ~~~~~~~~~~~ Configure APT mirrors for ``ubuntu_archive`` and ``ubuntu_security`` **ubuntu_archive**: ** **ubuntu_security**: ** If the target OS includes /etc/apt/sources.list, Curtin will replace the default values for each key set with the supplied mirror URL. **Example**:: apt_mirrors: ubuntu_archive: http://local.archive/ubuntu ubuntu_security: http://local.archive/ubuntu apt_proxy ~~~~~~~~~ Curtin will configure an APT HTTP proxy in the target OS **apt_proxy**: ** **Example**:: apt_proxy: http://squid.mirror:3267/ block-meta ~~~~~~~~~~ Configure how Curtin selects and configures disks on the target system without providing a custom configuration (mode=simple). **devices**: ** The ``devices`` parameter is a list of block device paths that Curtin may select from with choosing where to install the OS. **boot-partition**: ** The ``boot-partition`` parameter controls how to configure the boot partition with the following parameters: **enabled**: ** Enabled will forcibly setup a partition on the target device for booting. **format**: *<['uefi', 'gpt', 'prep', 'mbr']>* Specify the partition format. Some formats, like ``uefi`` and ``prep`` are restricted by platform characteristics. **fstype**: ** Specify the filesystem format on the boot partition. **label**: ** Specify the filesystem label on the boot partition. **Example**:: block-meta: devices: - /dev/sda - /dev/sdb boot-partition: - enabled: True format: gpt fstype: ext4 label: my-boot-partition debconf_selections ~~~~~~~~~~~~~~~~~~ Curtin will update the target with debconf set-selection values. Users will need to be familiar with the package debconf options. Users can probe a packages' debconf settings by using ``debconf-get-selections``. 
**selection_name**: ** ``debconf-set-selections`` is in the form:: **Example**:: debconf_selections: set1: | cloud-init cloud-init/datasources multiselect MAAS lxd lxd/bridge-name string lxdbr0 set2: lxd lxd/setup-bridge boolean true disable_overlayroot ~~~~~~~~~~~~~~~~~~~ Curtin disables overlayroot in the target by default. **disable_overlayroot**: ** **Example**:: disable_overlayroot: False grub ~~~~ Curtin configures grub as the target machine's boot loader. Users can control a few options to tailor how the system will boot after installation. **install_devices**: ** Specify a list of devices onto which grub will attempt to install. **replace_linux_default**: ** Controls whether grub-install will update the Linux Default target value during installation. **update_nvram**: ** Certain platforms, like ``uefi`` and ``prep`` systems utilize NVRAM to hold boot configuration settings which control the order in which devices are booted. Curtin by default will not attempt to update the NVRAM settings to preserve the system configuration. Users may want to force NVRAM to be updated such that the next boot of the system will boot from the installed device. **Example**:: grub: install_devices: - /dev/sda1 replace_linux_default: False update_nvram: True http_proxy ~~~~~~~~~~ Curtin will export ``http_proxy`` value into the installer environment. **Deprecated**: This setting is deprecated in favor of ``proxy`` below. **http_proxy**: ** **Example**:: http_proxy: http://squid.proxy:3728/ install ~~~~~~~ Configure Curtin's install options. **log_file**: ** Curtin logs install progress by default to /var/log/curtin/install.log **error_tarfile**: ** If error_tarfile is not None and curtin encounters an error, this tarfile will be created. It includes logs, configuration and system info to aid triage and bug filing. When unset, error_tarfile defaults to /var/log/curtin/curtin-logs.tar. **post_files**: ** Curtin by default will post the ``log_file`` value to any configured reporter. **save_install_config**: ** Curtin will save the merged configuration data into the target OS at the path of ``save_install_config``. This defaults to /root/curtin-install-cfg.yaml **save_install_logs**: ** Curtin will copy the install log to a specific path in the target filesystem. This defaults to /root/install.log **target**: ** Control where curtin mounts the target device for installing the OS. If this value is unset, curtin picks a suitable path under a temporary directory. If a value is set, then curtin will utilize the ``target`` value instead. **unmount**: *disabled* If this key is set to the string 'disabled' then curtin will not unmount the target filesystem when install is complete. This skips unmounting in all cases of install success or failure. **Example**:: install: log_file: /tmp/install.log error_tarfile: /var/log/curtin/curtin-error-logs.tar post_files: - /tmp/install.log - /var/log/syslog save_install_config: /root/myconf.yaml save_install_log: /var/log/curtin-install.log target: /my_mount_point unmount: disabled kernel ~~~~~~ Configure how Curtin selects which kernel to install into the target image. If ``kernel`` is not configured, Curtin will use the default mapping below and determine which ``package`` value by looking up the current release and current kernel version running. **fallback-package**: ** Specify a kernel package name to be used if the default package is not available. 
**mapping**: ** Default mapping for Releases to package names is as follows:: precise: 3.2.0: 3.5.0: -lts-quantal 3.8.0: -lts-raring 3.11.0: -lts-saucy 3.13.0: -lts-trusty trusty: 3.13.0: 3.16.0: -lts-utopic 3.19.0: -lts-vivid 4.2.0: -lts-wily 4.4.0: -lts-xenial xenial: 4.3.0: 4.4.0: **package**: ** Specify the exact package to install in the target OS. **Example**:: kernel: fallback-package: linux-image-generic package: linux-image-generic-lts-xenial mapping: - xenial: - 4.4.0: -my-custom-kernel kexec ~~~~~ Curtin can use kexec to "reboot" into the target OS. **mode**: ** Enable rebooting with kexec. **Example**:: kexec: on multipath ~~~~~~~~~ Curtin will detect and autoconfigure multipath by default to enable boot for systems with multipath. Curtin does not apply any advanced configuration or tuning, rather it uses distro defaults and provides enough configuration to enable booting. **mode**: *<['auto', ['disabled']>* Defaults to auto which will configure enough to enable booting on multipath devices. Disabled will prevent curtin from installing or configuring multipath. **overwrite_bindings**: ** If ``overwrite_bindings`` is True then Curtin will generate new bindings file for multipath, overriding any existing binding in the target image. **Example**:: multipath: mode: auto overwrite_bindings: True network ~~~~~~~ Configure networking (see Networking section for details). **network_option_1**: *