pax_global_header00006660000000000000000000000064141535047660014525gustar00rootroot0000000000000052 comment=c5ccb0bfc7c97925b76291646102a3667422da8e curtin-21.3/000077500000000000000000000000001415350476600127565ustar00rootroot00000000000000curtin-21.3/.gitignore000066400000000000000000000000621415350476600147440ustar00rootroot00000000000000*.pyc __pycache__ .tox .coverage curtin.egg-info/ curtin-21.3/HACKING.rst000066400000000000000000000076761415350476600145740ustar00rootroot00000000000000***************** Hacking on curtin ***************** This document describes how to contribute changes to curtin. It assumes you have a `Launchpad`_ account, and refers to your launchpad user as ``LP_USER`` throughout. Do these things once ==================== * To contribute, you must sign the Canonical `contributor license agreement`_ If you have already signed it as an individual, your Launchpad user will be listed in the `contributor-agreement-canonical`_ group. Unfortunately there is no easy way to check if an organization or company you are doing work for has signed. If you are unsure or have questions, email `Rick Harding ` or ping rick_h in ``#curtin`` channel via Freenode IRC. When prompted for 'Project contact' or 'Canonical Project Manager' enter 'Rick Harding'. * Configure git with your email and name for commit messages. Your name will appear in commit messages and will also be used in changelogs or release notes. Give yourself credit! Please provide a valid email address:: git config user.name "Your Name" git config user.email "Your Email" * Clone the upstream `repository`_ on Launchpad:: git clone https://git.launchpad.net/curtin cd curtin There is more information on Launchpad as a git hosting site in `Launchpad git documentation`_. * Create a new remote pointing to your personal Launchpad repository. This is equivalent to 'fork' on github. .. code:: sh git remote add LP_USER ssh://LP_USER@git.launchpad.net/~LP_USER/curtin git push LP_USER master .. _repository: https://git.launchpad.net/curtin .. _contributor license agreement: https://ubuntu.com/legal/contributors .. _contributor-agreement-canonical: https://launchpad.net/%7Econtributor-agreement-canonical/+members .. _Launchpad git documentation: https://help.launchpad.net/Code/Git Do these things for each feature or bug ======================================= * Create a new topic branch for your work:: git checkout -b my-topic-branch * Make and commit your changes (note, you can make multiple commits, fixes, more commits.):: git commit * Run unit tests and lint/formatting checks with `tox`_:: tox * Push your changes to your personal Launchpad repository:: git push -u LP_USER my-topic-branch * Use your browser to create a merge request: - Open the branch on Launchpad. - You can see a web view of your repository and navigate to the branch at: ``https://code.launchpad.net/~LP_USER/curtin/`` - It will typically be at: ``https://code.launchpad.net/~LP_USER/curtin/+git/curtin/+ref/BRANCHNAME`` Here is an example link: https://code.launchpad.net/~raharper/curtin/+git/curtin/+ref/feature/zfs-root - Click 'Propose for merging' - Select 'lp:curtin' as the target repository - Type '``master``' as the Target reference path - Click 'Propose Merge' - On the next page, hit 'Set commit message' and type a combined git style commit message. The commit message should be one summary line of less than 74 characters followed by a blank line, and then one or more paragraphs describing the change and why it was needed. 
If you have fixed a bug in your commit, reference it at the end of the message with ``LP: #XXXXXXX``. This is the message that will be used on the commit when it is sqaushed and merged into trunk. Here is an example: :: Activate the frobnicator. The frobnicator was previously inactive and now runs by default. This may save the world some day. Then, list the bugs you fixed as footers with syntax as shown here. LP: #1 Then, someone in the `curtin-dev` group will review your changes and follow up in the merge request. Feel free to ping and/or join ``#curtin`` on Freenode IRC if you have any questions. .. _tox: https://tox.readthedocs.io/en/latest/ .. _Launchpad: https://launchpad.net .. _curtin-dev: https://launchpad.net/~curtin-dev/+members#active curtin-21.3/LICENSE000066400000000000000000000012021415350476600137560ustar00rootroot00000000000000Copyright 2013 Canonical Ltd and contributors. SPDX-License-Identifier: AGPL-3.0-only Curtin is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, version 3. Curtin is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with Curtin. If not, see . curtin-21.3/LICENSE-AGPLv3000066400000000000000000001033301415350476600147550ustar00rootroot00000000000000 GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software. A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public. 
The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version. An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU Affero General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. 
A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. 
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. 
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). 
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. 
For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. 
You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Remote Network Interaction; Use with the GNU General Public License. Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation. 
If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. 
For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements. You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see . curtin-21.3/Makefile000066400000000000000000000036151415350476600144230ustar00rootroot00000000000000TOP := $(abspath $(dir $(lastword $(MAKEFILE_LIST)))) CWD := $(shell pwd) PYTHON2 ?= python2 PYTHON3 ?= python3 COVERAGE ?= 1 DEFAULT_COVERAGEOPTS = --with-coverage --cover-erase --cover-branches --cover-package=curtin --cover-inclusive ifeq ($(COVERAGE), 1) coverageopts ?= $(DEFAULT_COVERAGEOPTS) endif CURTIN_VMTEST_IMAGE_SYNC ?= False export CURTIN_VMTEST_IMAGE_SYNC noseopts ?= -vv pylintopts ?= --rcfile=pylintrc --errors-only target_dirs ?= curtin tests tools build: bin/curtin: curtin/pack.py tools/write-curtin $(PYTHON) tools/write-curtin bin/curtin check: unittest style-check: pep8 pyflakes pyflakes3 coverage: coverageopts ?= $(DEFAULT_COVERAGEOPTS) coverage: unittest pep8: @$(CWD)/tools/run-pep8 pyflakes: $(PYTHON2) -m pyflakes $(target_dirs) pyflakes3: $(PYTHON3) -m pyflakes $(target_dirs) pylint: $(PYTHON2) -m pylint $(pylintopts) $(target_dirs) pylint3: $(PYTHON3) -m pylint $(pylintopts) $(target_dirs) unittest2: $(PYTHON2) -m nose $(coverageopts) $(noseopts) tests/unittests unittest3: $(PYTHON3) -m nose $(coverageopts) $(noseopts) tests/unittests unittest: unittest2 unittest3 schema-validate: @$(CWD)/tools/schema-validate-storage docs: check-doc-deps make -C doc html check-doc-deps: @which sphinx-build && $(PYTHON) -c 'import sphinx_rtd_theme' || \ { echo "Missing doc dependencies. Install with:"; \ pkgs="python3-sphinx-rtd-theme python3-sphinx"; \ echo sudo apt-get install -qy $$pkgs ; exit 1; } # By default don't sync images when running all tests. vmtest: schema-validate $(PYTHON3) -m nose $(noseopts) tests/vmtests vmtest-deps: @$(CWD)/tools/vmtest-system-setup sync-images: @$(CWD)/tools/vmtest-sync-images integration-deps: @$(CWD)/tools/vmtest-create-static-images integration: integration-deps $(PYTHON3) -m pytest tests/integration clean: rm -rf doc/_build .PHONY: all clean test pyflakes pyflakes3 pep8 build style-check check-doc-deps curtin-21.3/README000066400000000000000000000002431415350476600136350ustar00rootroot00000000000000This is 'curtin', the curt installer. It is blunt, brief, snappish, snippety and unceremonious. Its goal is to install an operating system as quick as possible. curtin-21.3/bin/000077500000000000000000000000001415350476600135265ustar00rootroot00000000000000curtin-21.3/bin/curtin000077500000000000000000000036661415350476600147730ustar00rootroot00000000000000#!/bin/sh # This file is part of curtin. See LICENSE file for copyright and license info. PY3OR2_MAIN="curtin" PY3OR2_MCHECK="curtin.deps.check" PY3OR2_PYTHONS=${PY3OR2_PYTHONS:-"python3:python"} PYTHON=${PYTHON:-"$PY3OR2_PYTHON"} PY3OR2_DEBUG=${PY3OR2_DEBUG:-0} debug() { [ "${PY3OR2_DEBUG}" != "0" ] || return 0 echo "$@" 1>&2 } fail() { echo "$@" 1>&2; exit 1; } # if $0 is is bin/ and dirname($0)/../module exists, then prepend PYTHONPATH mydir=${0%/*} updir=${mydir%/*} if [ "${mydir#${updir}/}" = "bin" -a -d "$updir/${PY3OR2_MCHECK%%.*}" ]; then updir=$(cd "$mydir/.." 
&& pwd) case "$PYTHONPATH" in *:$updir:*|$updir:*|*:$updir) :;; *) export PYTHONPATH="$updir${PYTHONPATH:+:$PYTHONPATH}" debug "adding '$updir' to PYTHONPATH" ;; esac fi if [ ! -n "$PYTHON" ]; then first_exe="" oifs="$IFS"; IFS=":" best=0 best_exe="" [ "${PY3OR2_DEBUG}" = "0" ] && _v="" || _v="-v" for p in $PY3OR2_PYTHONS; do command -v "$p" >/dev/null 2>&1 || { debug "$p: not in path"; continue; } [ -z "$PY3OR2_MCHECK" ] && PYTHON=$p && break out=$($p -m "$PY3OR2_MCHECK" $_v -- "$@" 2>&1) && PYTHON="$p" && { debug "$p is good [$p -m $PY3OR2_MCHECK $_v -- $*]"; break; } ret=$? debug "$p [$ret]: $out" # exit code of 1 is unuseable [ $ret -eq 1 ] && continue [ -n "$first_exe" ] || first_exe="$p" # higher non-zero exit values indicate more plausible usability [ $best -lt $ret ] && best_exe="$p" && best=$ret && debug "current best: $best_exe" done IFS="$oifs" [ -z "$best_exe" -a -n "$first_exe" ] && best_exe="$first_exe" [ -n "$PYTHON" ] || PYTHON="$best_exe" [ -n "$PYTHON" ] || fail "no availble python? [PY3OR2_DEBUG=1 for more info]" fi debug "executing: $PYTHON -m \"$PY3OR2_MAIN\" $*" exec $PYTHON -m "$PY3OR2_MAIN" "$@" # vi: ts=4 expandtab syntax=sh curtin-21.3/curtin/000077500000000000000000000000001415350476600142625ustar00rootroot00000000000000curtin-21.3/curtin/__init__.py000066400000000000000000000033341415350476600163760ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. # This constant is made available so a caller can read it # it must be kept the same as that used in helpers/common:get_carryover_params KERNEL_CMDLINE_COPY_TO_INSTALL_SEP = "---" # The 'FEATURES' variable is provided so that users of curtin # can determine which features are supported. Each entry should have # a consistent meaning. FEATURES = [ # curtin supports creating swapfiles on btrfs, if possible 'BTRFS_SWAPFILE', # curtin can apply centos networking via centos_apply_network_config 'CENTOS_APPLY_NETWORK_CONFIG', # curtin can configure centos storage devices and boot devices 'CENTOS_CURTHOOK_SUPPORT', # install supports the 'network' config version 1 'NETWORK_CONFIG_V1', # reporter supports 'webhook' type 'REPORTING_EVENTS_WEBHOOK', # has storage-config schema validation 'STORAGE_CONFIG_SCHEMA', # install supports the 'storage' config version 1 'STORAGE_CONFIG_V1', # install supports the 'storage' config version 1 for DD images 'STORAGE_CONFIG_V1_DD', # has separate 'preserve' and 'wipe' config options 'STORAGE_CONFIG_SEPARATE_PRESERVE_AND_WIPE' # subcommand 'system-install' is present 'SUBCOMMAND_SYSTEM_INSTALL', # subcommand 'system-upgrade' is present 'SUBCOMMAND_SYSTEM_UPGRADE', # supports new format of apt configuration 'APT_CONFIG_V1', # has version module 'HAS_VERSION_MODULE', # uefi_reoder has fallback support if BootCurrent is missing 'UEFI_REORDER_FALLBACK_SUPPORT', # fstabs by default are output with passno = 1 if not nodev 'FSTAB_DEFAULT_FSCK_ON_BLK' ] __version__ = "21.3" # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/__main__.py000066400000000000000000000001431415350476600163520ustar00rootroot00000000000000if __name__ == '__main__': from .commands.main import main import sys sys.exit(main()) curtin-21.3/curtin/block/000077500000000000000000000000001415350476600153545ustar00rootroot00000000000000curtin-21.3/curtin/block/__init__.py000066400000000000000000001351571415350476600175010ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
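# --- Editor's illustration, not part of the upstream curtin source ----------
# The comment on FEATURES in curtin/__init__.py above says callers can use the
# list to discover what this curtin build supports.  A minimal, hedged sketch
# of such a check follows; the function name is hypothetical and only feature
# flags that appear verbatim in FEATURES are referenced.


def _example_report_curtin_features():
    """Illustrative only: report whether a known feature flag is advertised."""
    from curtin import FEATURES, __version__
    wanted = 'STORAGE_CONFIG_V1'  # one of the documented flags above
    supported = wanted in FEATURES
    return 'curtin %s: %s %s supported' % (
        __version__, wanted, 'is' if supported else 'is not')
# ----------------------------------------------------------------------------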
import re from contextlib import contextmanager import errno import itertools import os import stat import sys import tempfile from curtin import util from curtin.block import lvm from curtin.block import multipath from curtin.log import LOG from curtin.udev import udevadm_settle, udevadm_info from curtin import storage_config SECTOR_SIZE_BYTES = 512 def get_dev_name_entry(devname): """ convert device name to path in /dev """ bname = devname.split('/dev/')[-1] return (bname, "/dev/" + bname) def is_valid_device(devname): """ check if device is a valid device """ devent = get_dev_name_entry(devname)[1] return is_block_device(devent) def is_block_device(path): """ check if path is a block device """ try: return stat.S_ISBLK(os.stat(path).st_mode) except OSError as e: if not util.is_file_not_found_exc(e): raise return False def dev_short(devname): """ get short form of device name """ devname = os.path.normpath(devname) if os.path.sep in devname: return os.path.basename(devname) return devname def dev_path(devname): """ convert device name to path in /dev """ if devname.startswith('/dev/'): # it could be something like /dev/mapper/mpatha-part2 return os.path.realpath(devname) else: return '/dev/' + devname def md_path(mdname): """ Convert device name to path in /dev/md """ full_mdname = dev_path(mdname) if full_mdname.startswith('/dev/md/'): return full_mdname elif re.match(r'/dev/md\d+$', full_mdname): return full_mdname elif '/' in mdname: raise ValueError("Invalid RAID device name: {}".format(mdname)) else: return '/dev/md/{}'.format(mdname) def path_to_kname(path): """ converts a path in /dev or a path in /sys/block to the device kname, taking special devices and unusual naming schemes into account """ # if path given is a link, get real path # only do this if given a path though, if kname is already specified then # this would cause a failure where the function should still be able to run if os.path.sep in path: path = os.path.realpath(path) # using basename here ensures that the function will work given a path in # /dev, a kname, or a path in /sys/block as an arg dev_kname = os.path.basename(path) # cciss devices need to have 'cciss!' prepended if path.startswith('/dev/cciss'): dev_kname = 'cciss!' + dev_kname return dev_kname def kname_to_path(kname): """ converts a kname to a path in /dev, taking special devices and unusual naming schemes into account """ # if given something that is already a dev path, return it if os.path.exists(kname) and is_valid_device(kname): path = kname return os.path.realpath(path) # adding '/dev' to path is not sufficient to handle cciss devices and # possibly other special devices which have not been encountered yet path = os.path.realpath(os.sep.join(['/dev'] + kname.split('!'))) # make sure path we get is correct if not (os.path.exists(path) and is_valid_device(path)): raise OSError('could not get path to dev from kname: {}'.format(kname)) return path def partition_kname(disk_kname, partition_number): """ Add number to disk_kname prepending a 'p' if needed """ if disk_kname.startswith('dm-'): # device-mapper devices may create a new dm device for the partition, # e.g. multipath disk is at dm-2, new partition could be dm-11, but # linux will create a -partX symlink against the disk by-id name. 
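    # Editor's note, illustrative and not part of the upstream source: worked
    # examples of the naming rules implemented below.  Plain disks simply get
    # the partition number appended, device types whose kname ends in a digit
    # use a 'p' separator, and dm-* disks are resolved through their
    # /dev/mapper/...-partN symlink:
    #   partition_kname('sda', 1)     -> 'sda1'
    #   partition_kname('nvme0n1', 1) -> 'nvme0n1p1'
    #   partition_kname('mmcblk0', 2) -> 'mmcblk0p2'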
devpath = '/dev/' + disk_kname disk_link = get_device_mapper_links(devpath, first=True) return path_to_kname( os.path.realpath('%s-part%s' % (disk_link, partition_number))) for dev_type in ['bcache', 'nvme', 'mmcblk', 'cciss', 'mpath', 'md', 'loop']: if disk_kname.startswith(dev_type): partition_number = "p%s" % partition_number break return "%s%s" % (disk_kname, partition_number) def sysfs_to_devpath(sysfs_path): """ convert a path in /sys/class/block to a path in /dev """ path = kname_to_path(path_to_kname(sysfs_path)) if not is_block_device(path): raise ValueError('could not find blockdev for sys path: {}' .format(sysfs_path)) return path def sys_block_path(devname, add=None, strict=True): """ get path to device in /sys/class/block """ toks = ['/sys/class/block'] # insert parent dev if devname is partition devname = os.path.normpath(devname) if devname.startswith('/dev/') and not os.path.exists(devname): LOG.warning('block.sys_block_path: devname %s does not exist', devname) toks.append(path_to_kname(devname)) if add is not None: toks.append(add) path = os.sep.join(toks) if strict and not os.path.exists(path): err = OSError( "devname '{}' did not have existing syspath '{}'".format( devname, path)) err.errno = errno.ENOENT raise err return os.path.normpath(path) def get_holders(device): """ Look up any block device holders, return list of knames """ # block.sys_block_path works when given a /sys or /dev path sysfs_path = sys_block_path(device) # get holders holders = os.listdir(os.path.join(sysfs_path, 'holders')) LOG.debug("devname '%s' had holders: %s", device, holders) return holders def get_device_slave_knames(device): """ Find the underlying knames of a given device by walking sysfs recursively. Returns a list of knames """ slave_knames = [] slaves_dir_path = os.path.join(sys_block_path(device), 'slaves') # if we find a 'slaves' dir, recurse and check # the underlying devices if os.path.exists(slaves_dir_path): slaves = os.listdir(slaves_dir_path) if len(slaves) > 0: for slave_kname in slaves: slave_knames.extend(get_device_slave_knames(slave_kname)) else: slave_knames.append(path_to_kname(device)) return slave_knames else: # if a device has no 'slaves' attribute then # we've found the underlying device, return # the kname of the device return [path_to_kname(device)] def _lsblock_pairs_to_dict(lines): """ parse lsblock output and convert to dict """ ret = {} for line in lines.splitlines(): toks = util.shlex_split(line) cur = {} for tok in toks: k, v = tok.split("=", 1) if k == 'MAJ_MIN': k = 'MAJ:MIN' else: k = k.replace('_', '-') cur[k] = v # use KNAME, as NAME may include spaces and other info, # for example, lvm decices may show 'dm0 lvm1' cur['device_path'] = get_dev_name_entry(cur['KNAME'])[1] ret[cur['KNAME']] = cur return ret def _lsblock(args=None): """ get lsblock data as dict """ # lsblk --help | sed -n '/Available/,/^$/p' | # sed -e 1d -e '$d' -e 's,^[ ]\+,,' -e 's, .*,,' | sort keys = ['ALIGNMENT', 'DISC-ALN', 'DISC-GRAN', 'DISC-MAX', 'DISC-ZERO', 'FSTYPE', 'GROUP', 'KNAME', 'LABEL', 'LOG-SEC', 'MAJ:MIN', 'MIN-IO', 'MODE', 'MODEL', 'MOUNTPOINT', 'NAME', 'OPT-IO', 'OWNER', 'PHY-SEC', 'RM', 'RO', 'ROTA', 'RQ-SIZE', 'SCHED', 'SIZE', 'STATE', 'TYPE', 'UUID'] if args is None: args = [] args = [x.replace('!', '/') for x in args] # in order to avoid a very odd error with '-o' and all output fields above # we just drop one. doesn't really matter which one. 
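    # Editor's note, illustrative and not part of the upstream source: with
    # --noheadings --bytes --pairs, lsblk prints one KEY="VALUE" line per
    # device, for example:
    #   KNAME="sda" TYPE="disk" SIZE="10737418240" MOUNTPOINT="" ...
    # _lsblock_pairs_to_dict() above shlex-splits each line and keys the
    # result on KNAME, so _lsblock() ends up returning something like
    # {'sda': {'KNAME': 'sda', 'TYPE': 'disk', ..., 'device_path': '/dev/sda'}}.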
keys.remove('SCHED') basecmd = ['lsblk', '--noheadings', '--bytes', '--pairs', '--output=' + ','.join(keys)] (out, _err) = util.subp(basecmd + list(args), capture=True) out = out.replace('!', '/') return _lsblock_pairs_to_dict(out) def sfdisk_info(devpath): ''' returns dict of sfdisk info about disk partitions { "label": "gpt", "id": "877716F7-31D0-4D56-A1ED-4D566EFE418E", "device": "/dev/vda", "unit": "sectors", "firstlba": 34, "lastlba": 41943006, "partitions": [ {"node": "/dev/vda1", "start": 227328, "size": 41715679, "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4", "uuid": "60541CAF-E2AC-48CD-BF89-AF16051C833F"}, ] } { "label":"dos", "id":"0xb0dbdde1", "device":"/dev/vdb", "unit":"sectors", "partitions": [ {"node":"/dev/vdb1", "start":2048, "size":8388608, "type":"83", "bootable":true}, {"node":"/dev/vdb2", "start":8390656, "size":8388608, "type":"83"}, {"node":"/dev/vdb3", "start":16779264, "size":62914560, "type":"5"}, {"node":"/dev/vdb5", "start":16781312, "size":31457280, "type":"83"}, {"node":"/dev/vdb6", "start":48240640, "size":10485760, "type":"83"}, {"node":"/dev/vdb7", "start":58728448, "size":20965376, "type":"83"} ] } ''' (parent, partnum) = get_blockdev_for_partition(devpath) try: (out, _err) = util.subp(['sfdisk', '--json', parent], capture=True) except util.ProcessExecutionError as e: out = None LOG.exception(e) if out is not None: return util.load_json(out).get('partitiontable', {}) return {} def get_partition_sfdisk_info(devpath, sfdisk_info=None): if not sfdisk_info: sfdisk_info = sfdisk_info(devpath) entry = [part for part in sfdisk_info['partitions'] if os.path.realpath(part['node']) == os.path.realpath(devpath)] if len(entry) != 1: raise RuntimeError('Device %s not present in sfdisk dump:\n%s' % devpath, util.json_dumps(sfdisk_info)) return entry.pop() def dmsetup_info(devname): ''' returns dict of info about device mapper dev. {'blkdevname': 'dm-0', 'blkdevs_used': 'sda5', 'name': 'sda5_crypt', 'subsystem': 'CRYPT', 'uuid': 'CRYPT-LUKS1-2b370697149743b0b2407d11f88311f1-sda5_crypt' } ''' _SEP = '=' fields = ('name,uuid,blkdevname,blkdevs_used,subsystem'.split(',')) try: (out, _err) = util.subp(['dmsetup', 'info', devname, '-C', '-o', ','.join(fields), '--noheading', '--separator', _SEP], capture=True) except util.ProcessExecutionError as e: LOG.error('Failed to run dmsetup info: %s', e) return {} values = out.strip().split(_SEP) info = dict(zip(fields, values)) return info def get_unused_blockdev_info(): """ return a list of unused block devices. These are devices that do not have anything mounted on them. """ # get a list of top level block devices, then iterate over it to get # devices dependent on those. If the lsblk call for that specific # call has nothing 'MOUNTED", then this is an unused block device bdinfo = _lsblock(['--nodeps']) unused = {} for devname, data in bdinfo.items(): cur = _lsblock([data['device_path']]) mountpoints = [x for x in cur if cur[x].get('MOUNTPOINT')] if len(mountpoints) == 0: unused[devname] = data return unused def get_devices_for_mp(mountpoint): """ return a list of devices (full paths) used by the provided mountpoint """ bdinfo = _lsblock() found = set() for devname, data in bdinfo.items(): if data['MOUNTPOINT'] == mountpoint: found.add(data['device_path']) if found: return list(found) # for some reason, on some systems, lsblk does not list mountpoint # for devices that are mounted. This happens on /dev/vdc1 during a run # using tools/launch. 
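    # Editor's note, illustrative and not part of the upstream source: the
    # fallback below re-derives the answer from /proc/mounts, so a call such
    # as get_devices_for_mp('/boot') can still return something like
    # ['/dev/vda1'] even when lsblk leaves MOUNTPOINT empty for that device.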
mountpoint = [os.path.realpath(dev) for (dev, mp, vfs, opts, freq, passno) in get_proc_mounts() if mp == mountpoint] return mountpoint def get_installable_blockdevs(include_removable=False, min_size=1024**3): """ find blockdevs suitable for installation """ good = [] unused = get_unused_blockdev_info() for devname, data in unused.items(): if not include_removable and data.get('RM') == "1": continue if data.get('RO') != "0" or data.get('TYPE') != "disk": continue if min_size is not None and int(data.get('SIZE', '0')) < min_size: continue good.append(devname) return good def get_blockdev_for_partition(devpath, strict=True): """ find the parent device for a partition. returns a tuple of the parent block device and the partition number if device is not a partition, None will be returned for partition number """ # normalize path rpath = os.path.realpath(devpath) # convert an entry in /dev/ to parent disk and partition number # if devpath is a block device and not a partition, return (devpath, None) base = '/sys/class/block' # input of /dev/vdb, /dev/disk/by-label/foo, /sys/block/foo, # /sys/block/class/foo, or just foo syspath = os.path.join(base, path_to_kname(devpath)) # don't need to try out multiple sysfs paths as path_to_kname handles cciss if strict and not os.path.exists(syspath): raise OSError("%s had no syspath (%s)" % (devpath, syspath)) if rpath.startswith('/dev/dm-'): parent_info = multipath.mpath_partition_to_mpath_id_and_partnumber( rpath) if parent_info is not None: mpath_id, ptnum = parent_info return os.path.realpath('/dev/mapper/' + mpath_id), ptnum ptpath = os.path.join(syspath, "partition") if not os.path.exists(ptpath): return (rpath, None) ptnum = util.load_file(ptpath).rstrip() # for a partition, real syspath is something like: # /sys/devices/pci0000:00/0000:00:04.0/virtio1/block/vda/vda1 rsyspath = os.path.realpath(syspath) disksyspath = os.path.dirname(rsyspath) diskmajmin = util.load_file(os.path.join(disksyspath, "dev")).rstrip() diskdevpath = os.path.realpath("/dev/block/%s" % diskmajmin) # diskdevpath has something like 253:0 # and udev has put links in /dev/block/253:0 to the device name in /dev/ return (diskdevpath, ptnum) def get_sysfs_partitions(device): """ get a list of sysfs paths for partitions under a block device accepts input as a device kname, sysfs path, or dev path returns empty list if no partitions available """ sysfs_path = sys_block_path(device) return [sys_block_path(kname) for kname in os.listdir(sysfs_path) if os.path.exists(os.path.join(sysfs_path, kname, 'partition'))] def get_pardevs_on_blockdevs(devs): """ return a dict of partitions with their info that are on provided devs """ if devs is None: devs = [] devs = [get_dev_name_entry(d)[1] for d in devs] found = _lsblock(devs) ret = {} for short in found: if found[short]['device_path'] not in devs: ret[short] = found[short] return ret def stop_all_unused_multipath_devices(): """ Stop all unused multipath devices. """ multipath = util.which('multipath') # Command multipath is not available only when multipath-tools package # is not installed. Nothing needs to be done in this case because system # doesn't create multipath devices without this package installed and we # have nothing to stop. 
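    # Editor's note, illustrative and not part of the upstream source: the
    # code below is roughly equivalent to running, by hand and only when
    # multipath-tools is installed:
    #   multipath -F    # flush every unused multipath device map
    # with an exit code of 1 (some maps still in use) tolerated via
    # rcs=[0, 1].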
if not multipath: return # Command multipath -F flushes all unused multipath device maps cmd = [multipath, '-F'] try: # unless multipath cleared *everything* it will exit with 1 util.subp(cmd, rcs=[0, 1]) except util.ProcessExecutionError as e: LOG.warn("Failed to stop multipath devices: %s", e) def rescan_block_devices(devices=None, warn_on_fail=True): """ run 'blockdev --rereadpt' for all block devices not currently mounted """ if not devices: unused = get_unused_blockdev_info() devices = [] for devname, data in unused.items(): if data.get('RM') == "1": continue if data.get('RO') != "0" or data.get('TYPE') != "disk": continue devices.append(data['device_path']) if not devices: LOG.debug("no devices found to rescan") return # blockdev needs /dev/ parameters, convert if needed cmd = ['blockdev', '--rereadpt'] + [dev if dev.startswith('/dev/') else sysfs_to_devpath(dev) for dev in devices] try: util.subp(cmd, capture=True) except util.ProcessExecutionError as e: if warn_on_fail: # FIXME: its less than ideal to swallow this error, but until # we fix LP: #1489521 we kind of need to. LOG.warn( "Error rescanning devices, possibly known issue LP: #1489521") # Reformatting the exception output so as to not trigger # vmtest scanning for Unexepected errors in install logfile LOG.warn("cmd: %s\nstdout:%s\nstderr:%s\nexit_code:%s", e.cmd, e.stdout, e.stderr, e.exit_code) udevadm_settle() return def blkid(devs=None, cache=True): """ get data about block devices from blkid and convert to dict """ if devs is None: devs = [] # 14.04 blkid reads undocumented /dev/.blkid.tab # man pages mention /run/blkid.tab and /etc/blkid.tab if not cache: cfiles = ("/run/blkid/blkid.tab", "/dev/.blkid.tab", "/etc/blkid.tab") for cachefile in cfiles: if os.path.exists(cachefile): os.unlink(cachefile) cmd = ['blkid', '-o', 'full'] cmd.extend(devs) # blkid output is : KEY=VALUE # where KEY is TYPE, UUID, PARTUUID, LABEL out, err = util.subp(cmd, capture=True) data = {} for line in out.splitlines(): curdev, curdata = line.split(":", 1) data[curdev] = dict(tok.split('=', 1) for tok in util.shlex_split(curdata)) return data def _legacy_detect_multipath(target_mountpoint=None): """ Detect if the operating system has been installed to a multipath device. """ # The obvious way to detect multipath is to use multipath utility which is # provided by the multipath-tools package. Unfortunately, multipath-tools # package is not available in all ephemeral images hence we can't use it. # Another reasonable way to detect multipath is to look for two (or more) # devices with the same World Wide Name (WWN) which can be fetched using # scsi_id utility. This way doesn't work as well because WWNs are not # unique in some cases which leads to false positives which may prevent # system from booting (see LP: #1463046 for details). # Taking into account all the issues mentioned above, curent implementation # detects multipath by looking for a filesystem with the same UUID # as the target device. It relies on the fact that all alternative routes # to the same disk observe identical partition information including UUID. # There are some issues with this approach as well though. We won't detect # multipath disk if it doesn't any filesystems. Good news is that # target disk will always have a filesystem because curtin creates them # while installing the system. rescan_block_devices() binfo = blkid(cache=False) LOG.debug("legacy_detect_multipath found blkid info: %s", binfo) # get_devices_for_mp may return multiple devices by design. 
It is not yet # implemented but it should return multiple devices when installer creates # separate disk partitions for / and /boot. We need to do UUID-based # multipath detection against each of target devices. target_devs = get_devices_for_mp(target_mountpoint) LOG.debug("target_devs: %s" % target_devs) for devpath, data in binfo.items(): # We need to figure out UUID of the target device first if devpath not in target_devs: continue # This entry contains information about one of target devices target_uuid = data.get('UUID') # UUID-based multipath detection won't work if target partition # doesn't have UUID assigned if not target_uuid: LOG.warn("Target partition %s doesn't have UUID assigned", devpath) continue LOG.debug("%s: %s" % (devpath, data.get('UUID', ""))) # Iterating over available devices to see if any other device # has the same UUID as the target device. If such device exists # we probably installed the system to the multipath device. for other_devpath, other_data in binfo.items(): if ((other_data.get('UUID') == target_uuid) and (other_devpath != devpath)): return True # No other devices have the same UUID as the target devices. # We probably installed the system to the non-multipath device. return False def _device_is_multipathed(devpath): devpath = os.path.realpath(devpath) info = udevadm_info(devpath) if multipath.is_mpath_device(devpath, info=info): return True if multipath.is_mpath_partition(devpath, info=info): return True if devpath.startswith('/dev/dm-'): # check members of composed devices (LVM, dm-crypt) if 'DM_LV_NAME' in info: volgroup = info.get('DM_VG_NAME') if volgroup: if any((multipath.is_mpath_member(pv) for pv in lvm.get_pvols_in_volgroup(volgroup))): return True elif devpath.startswith('/dev/md'): if any((multipath.is_mpath_member(md) for md in md_get_devices_list(devpath) + md_get_spares_list(devpath))): return True result = multipath.is_mpath_member(devpath) return result def _md_get_members_list(devpath, state_check): md_dev, _partno = get_blockdev_for_partition(devpath) sysfs_md = sys_block_path(md_dev, "md") return [ dev_path(dev[4:]) for dev in os.listdir(sysfs_md) if (dev.startswith('dev-') and state_check( util.load_file(os.path.join(sysfs_md, dev, 'state')).strip()))] def md_get_spares_list(devpath): def state_is_spare(state): return (state == 'spare') return _md_get_members_list(devpath, state_is_spare) def md_get_devices_list(devpath): def state_is_not_spare(state): return (state != 'spare') return _md_get_members_list(devpath, state_is_not_spare) def detect_multipath(target_mountpoint=None): if multipath.multipath_supported(): for device in (os.path.realpath(dev) for (dev, _mp, _vfs, _opts, _freq, _passno) in get_proc_mounts() if dev.startswith('/dev/')): if not is_block_device(device): # A tmpfs can be mounted with any old junk in the "device" # field and unfortunately casper sometimes puts "/dev/shm" # there, which is usually a directory. Ignore such cases. # (See https://bugs.launchpad.net/bugs/1876626) continue if _device_is_multipathed(device): return device return _legacy_detect_multipath(target_mountpoint) def get_scsi_wwid(device, replace_whitespace=False): """ Issue a call to scsi_id utility to get WWID of the device. 
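    Example (illustrative only; device and WWID values are hypothetical):
        get_scsi_wwid('/dev/sda')
        # -> '0QEMU_QEMU_HARDDISK_drive-scsi0', or None if scsi_id fails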
""" cmd = ['/lib/udev/scsi_id', '--whitelisted', '--device=%s' % device] if replace_whitespace: cmd.append('--replace-whitespace') try: (out, err) = util.subp(cmd, capture=True) LOG.debug("scsi_id output raw:\n%s\nerror:\n%s", out, err) scsi_wwid = out.rstrip('\n') return scsi_wwid except util.ProcessExecutionError as e: LOG.warn("Failed to get WWID: %s", e) return None def get_multipath_wwids(): """ Get WWIDs of all multipath devices available in the system. """ multipath_devices = set() multipath_wwids = set() devuuids = [(d, i['UUID']) for d, i in blkid().items() if 'UUID' in i] # Looking for two disks which contain filesystems with the same UUID. for (dev1, uuid1), (dev2, uuid2) in itertools.combinations(devuuids, 2): if uuid1 == uuid2: multipath_devices.add(get_blockdev_for_partition(dev1)[0]) for device in multipath_devices: wwid = get_scsi_wwid(device) # Function get_scsi_wwid() may return None in case of errors or # WWID field may be empty for some buggy disk. We don't want to # propagate both of these value further to avoid generation of # incorrect /etc/multipath/bindings file. if wwid: multipath_wwids.add(wwid) return multipath_wwids def get_root_device(dev, paths=None): """ Get root partition for specified device, based on presence of any paths in the provided paths list: """ if paths is None: paths = ["curtin"] LOG.debug('Searching for filesystem on %s containing one of: %s', dev, paths) partitions = get_pardevs_on_blockdevs(dev) target = None tmp_mount = tempfile.mkdtemp() for i in partitions: dev_path = partitions[i]['device_path'] mp = None try: util.do_mount(dev_path, tmp_mount) mp = tmp_mount for path in paths: fullpath = os.path.join(tmp_mount, path) if os.path.isdir(fullpath): target = dev_path LOG.debug("Found path '%s' on device '%s'", path, dev_path) break except Exception: pass finally: if mp: util.do_umount(mp) os.rmdir(tmp_mount) if target is None: raise ValueError( "Did not find any filesystem on %s that contained one of %s" % (dev, paths)) return target def get_blockdev_sector_size(devpath): """ Get the logical and physical sector size of device at devpath Returns a tuple of integer values (logical, physical). """ info = {} try: info = _lsblock([devpath]) except util.ProcessExecutionError as e: # raise on all errors except device missing error if str(e.exit_code) != "32": raise if info: LOG.debug('get_blockdev_sector_size: info:\n%s', util.json_dumps(info)) # (LP: 1598310) The call to _lsblock() may return multiple results. # If it does, then search for a result with the correct device path. 
# If no such device is found among the results, then fall back to # previous behavior, which was taking the first of the results assert len(info) > 0 for (k, v) in info.items(): if v.get('device_path') == devpath: parent = k break else: parent = list(info.keys())[0] logical = info[parent]['LOG-SEC'] physical = info[parent]['PHY-SEC'] else: sys_path = sys_block_path(devpath) logical = util.load_file( os.path.join(sys_path, 'queue/logical_block_size')) physical = util.load_file( os.path.join(sys_path, 'queue/hw_sector_size')) LOG.debug('get_blockdev_sector_size: (log=%s, phys=%s)', logical, physical) return (int(logical), int(physical)) def read_sys_block_size_bytes(device): """ /sys/class/block//size and return integer value in bytes""" device_dir = os.path.join('/sys/class/block', os.path.basename(device)) blockdev_size = os.path.join(device_dir, 'size') with open(blockdev_size) as d: size = int(d.read().strip()) * SECTOR_SIZE_BYTES return size def get_volume_uuid(path): """ Get uuid of disk with given path. This address uniquely identifies the device and remains consistant across reboots """ (out, _err) = util.subp(["blkid", "-o", "export", path], capture=True) for line in out.splitlines(): if "UUID" in line: return line.split('=')[-1] return '' def get_mountpoints(): """ Returns a list of all mountpoints where filesystems are currently mounted. """ info = _lsblock() proc_mounts = [mp for (dev, mp, vfs, opts, freq, passno) in get_proc_mounts()] lsblock_mounts = list(i.get("MOUNTPOINT") for name, i in info.items() if i.get("MOUNTPOINT") is not None and i.get("MOUNTPOINT") != "") return list(set(proc_mounts + lsblock_mounts)) def get_proc_mounts(): """ Returns a list of tuples for each entry in /proc/mounts """ mounts = [] with open("/proc/mounts", "r") as fp: for line in fp: try: (dev, mp, vfs, opts, freq, passno) = \ line.strip().split(None, 5) mounts.append((dev, mp, vfs, opts, freq, passno)) except ValueError: continue return mounts def _get_dev_disk_by_prefix(prefix): """ Construct a dictionary mapping devname to disk/ paths :returns: Dictionary populated by examining /dev/disk//* { '/dev/sda': '/dev/disk//virtio-aaaa', '/dev/sda1': '/dev/disk//virtio-aaaa-part1', } """ if not os.path.exists(prefix): return {} return { os.path.realpath(bypfx): bypfx for bypfx in [os.path.join(prefix, path) for path in os.listdir(prefix)] } def get_dev_disk_byid(): """ Construct a dictionary mapping devname to disk/by-id paths :returns: Dictionary populated by examining /dev/disk/by-id/* { '/dev/sda': '/dev/disk/by-id/virtio-aaaa', '/dev/sda1': '/dev/disk/by-id/virtio-aaaa-part1', } """ return _get_dev_disk_by_prefix('/dev/disk/by-id') def disk_to_byid_path(kname): """" Return a /dev/disk/by-id path to kname if present. """ mapping = get_dev_disk_byid() return mapping.get(dev_path(kname)) def disk_to_bypath_path(kname): """" Return a /dev/disk/by-path path to kname if present. """ mapping = _get_dev_disk_by_prefix('/dev/disk/by-path') return mapping.get(dev_path(kname)) def get_device_mapper_links(devpath, first=False): """ Return the best devlink to device at devpath. 
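    Example (device and symlink names are purely illustrative):
        get_device_mapper_links('/dev/dm-0', first=True)
        # -> first entry of the sorted DEVLINKS list, e.g.
        #    '/dev/disk/by-id/dm-name-vg0-lvroot'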
""" info = udevadm_info(devpath) if 'DEVLINKS' not in info: raise ValueError('Device %s does not have device symlinks' % devpath) devlinks = [devlink for devlink in sorted(info['DEVLINKS']) if devlink] if not devlinks: raise ValueError('Unexpected DEVLINKS list contained empty values') if first: return devlinks[0] return devlinks def lookup_disk(serial): """ Search for a disk by its serial number using /dev/disk/by-id/ """ # Get all volumes in /dev/disk/by-id/ containing the serial string. The # string specified can be either in the short or long serial format # hack, some serials have spaces, udev usually converts ' ' -> '_' serial_udev = serial.replace(' ', '_') LOG.info('Processing serial %s via udev to %s', serial, serial_udev) disks = list(filter(lambda x: serial_udev in x, os.listdir("/dev/disk/by-id/"))) if not disks or len(disks) < 1: raise ValueError("no disk with serial '%s' found" % serial_udev) # Sort by length and take the shortest path name, as the longer path names # will be the partitions on the disk. Then use os.path.realpath to # determine the path to the block device in /dev/ disks.sort(key=lambda x: len(x)) LOG.debug('lookup_disks found: %s', disks) path = os.path.realpath("/dev/disk/by-id/%s" % disks[0]) # /dev/dm-X if multipath.is_mpath_device(path): info = udevadm_info(path) path = os.path.join('/dev/mapper', info['DM_NAME']) # /dev/sdX elif multipath.is_mpath_member(path): mp_name = multipath.find_mpath_id_by_path(path) path = os.path.join('/dev/mapper', mp_name) if not os.path.exists(path): raise ValueError("path '%s' to block device for disk with serial '%s' \ does not exist" % (path, serial_udev)) LOG.debug('block.lookup_disk() returning path %s', path) return path def lookup_dasd(bus_id): """ Search for a dasd by its bus_id. 
:param bus_id: s390x ccw bus_id 0.0.NNNN specifying the dasd :returns: dasd kernel device path (/dev/dasda) """ LOG.info('Processing ccw bus_id %s', bus_id) sys_ccw_dev = '/sys/bus/ccw/devices/%s/block' % bus_id if not os.path.exists(sys_ccw_dev): raise ValueError('Failed to find a block device at %s' % sys_ccw_dev) dasds = os.listdir(sys_ccw_dev) if not dasds or len(dasds) < 1: raise ValueError("no dasd with device_id '%s' found" % bus_id) path = '/dev/%s' % dasds[0] if not os.path.exists(path): raise ValueError("path '%s' to block device for dasd with bus_id '%s' \ does not exist" % (path, bus_id)) return path def sysfs_partition_data(blockdev=None, sysfs_path=None): # given block device or sysfs_path, return a list of tuples # of (kernel_name, number, offset, size) if blockdev: blockdev = os.path.normpath(blockdev) sysfs_path = sys_block_path(blockdev) elif sysfs_path: # use normpath to ensure that paths with trailing slash work sysfs_path = os.path.normpath(sysfs_path) blockdev = os.path.join('/dev', os.path.basename(sysfs_path)) else: raise ValueError("Blockdev and sysfs_path cannot both be None") # queue property is only on parent devices, ie, we can't read # /sys/class/block/vda/vda1/queue/* as queue is only on the # parent device sysfs_prefix = sysfs_path (parent, partnum) = get_blockdev_for_partition(blockdev) if partnum: sysfs_prefix = sys_block_path(parent) partnum = int(partnum) block_size = int(util.load_file(os.path.join( sysfs_prefix, 'queue/logical_block_size'))) unit = block_size ptdata = [] for part_sysfs in get_sysfs_partitions(sysfs_prefix): data = {} for sfile in ('partition', 'start', 'size'): dfile = os.path.join(part_sysfs, sfile) if not os.path.isfile(dfile): continue data[sfile] = int(util.load_file(dfile)) if partnum is None or data['partition'] == partnum: ptdata.append((path_to_kname(part_sysfs), data['partition'], data['start'] * unit, data['size'] * unit,)) return ptdata def get_part_table_type(device): """ check the type of partition table present on the specified device returns None if no ptable was present or device could not be read """ # it is neccessary to look for the gpt signature first, then the dos # signature, because a gpt formatted disk usually has a valid mbr to # protect the disk from being modified by older partitioning tools return ('gpt' if check_efi_signature(device) else 'dos' if check_dos_signature(device) else 'vtoc' if check_vtoc_signature(device) else None) def check_dos_signature(device): """ check if there is a dos partition table signature present on device """ # the last 2 bytes of a dos partition table have the signature with the # value 0xAA55. the dos partition table is always 0x200 bytes long, even if # the underlying disk uses a larger logical block size, so the start of # this signature must be at 0x1fe # https://en.wikipedia.org/wiki/Master_boot_record#Sector_layout devname = dev_path(path_to_kname(device)) return (is_block_device(devname) and util.file_size(devname) >= 0x200 and (util.load_file(devname, decode=False, read_len=2, offset=0x1fe) == b'\x55\xAA')) def check_efi_signature(device): """ check if there is a gpt partition table signature present on device """ # the gpt partition table header is always on lba 1, regardless of the # logical block size used by the underlying disk. therefore, a static # offset cannot be used, the offset to the start of the table header is # always the sector size of the disk # the start of the gpt partition table header shoult have the signaure # 'EFI PART'. 
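    # the check below therefore reads 8 bytes at an offset equal to the
    # device's logical sector size and compares them against b'EFI PART'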
# https://en.wikipedia.org/wiki/GUID_Partition_Table devname = dev_path(path_to_kname(device)) sector_size = get_blockdev_sector_size(devname)[0] return (is_block_device(devname) and util.file_size(devname) >= 2 * sector_size and (util.load_file(devname, decode=False, read_len=8, offset=sector_size) == b'EFI PART')) def check_vtoc_signature(device): """ check if the specified device has a vtoc partition table. """ devname = dev_path(path_to_kname(device)) try: util.subp(['fdasd', '--table', devname]) except util.ProcessExecutionError: return False return True def is_extended_partition(device): """ check if the specified device path is a dos extended partition """ # an extended partition must be on a dos disk, must be a partition, must be # within the first 4 partitions and will have a valid dos signature, # because the format of the extended partition matches that of a real mbr (parent_dev, part_number) = get_blockdev_for_partition(device) return (get_part_table_type(parent_dev) in ['dos', 'msdos'] and part_number is not None and int(part_number) <= 4 and check_dos_signature(device)) def is_zfs_member(device): """ check if the specified device path is a zfs member """ info = _lsblock() kname = path_to_kname(device) if kname in info and info[kname].get('FSTYPE') == 'zfs_member': return True return False def is_online(device): """ check if device is online """ sys_path = sys_block_path(device) device_size = util.load_file( os.path.join(sys_path, 'size')) # a block device should have non-zero size to be usable return int(device_size) > 0 def zkey_supported(strict=True): """ Return True if zkey cmd present and can generate keys, else False.""" LOG.debug('Checking if zkey encryption is supported...') try: util.load_kernel_module('pkey') except util.ProcessExecutionError as err: msg = "Failed to load 'pkey' kernel module" LOG.error(msg + ": %s" % err) if strict else LOG.warning(msg) return False try: with tempfile.NamedTemporaryFile() as tf: util.subp(['zkey', 'generate', tf.name], capture=True) LOG.debug('zkey encryption supported.') return True except util.ProcessExecutionError as err: msg = "zkey not supported" LOG.error(msg + ": %s" % err) if strict else LOG.warning(msg) return False @contextmanager def exclusive_open(path, exclusive=True): """ Obtain an exclusive file-handle to the file/device specified unless caller specifics exclusive=False. """ mode = 'rb+' fd = None if not os.path.exists(path): raise ValueError("No such file at path: %s" % path) flags = os.O_RDWR if exclusive: flags += os.O_EXCL try: fd = os.open(path, flags) try: fd_needs_closing = True with os.fdopen(fd, mode) as fo: yield fo fd_needs_closing = False except OSError: LOG.exception("Failed to create file-object from fd") raise finally: # python2 leaves fd open if there os.fdopen fails if fd_needs_closing and sys.version_info.major == 2: os.close(fd) except OSError: LOG.error("Failed to exclusively open path: %s", path) holders = get_holders(path) LOG.error('Device holders with exclusive access: %s', holders) mount_points = util.list_device_mounts(path) LOG.error('Device mounts: %s', mount_points) fusers = util.fuser_mount(path) LOG.error('Possible users of %s:\n%s', path, fusers) raise def wipe_file(path, reader=None, buflen=4 * 1024 * 1024, exclusive=True): """ wipe the existing file at path. if reader is provided, it will be called as a 'reader(buflen)' to provide data for each write. Otherwise, zeros are used. writes will be done in size of buflen. 
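    Example (illustrative path; zeroes the file, or fills it from a reader):
        wipe_file('/tmp/disk.img')
        with open('/dev/urandom', 'rb') as rnd:
            wipe_file('/tmp/disk.img', reader=rnd.read)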
""" if reader: readfunc = reader else: buf = buflen * b'\0' def readfunc(size): return buf size = util.file_size(path) LOG.debug("%s is %s bytes. wiping with buflen=%s", path, size, buflen) with exclusive_open(path, exclusive=exclusive) as fp: while True: pbuf = readfunc(buflen) pos = fp.tell() if len(pbuf) != buflen and len(pbuf) + pos < size: raise ValueError( "short read on reader got %d expected %d after %d" % (len(pbuf), buflen, pos)) if pos + buflen >= size: fp.write(pbuf[0:size-pos]) break else: fp.write(pbuf) def quick_zero(path, partitions=True, exclusive=True): """ zero 1M at front, 1M at end, and 1M at front if this is a block device and partitions is true, then zero 1M at front and end of each partition. """ buflen = 1024 count = 1024 zero_size = buflen * count offsets = [0, -zero_size] is_block = is_block_device(path) if not (is_block or os.path.isfile(path)): raise ValueError("%s: not an existing file or block device", path) pt_names = [] if partitions and is_block: ptdata = sysfs_partition_data(path) for kname, ptnum, start, size in ptdata: pt_names.append((dev_path(kname), kname, ptnum)) pt_names.reverse() for (pt, kname, ptnum) in pt_names: LOG.debug('Wiping path: dev:%s kname:%s partnum:%s', pt, kname, ptnum) quick_zero(pt, partitions=False) LOG.debug("wiping 1M on %s at offsets %s", path, offsets) return zero_file_at_offsets(path, offsets, buflen=buflen, count=count, exclusive=exclusive) def zero_file_at_offsets(path, offsets, buflen=1024, count=1024, strict=False, exclusive=True): """ write zeros to file at specified offsets """ bmsg = "{path} (size={size}): " m_short = bmsg + "{tot} bytes from {offset} > size." m_badoff = bmsg + "invalid offset {offset}." if not strict: m_short += " Shortened to {wsize} bytes." m_badoff += " Skipping." buf = b'\0' * buflen tot = buflen * count msg_vals = {'path': path, 'tot': buflen * count} # allow caller to control if we require exclusive open with exclusive_open(path, exclusive=exclusive) as fp: # get the size by seeking to end. fp.seek(0, 2) size = fp.tell() msg_vals['size'] = size for offset in offsets: if offset < 0: pos = size + offset else: pos = offset msg_vals['offset'] = offset msg_vals['pos'] = pos if pos > size or pos < 0: if strict: raise ValueError(m_badoff.format(**msg_vals)) else: LOG.debug(m_badoff.format(**msg_vals)) continue msg_vals['wsize'] = size - pos if pos + tot > size: if strict: raise ValueError(m_short.format(**msg_vals)) else: LOG.debug(m_short.format(**msg_vals)) fp.seek(pos) for i in range(count): pos = fp.tell() if pos + buflen > size: fp.write(buf[0:size-pos]) else: fp.write(buf) def wipe_volume(path, mode="superblock", exclusive=True): """wipe a volume/block device :param path: a path to a block device :param mode: how to wipe it. pvremove: wipe a lvm physical volume zero: write zeros to the entire volume random: write random data (/dev/urandom) to the entire volume superblock: zero the beginning and the end of the volume superblock-recursive: zero the beginning of the volume, the end of the volume and beginning and end of any partitions that are known to be on this device. :param exclusive: boolean to control how path is opened """ if mode == "pvremove": # We need to use --force --force in case it's already in a volgroup and # pvremove doesn't want to remove it # If pvremove is run and there is no label on the system, # then it exits with 5. 
That is also okay, because we might be # wiping something that is already blank util.subp(['pvremove', '--force', '--force', '--yes', path], rcs=[0, 5], capture=True) lvm.lvm_scan() elif mode == "zero": wipe_file(path, exclusive=exclusive) elif mode == "random": with open("/dev/urandom", "rb") as reader: wipe_file(path, reader=reader.read, exclusive=exclusive) elif mode == "superblock": quick_zero(path, partitions=False, exclusive=exclusive) elif mode == "superblock-recursive": quick_zero(path, partitions=True, exclusive=exclusive) else: raise ValueError("wipe mode %s not supported" % mode) def get_supported_filesystems(): """ Return a list of filesystems that the kernel currently supports as read from /proc/filesystems. Raises RuntimeError if /proc/filesystems does not exist. """ proc_fs = "/proc/filesystems" if not os.path.exists(proc_fs): raise RuntimeError("Unable to read 'filesystems' from %s" % proc_fs) return [line.split('\t')[1].strip() for line in util.load_file(proc_fs).splitlines()] def _discover_get_probert_data(): try: LOG.debug('Importing probert prober') from probert import prober except Exception: LOG.error('Failed to import probert, discover disabled') return {} probe = prober.Prober() LOG.debug('Probing system for storage devices') probe.probe_storage() return probe.get_results() def discover(): probe_data = _discover_get_probert_data() if 'storage' not in probe_data: raise ValueError('Probing storage failed') LOG.debug('Extracting storage config from discovered devices') try: return storage_config.extract_storage_config(probe_data.get('storage')) except ImportError as e: LOG.exception(e) return {} # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/bcache.py000066400000000000000000000455531415350476600171470ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import errno import os import time from curtin import util from curtin.log import LOG from curtin.udev import udevadm_settle from . import dev_path, sys_block_path # Wait up to 20 minutes (150 + 300 + 750 = 1200 seconds) BCACHE_RETRIES = [sleep for nap in [1, 2, 5] for sleep in [nap] * 150] BCACHE_REGISTRATION_RETRY = [0.2] * 60 def superblock_asdict(device=None, data=None): """ Convert output from bcache-super-show into a dictionary""" if not device and not data: raise ValueError('Supply a device name, or data to parse') if not data: try: data, _err = util.subp(['bcache-super-show', device], capture=True) except util.ProcessExecutionError as e: LOG.debug('Failed to parse bcache superblock on %s:%s', device, e) return None bcache_super = {} for line in data.splitlines(): if not line: continue values = [val for val in line.split('\t') if val] bcache_super.update({values[0]: values[1]}) return bcache_super def parse_sb_version(device=None, sbdict=None): """ Parse bcache 'sb_version' field to integer if possible. 
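    Example (illustrative device; requires bcache-tools to be installed):
        parse_sb_version('/dev/vdb')
        # -> 1 for a backing device, 3 for a caching device,
        #    None if no superblock could be read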
""" if not device and not sbdict: raise ValueError('Supply a device name or bcache superblock dict') if not sbdict: sbdict = superblock_asdict(device=device) if not sbdict: LOG.info('Cannot parse sb.version without bcache superblock') return None if not isinstance(sbdict, dict): raise ValueError('Invalid sbdict type, must be dict') sb_version = sbdict.get('sb.version') try: # 'sb.version': '1 [backing device]' # 'sb.version': '3 [caching device]' version = int(sb_version.split()[0]) except (AttributeError, ValueError): LOG.warning("Failed to parse bcache 'sb.version' field" " as integer: %s", sb_version) raise return version def _check_bcache_type(device, sysfs_attr, sb_version, superblock=False): """ helper for checking bcache type via sysfs or bcache superblock. """ if not superblock: if not device.endswith('bcache'): sys_block = os.path.join(sys_block_path(device), 'bcache') else: sys_block = device bcache_sys_attr = os.path.join(sys_block, sysfs_attr) LOG.debug('path exists %s', bcache_sys_attr) return os.path.exists(bcache_sys_attr) else: return parse_sb_version(device=device) == sb_version def is_backing(device, superblock=False): """ Test if device is a bcache backing device A runtime check for an active bcache backing device is to examine /sys/class/block//bcache/label However if a device is not active then read the superblock of the device and check that sb.version == 1""" return _check_bcache_type(device, 'label', 1, superblock=superblock) def is_caching(device, superblock=False): """ Test if device is a bcache caching device A runtime check for an active bcache backing device is to examine /sys/class/block//bcache/cache_replacement_policy However if a device is not active then read the superblock of the device and check that sb.version == 3""" LOG.debug('Checking if %s is bcache caching device', device) return _check_bcache_type(device, 'cache_replacement_policy', 3, superblock=superblock) def sysfs_path(device, strict=True): """ Return /sys/class/block//bcache path for device. """ path = os.path.join(sys_block_path(device, strict=strict), 'bcache') if strict and not os.path.exists(path): err = OSError( "device '{}' did not have existing syspath '{}'".format( device, path)) err.errno = errno.ENOENT raise err return path def write_label(label, device): """ write label to bcache device """ bcache_sys_attr = os.path.join(sysfs_path(device), 'label') util.write_file(bcache_sys_attr, content=label, mode=None) def get_attached_cacheset(device): """ return the sysfs path to an attached cacheset. """ bcache_cache = os.path.join(sysfs_path(device), 'cache') if os.path.exists(bcache_cache): return os.path.basename(os.path.realpath(bcache_cache)) return None def get_cacheset_members(cset_uuid): """ return a list of sysfs paths to backing devices attached to the specified cache set. 
Example: % get_cacheset_members('08307315-48e7-4e46-8742-2ec37d615829') ['/sys/devices/pci0000:00/0000:00:08.0/virtio5/block/vdc/bcache', '/sys/devices/pci0000:00/0000:00:07.0/virtio4/block/vdb/bcache', '/sys/devices/pci0000:00/0000:00:06.0/virtio3/block/vda/vda1/bcache'] """ cset_path = '/sys/fs/bcache/%s' % cset_uuid members = [] if os.path.exists(cset_path): # extract bdev* links bdevs = [link for link in os.listdir(cset_path) if link.startswith('bdev')] # resolve symlink to target members = [os.path.realpath("%s/%s" % (cset_path, bdev)) for bdev in bdevs] return members def get_cacheset_cachedev(cset_uuid): """ Return a sysfs path to a cacheset cache device's bcache dir.""" # XXX: bcache cachesets only have a single cache0 entry cachedev = '/sys/fs/bcache/%s/cache0' % cset_uuid if os.path.exists(cachedev): return os.path.realpath(cachedev) return None def attach_backing_to_cacheset(backing_device, cache_device, cset_uuid): LOG.info("Attaching backing device to cacheset: " "{} -> {} cset.uuid: {}".format(backing_device, cache_device, cset_uuid)) backing_device_sysfs = sys_block_path(backing_device) attach = os.path.join(backing_device_sysfs, "bcache", "attach") util.write_file(attach, cset_uuid, mode=None) def get_backing_device(bcache_kname): """ For a given bcacheN kname, return the backing device bcache sysfs dir. bcache0 -> /sys/.../devices/.../device/bcache """ bcache_deps = '/sys/class/block/%s/slaves' % bcache_kname try: # if the bcache device is deleted, this may fail deps = os.listdir(bcache_deps) except util.FileMissingError as e: LOG.debug('Transient race, bcache slave path not found: %s', e) return None # a running bcache device has two entries in slaves, the cacheset # device, and the backing device. There may only be the backing # device (if a bcache device is found but not currently attached # to a cacheset. if len(deps) == 0: raise RuntimeError( '%s unexpected empty dir: %s' % (bcache_kname, bcache_deps)) for dev in (sysfs_path(dep) for dep in deps): if is_backing(dev): return dev return None def stop_cacheset(cset_uuid): """stop specified bcache cacheset.""" # we may be called with a full path or just the uuid if cset_uuid.startswith('/sys/fs/bcache/'): cset_device = cset_uuid else: cset_device = "/sys/fs/bcache/%s" % cset_uuid LOG.info('Stopping bcache set device: %s', cset_device) _stop_device(cset_device) def stop_device(device): """Stop the specified bcache device.""" if not device.startswith('/sys'): raise ValueError('Invalid device %s, must be sysfs path' % device) if not any(f(device) for f in (is_backing, is_caching)): raise ValueError('Cannot stop non-bcache device: %s' % device) LOG.debug('Stopping bcache layer on %s', device) _stop_device(device) def _stop_device(device): """ write to sysfs 'stop' and wait for path to be removed The caller needs to ensure that supplied path to the device is a 'bcache' sysfs path on a device. This may be one of the following scenarios: Cacheset: /sys/fs/bcache// Bcache device: /sys/class/block/bcache0/bcache Backing device /sys/class/block/vdb/bcache Cached device /sys/class/block/nvme0n1p1/bcache/set To support all of these, we append 'stop' to the path and write '1' and then wait for the 'stop' path to be removed. 
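    Example (illustrative sysfs path for a running bcache device):
        _stop_device('/sys/class/block/bcache0/bcache')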
""" bcache_stop = os.path.join(device, 'stop') if not os.path.exists(bcache_stop): LOG.debug('bcache._stop_device: already removed %s', bcache_stop) return LOG.debug('bcache._stop_device: device=%s stop_path=%s', device, bcache_stop) try: util.write_file(bcache_stop, '1', mode=None) except (IOError, OSError) as e: # Note: if we get any exceptions in the above exception classes # it is a result of attempting to write "1" into the sysfs path # The range of errors changes depending on when we race with # the kernel asynchronously removing the sysfs path. Therefore # we log the exception errno we got, but do not re-raise as # the calling process is watching whether the same sysfs path # is being removed; if it fails to go away then we'll have # a log of the exceptions to debug. LOG.debug('Error writing to bcache stop file %s, device removed: %s', bcache_stop, e) finally: util.wait_for_removal(bcache_stop, retries=BCACHE_RETRIES) def register_bcache(bcache_device): LOG.debug('register_bcache: %s > /sys/fs/bcache/register', bcache_device) util.write_file('/sys/fs/bcache/register', bcache_device, mode=None) def set_cache_mode(bcache_dev, cache_mode): LOG.info("Setting cache_mode on {} to {}".format(bcache_dev, cache_mode)) cache_mode_file = '/sys/block/{}/bcache/cache_mode'.format(bcache_dev) util.write_file(cache_mode_file, cache_mode, mode=None) def validate_bcache_ready(bcache_device, bcache_sys_path): """ check if bcache is ready, dump info For cache devices, we expect to find a cacheN symlink which will point to the underlying cache device; Find this symlink, read it and compare bcache_device specified in the parameters. For backing devices, we expec to find a dev symlink pointing to the bcacheN device to which the backing device is enslaved. From the dev symlink, we can read the bcacheN holders list, which should contain the backing device kname. In either case, if we fail to find the correct symlinks in sysfs, this method will raise an OSError indicating the missing attribute. 
""" # cacheset # /sys/fs/bcache/ # cache device # /sys/class/block//bcache/set -> # .../fs/bcache/uuid # backing # /sys/class/block//bcache/cache -> # .../block/bcacheN # /sys/class/block//bcache/dev -> # .../block/bcacheN if bcache_sys_path.startswith('/sys/fs/bcache'): LOG.debug("validating bcache caching device '%s' from sys_path" " '%s'", bcache_device, bcache_sys_path) # we expect a cacheN symlink to point to bcache_device/bcache sys_path_links = [os.path.join(bcache_sys_path, file_name) for file_name in os.listdir(bcache_sys_path)] cache_links = [file_path for file_path in sys_path_links if os.path.islink(file_path) and ( os.path.basename(file_path).startswith('cache'))] if len(cache_links) == 0: msg = ('Failed to find any cache links in %s:%s' % ( bcache_sys_path, sys_path_links)) raise OSError(msg) for link in cache_links: target = os.readlink(link) LOG.debug('Resolving symlink %s -> %s', link, target) # cacheN -> ../../../devices/...//bcache # basename(dirname(readlink(link))) target_cache_device = os.path.basename( os.path.dirname(target)) if os.path.basename(bcache_device) == target_cache_device: LOG.debug('Found match: bcache_device=%s target_device=%s', bcache_device, target_cache_device) return else: msg = ('Cache symlink %s ' % target_cache_device + 'points to incorrect device: %s' % bcache_device) raise OSError(msg) elif bcache_sys_path.startswith('/sys/class/block'): LOG.debug("validating bcache backing device '%s' from sys_path" " '%s'", bcache_device, bcache_sys_path) # we expect a 'dev' symlink to point to the bcacheN device bcache_dev = os.path.join(bcache_sys_path, 'dev') if os.path.islink(bcache_dev): bcache_dev_link = ( os.path.basename(os.readlink(bcache_dev))) LOG.debug('bcache device %s using bcache kname: %s', bcache_sys_path, bcache_dev_link) bcache_slaves_path = os.path.join(bcache_dev, 'slaves') slaves = os.listdir(bcache_slaves_path) LOG.debug('bcache device %s has slaves: %s', bcache_sys_path, slaves) if os.path.basename(bcache_device) in slaves: LOG.debug('bcache device %s found in slaves', os.path.basename(bcache_device)) return else: msg = ('Failed to find bcache device %s' % bcache_device + 'in slaves list %s' % slaves) raise OSError(msg) else: msg = 'didnt find "dev" attribute on: %s', bcache_dev return OSError(msg) else: LOG.debug("Failed to validate bcache device '%s' from sys_path" " '%s'", bcache_device, bcache_sys_path) msg = ('sysfs path %s does not appear to be a bcache device' % bcache_sys_path) return ValueError(msg) def ensure_bcache_is_registered(bcache_device, expected, retry=None): """ Test that bcache_device is found at an expected path and re-register the device if it's not ready. Retry the validation and registration as needed. """ if not retry: retry = BCACHE_REGISTRATION_RETRY for attempt, wait in enumerate(retry): # find the actual bcache device name via sysfs using the # backing device's holders directory. 
LOG.debug('check just created bcache %s if it is registered,' ' try=%s', bcache_device, attempt + 1) try: udevadm_settle() if os.path.exists(expected): LOG.debug('Found bcache dev %s at expected path %s', bcache_device, expected) validate_bcache_ready(bcache_device, expected) else: msg = 'bcache device path not found: %s' % expected LOG.debug(msg) raise ValueError(msg) # if bcache path exists and holders are > 0 we can return LOG.debug('bcache dev %s at path %s successfully registered' ' on attempt %s/%s', bcache_device, expected, attempt + 1, len(retry)) return except (OSError, IndexError, ValueError): # Some versions of bcache-tools will register the bcache device # as soon as we run make-bcache using udev rules, so wait for # udev to settle, then try to locate the dev, on older versions # we need to register it manually though LOG.debug('bcache device was not registered, registering %s ' 'at /sys/fs/bcache/register', bcache_device) try: register_bcache(bcache_device) except IOError: # device creation is notoriously racy and this can trigger # "Invalid argument" IOErrors if it got created in "the # meantime" - just restart the function a few times to # check it all again pass LOG.debug("bcache dev %s not ready, waiting %ss", bcache_device, wait) time.sleep(wait) # we've exhausted our retries LOG.warning('Repetitive error registering the bcache dev %s', bcache_device) raise RuntimeError("bcache device %s can't be registered" % bcache_device) def create_cache_device(cache_device): # /sys/class/block/XXX/YYY/ cache_device_sysfs = sys_block_path(cache_device) if os.path.exists(os.path.join(cache_device_sysfs, "bcache")): LOG.debug('caching device already exists at {}/bcache. Read ' 'cset.uuid'.format(cache_device_sysfs)) (out, err) = util.subp(["bcache-super-show", cache_device], capture=True) LOG.debug('bcache-super-show=[{}]'.format(out)) [cset_uuid] = [line.split()[-1] for line in out.split("\n") if line.startswith('cset.uuid')] else: LOG.debug('caching device does not yet exist at {}/bcache. 
Make ' 'cache and get uuid'.format(cache_device_sysfs)) # make the cache device, extracting cacheset uuid (out, err) = util.subp(["make-bcache", "-C", cache_device], capture=True) LOG.debug('out=[{}]'.format(out)) [cset_uuid] = [line.split()[-1] for line in out.split("\n") if line.startswith('Set UUID:')] target_sysfs_path = '/sys/fs/bcache/%s' % cset_uuid ensure_bcache_is_registered(cache_device, target_sysfs_path) return cset_uuid def create_backing_device(backing_device, cache_device, cache_mode, cset_uuid): backing_device_sysfs = sys_block_path(backing_device) target_sysfs_path = os.path.join(backing_device_sysfs, "bcache") # there should not be any pre-existing bcache device bdir = os.path.join(backing_device_sysfs, "bcache") if os.path.exists(bdir): raise RuntimeError( 'Unexpected old bcache device: %s', backing_device) LOG.debug('Creating a backing device on %s', backing_device) util.subp(["make-bcache", "-B", backing_device]) ensure_bcache_is_registered(backing_device, target_sysfs_path) # via the holders we can identify which bcache device we just created # for a given backing device from .clear_holders import get_holders holders = get_holders(backing_device) if len(holders) != 1: err = ('Invalid number {} of holding devices:' ' "{}"'.format(len(holders), holders)) LOG.error(err) raise ValueError(err) [bcache_dev] = holders LOG.debug('The just created bcache device is {}'.format(holders)) if cache_device: # if we specify both then we need to attach backing to cache if cset_uuid: attach_backing_to_cacheset(backing_device, cache_device, cset_uuid) else: msg = "Invalid cset_uuid: {}".format(cset_uuid) LOG.error(msg) raise ValueError(msg) if cache_mode: set_cache_mode(bcache_dev, cache_mode) return dev_path(bcache_dev) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/clear_holders.py000066400000000000000000000675621415350476600205540ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ This module provides a mechanism for shutting down virtual storage layers on top of a block device, making it possible to reuse the block device without having to reboot the system """ import glob import os import time from curtin import (block, udev, util) from curtin.swap import is_swap_device from curtin.block import bcache from curtin.block import lvm from curtin.block import mdadm from curtin.block import multipath from curtin.block import zfs from curtin.log import LOG # poll frequenty, but wait up to 60 seconds total MDADM_RELEASE_RETRIES = [0.4] * 150 def _define_handlers_registry(): """ returns instantiated dev_types """ return { 'partition': {'shutdown': wipe_superblock, 'ident': identify_partition}, 'lvm': {'shutdown': shutdown_lvm, 'ident': identify_lvm}, 'crypt': {'shutdown': shutdown_crypt, 'ident': identify_crypt}, 'raid': {'shutdown': shutdown_mdadm, 'ident': identify_mdadm}, 'bcache': {'shutdown': shutdown_bcache, 'ident': identify_bcache}, 'disk': {'ident': lambda x: False, 'shutdown': wipe_superblock}, } def get_dmsetup_uuid(device): """ get the dm uuid for a specified dmsetup device """ blockdev = block.sysfs_to_devpath(device) (out, _) = util.subp(['dmsetup', 'info', blockdev, '-C', '-o', 'uuid', '--noheadings'], capture=True) return out.strip() def shutdown_bcache(device): """ Shut down bcache for specified bcache device 1. wipe the bcache device contents 2. extract the cacheset uuid (if cached) 3. extract the backing device 4. stop cacheset (if present) 5. stop the bcacheN device 6. 
wait for removal of sysfs path to bcacheN, bcacheN/bcache and backing/bcache to go away """ if not device.startswith('/sys/class/block'): raise ValueError('Invalid Device (%s): ' 'Device path must start with /sys/class/block/', device) # bcache device removal should be fast but in an extreme # case, might require the cache device to flush large # amounts of data to a backing device. The strategy here # is to wait for approximately 30 seconds but to check # frequently since curtin cannot proceed until devices # cleared. bcache_shutdown_message = ('shutdown_bcache running on {} has determined ' 'that the device has already been shut down ' 'during handling of another bcache dev. ' 'skipping'.format(device)) if not os.path.exists(device): LOG.info(bcache_shutdown_message) return LOG.info('Wiping superblock on bcache device: %s', device) _wipe_superblock(block.sysfs_to_devpath(device), exclusive=False) # collect required information before stopping bcache device # UUID from /sys/fs/cache/UUID cset_uuid = bcache.get_attached_cacheset(device) # /sys/class/block/vdX which is a backing dev of device (bcacheN) backing_sysfs = bcache.get_backing_device(block.path_to_kname(device)) # /sys/class/block/bcacheN/bache bcache_sysfs = bcache.sysfs_path(device, strict=False) # stop cacheset if one is presennt if cset_uuid: LOG.info('%s was attached to cacheset %s, stopping cacheset', device, cset_uuid) bcache.stop_cacheset(cset_uuid) # let kernel settle before the next remove udev.udevadm_settle() LOG.info('bcache cacheset stopped: %s', cset_uuid) # test and log whether the device paths are still present to_check = [bcache_sysfs, backing_sysfs] found_devs = [os.path.exists(p) for p in to_check] LOG.debug('os.path.exists on blockdevs:\n%s', list(zip(to_check, found_devs))) if not any(found_devs): LOG.info('bcache backing device already removed: %s (%s)', bcache_sysfs, device) LOG.debug('bcache backing device checked: %s', backing_sysfs) else: LOG.info('stopping bcache backing device at: %s', bcache_sysfs) bcache.stop_device(bcache_sysfs) return def shutdown_lvm(device): """ Shutdown specified lvm device. """ device = block.sys_block_path(device) # lvm devices have a dm directory that containes a file 'name' containing # '{volume group}-{logical volume}'. The volume can be freed using lvremove name_file = os.path.join(device, 'dm', 'name') lvm_name = util.load_file(name_file).strip() (vg_name, lv_name) = lvm.split_lvm_name(lvm_name) vg_lv_name = "%s/%s" % (vg_name, lv_name) devname = "/dev/" + vg_lv_name # wipe contents of the logical volume first LOG.info('Wiping lvm logical volume: %s', devname) block.quick_zero(devname, partitions=False) # remove the logical volume LOG.debug('using "lvremove" on %s', vg_lv_name) util.subp(['lvremove', '--force', '--force', vg_lv_name]) # if that was the last lvol in the volgroup, get rid of volgroup if len(lvm.get_lvols_in_volgroup(vg_name)) == 0: pvols = lvm.get_pvols_in_volgroup(vg_name) util.subp(['vgremove', '--force', '--force', vg_name], rcs=[0, 5]) # wipe the underlying physical volumes for pv in pvols: LOG.info('Wiping lvm physical volume: %s', pv) block.quick_zero(pv, partitions=False) # refresh lvmetad lvm.lvm_scan() def shutdown_crypt(device): """ Shutdown specified cryptsetup device """ blockdev = block.sysfs_to_devpath(device) util.subp(['cryptsetup', 'remove', blockdev], capture=True) def shutdown_mdadm(device): """ Shutdown specified mdadm device. 
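    Expects a sysfs block path, e.g. (illustrative) '/sys/class/block/md0'.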
""" blockdev = block.sysfs_to_devpath(device) if mdadm.md_is_in_container(blockdev): LOG.info('Array is in a container, skip discovering ' + 'raid devices and spares for %s', device) md_devs = [] else: LOG.info('Discovering raid devices and spares for %s', device) md_devs = ( mdadm.md_get_devices_list(blockdev) + mdadm.md_get_spares_list(blockdev)) mdadm.set_sync_action(blockdev, action="idle") mdadm.set_sync_action(blockdev, action="frozen") LOG.info('Wiping superblock on raid device: %s', device) try: _wipe_superblock(blockdev, exclusive=False) except ValueError as e: # if the array is not functional, writes to the device may fail # and _wipe_superblock will raise ValueError for short writes # which happens on inactive raid volumes. In that case we # shouldn't give up yet as we still want to disassemble # array and wipe members. Other errors such as IOError or OSError # are unwelcome and will stop deployment. LOG.debug('Non-fatal error writing to array device %s, ' 'proceeding with shutdown: %s', blockdev, e) LOG.info('Removing raid array members: %s', md_devs) for mddev in md_devs: try: mdadm.fail_device(blockdev, mddev) mdadm.remove_device(blockdev, mddev) except util.ProcessExecutionError as e: LOG.debug('Non-fatal error clearing raid array: %s', e.stderr) pass LOG.debug('using mdadm.mdadm_stop on dev: %s', blockdev) mdadm.mdadm_stop(blockdev) LOG.debug('Wiping mdadm member devices: %s' % md_devs) for mddev in md_devs: mdadm.zero_device(mddev, force=True) # mdadm stop operation is asynchronous so we must wait for the kernel to # release resources. For more details see LP: #1682456 try: for wait in MDADM_RELEASE_RETRIES: if mdadm.md_present(block.path_to_kname(blockdev)): time.sleep(wait) else: LOG.debug('%s has been removed', blockdev) break if mdadm.md_present(block.path_to_kname(blockdev)): raise OSError('Timeout exceeded for removal of %s', blockdev) except OSError: LOG.critical('Failed to stop mdadm device %s', device) if os.path.exists('/proc/mdstat'): LOG.critical("/proc/mdstat:\n%s", util.load_file('/proc/mdstat')) raise def wipe_superblock(device): """ Wrapper for block.wipe_volume compatible with shutdown function interface """ blockdev = block.sysfs_to_devpath(device) # when operating on a disk that used to have a dos part table with an # extended partition, attempting to wipe the extended partition will fail try: if not block.is_online(blockdev): LOG.debug("Device is not online (size=0), so skipping:" " '%s'", blockdev) return if block.is_extended_partition(blockdev): LOG.info("extended partitions do not need wiping, so skipping:" " '%s'", blockdev) return except OSError as e: if util.is_file_not_found_exc(e): LOG.debug('Device to wipe disappeared: %s', e) LOG.debug('/proc/partitions says: %s', util.load_file('/proc/partitions')) (parent, partnum) = block.get_blockdev_for_partition(blockdev) out, _e = util.subp(['sfdisk', '-d', parent], capture=True, combine_capture=True) LOG.debug('Disk partition info:\n%s', out) return else: raise e # gather any partitions partitions = block.get_sysfs_partitions(device) # release zfs member by exporting the pool if zfs.zfs_supported() and block.is_zfs_member(blockdev): poolname = zfs.device_to_poolname(blockdev) # only export pools that have been imported if poolname in zfs.zpool_list(): try: zfs.zpool_export(poolname) except util.ProcessExecutionError as e: LOG.warning('Failed to export zpool "%s": %s', poolname, e) if is_swap_device(blockdev): shutdown_swap(blockdev) # some volumes will be claimed by the bcache layer but do not surface 
# an actual /dev/bcacheN device which owns the parts (backing, cache) # The result is that some volumes cannot be wiped while bcache claims # the device. Resolve this by stopping bcache layer on those volumes # if present. for bcache_path in ['bcache', 'bcache/set']: stop_path = os.path.join(device, bcache_path) if os.path.exists(stop_path): LOG.debug('Attempting to release bcache layer from device: %s:%s', device, stop_path) if stop_path.endswith('set'): rp = os.path.realpath(stop_path) bcache.stop_cacheset(rp) else: bcache._stop_device(stop_path) # the blockdev (e.g. /dev/sda2) may be a multipath partition which can # only be wiped via its device mapper device (e.g. /dev/dm-4) # check for this and determine the correct device mapper value to use. if multipath.multipath_supported(): # handle /dev/mapper/mpatha , base mp device if multipath.is_mpath_device(blockdev): # if mpath device has "partitions" those need to be removed. # clear-holders will have already wiped these devices as they # are higher up in the dependency tree. mpath_id = multipath.find_mpath_id(blockdev) for mp_part_id in multipath.find_mpath_partitions(mpath_id): multipath.remove_partition(mp_part_id) # handle /dev/sdX which are held by multipath layer if multipath.is_mpath_member(blockdev): LOG.debug('Skipping multipath partition path member: %s', blockdev) return _wipe_superblock(blockdev) # if we had partitions, make sure they've been removed if partitions: LOG.debug('%s had partitions, issuing partition reread', device) retries = [.5, .5, 1, 2, 5, 7] for attempt, wait in enumerate(retries): try: # only rereadpt on wiped device block.rescan_block_devices(devices=[blockdev]) # may raise IOError, OSError due to wiped partition table curparts = block.get_sysfs_partitions(device) if len(curparts) == 0: return except (IOError, OSError): if attempt + 1 >= len(retries): raise LOG.debug("%s partitions still present, rereading pt" " (%s/%s). sleeping %ss before retry", device, attempt + 1, len(retries), wait) time.sleep(wait) def _wipe_superblock(blockdev, exclusive=True): """ No checks, just call wipe_volume """ retries = [1, 3, 5, 7] LOG.info('wiping superblock on %s', blockdev) for attempt, wait in enumerate(retries): LOG.debug('wiping %s attempt %s/%s', blockdev, attempt + 1, len(retries)) try: block.wipe_volume(blockdev, mode='superblock', exclusive=exclusive) LOG.debug('successfully wiped device %s on attempt %s/%s', blockdev, attempt + 1, len(retries)) return except OSError: if attempt + 1 >= len(retries): raise else: LOG.debug("wiping device '%s' failed on attempt" " %s/%s. 
sleeping %ss before retry", blockdev, attempt + 1, len(retries), wait) time.sleep(wait) def identify_lvm(device): """ determine if specified device is a lvm device """ return (block.path_to_kname(device).startswith('dm') and get_dmsetup_uuid(device).startswith('LVM')) def identify_crypt(device): """ determine if specified device is dm-crypt device """ return (block.path_to_kname(device).startswith('dm') and get_dmsetup_uuid(device).startswith('CRYPT')) def identify_mdadm(device): """ determine if specified device is a mdadm device """ # RAID0 and 1 devices can be partitioned and the partitions are *not* # raid devices with a sysfs 'md' subdirectory partition = identify_partition(device) return block.path_to_kname(device).startswith('md') and not partition def identify_bcache(device): """ determine if specified device is a bcache device """ # bcache devices can be partitioned and the partitions are *not* # bcache devices with a sysfs 'slaves' subdirectory partition = identify_partition(device) return block.path_to_kname(device).startswith('bcache') and not partition def identify_partition(device): """ determine if specified device is a partition """ path = os.path.join(device, 'partition') if os.path.exists(path): return True blockdev = block.sysfs_to_devpath(device) if multipath.is_mpath_partition(blockdev): return True return False def shutdown_swap(path): """release swap device from kernel swap pool if present""" procswaps = util.load_file('/proc/swaps') for swapline in procswaps.splitlines(): if swapline.startswith(path): msg = ('Removing %s from active use as swap device, ' 'needed for storage config' % path) LOG.warning(msg) util.subp(['swapoff', path]) return def get_holders(device): """ Look up any block device holders, return list of knames """ # block.sys_block_path works when given a /sys or /dev path sysfs_path = block.sys_block_path(device) # get holders hpath = os.path.join(sysfs_path, 'holders') holders = os.listdir(hpath) LOG.debug("devname '%s' had holders: %s", device, holders) return holders def gen_holders_tree(device): """ generate a tree representing the current storage hirearchy above 'device' """ device = block.sys_block_path(device) dev_name = block.path_to_kname(device) # the holders for a device should consist of the devices in the holders/ # dir in sysfs and any partitions on the device. this ensures that a # storage tree starting from a disk will include all devices holding the # disk's partitions holders = get_holders(device) holder_paths = ([block.sys_block_path(h) for h in holders] + block.get_sysfs_partitions(device)) # the DEV_TYPE registry contains a function under the key 'ident' for each # device type entry that returns true if the device passed to it is of the # correct type. there should never be a situation in which multiple # identify functions return true. therefore, it will always work to take # the device type with the first identify function that returns true as the # device type for the current device. in the event that no identify # functions return true, the device will be treated as a disk # (DEFAULT_DEV_TYPE). the identify function for disk never returns true. 
# the next() builtin in python will not raise a StopIteration exception if # there is a default value defined dev_type = next((k for k, v in DEV_TYPES.items() if v['ident'](device)), DEFAULT_DEV_TYPE) return { 'device': device, 'dev_type': dev_type, 'name': dev_name, 'holders': [gen_holders_tree(h) for h in holder_paths], } def plan_shutdown_holder_trees(holders_trees): """ plan best order to shut down holders in, taking into account high level storage layers that may have many devices below them returns a sorted list of descriptions of storage config entries including their path in /sys/block and their dev type can accept either a single storage tree or a list of storage trees assumed to start at an equal place in storage hirearchy (i.e. a list of trees starting from disk) """ # holds a temporary registry of holders to allow cross references # key = device sysfs path, value = {} of priority level, shutdown function reg = {} # normalize to list of trees if not isinstance(holders_trees, (list, tuple)): holders_trees = [holders_trees] # sort the trees to ensure we generate a consistent plan holders_trees = sorted(holders_trees, key=lambda x: x['device']) def htree_level(tree): if len(tree['holders']) == 0: return 0 return 1 + sum(htree_level(holder) for holder in tree['holders']) def flatten_holders_tree(tree, level=0): """ add entries from holders tree to registry with level key corresponding to how many layers from raw disks the current device is at """ device = tree['device'] device_level = htree_level(tree) # always go with highest level if current device has been # encountered already. since the device and everything above it is # re-added to the registry it ensures that any increase of level # required here will propagate down the tree # this handles a scenario like mdadm + bcache, where the backing # device for bcache is a 3rd level item like mdadm, but the cache # device is 1st level (disk) or second level (partition), ensuring # that the bcache item is always considered higher level than # anything else regardless of whether it was added to the tree via # the cache device or backing device first if device in reg: level = max(reg[device]['level'], level) + 1 else: # first time device to registry, assume the larger value of the # current level or the length of its dependencies. level = max(device_level, level) reg[device] = {'level': level, 'device': device, 'dev_type': tree['dev_type']} # handle holders above this level for holder in tree['holders']: flatten_holders_tree(holder, level=level + 1) # flatten the holders tree into the registry for holders_tree in holders_trees: flatten_holders_tree(holders_tree) def devtype_order(dtype): """Return the order in which we want to clear device types, higher value should be cleared first. :param: dtype: string. A device types name from the holders registry, see _define_handlers_registry() :returns: integer """ dev_type_order = [ 'disk', 'partition', 'bcache', 'lvm', 'raid', 'crypt'] return 1 + dev_type_order.index(dtype) # return list of entry dicts with greatest htree depth. The 'level' value # indicates the number of additional devices that are "below" this device. # Devices must be cleared in descending 'level' value. For devices which # have the same 'level' value, we sort within the 'level' by devtype order. 
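    # (illustrative ordering: a level-3 bcache entry is returned before a
    # level-2 raid entry, which in turn precedes its level-1 partitions)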
return [reg[k] for k in sorted(reg, reverse=True, key=lambda x: (reg[x]['level'], devtype_order(reg[x]['dev_type'])))] def format_holders_tree(holders_tree): """ draw a nice dirgram of the holders tree """ # spacer styles based on output of 'tree --charset=ascii' spacers = (('`-- ', ' ' * 4), ('|-- ', '|' + ' ' * 3)) def format_tree(tree): """ format entry and any subentries """ result = [tree['name']] holders = tree['holders'] for (holder_no, holder) in enumerate(holders): spacer_style = spacers[min(len(holders) - (holder_no + 1), 1)] subtree_lines = format_tree(holder) for (line_no, line) in enumerate(subtree_lines): result.append(spacer_style[min(line_no, 1)] + line) return result return '\n'.join(format_tree(holders_tree)) def get_holder_types(tree): """ get flattened list of types of holders in holders tree and the devices they correspond to """ types = {(tree['dev_type'], tree['device'])} for holder in tree['holders']: types.update(get_holder_types(holder)) return types def assert_clear(base_paths): """ Check if all paths in base_paths are clear to use """ valid = ('disk', 'partition') if not isinstance(base_paths, (list, tuple)): base_paths = [base_paths] base_paths = [block.sys_block_path(path, strict=False) for path in base_paths] for holders_tree in [gen_holders_tree(p) for p in base_paths if os.path.exists(p)]: if any(holder_type not in valid and path not in base_paths for (holder_type, path) in get_holder_types(holders_tree)): raise OSError('Storage not clear, remaining:\n{}' .format(format_holders_tree(holders_tree))) def clear_holders(base_paths, try_preserve=False): """ Clear all storage layers depending on the devices specified in 'base_paths' A single device or list of devices can be specified. Device paths can be specified either as paths in /dev or /sys/block Will throw OSError if any holders could not be shut down """ # handle single path if not isinstance(base_paths, (list, tuple)): base_paths = [base_paths] LOG.info('Generating device storage trees for path(s): %s', base_paths) # get current holders and plan how to shut them down holder_trees = [gen_holders_tree(path) for path in base_paths] LOG.info('Current device storage tree:\n%s', '\n'.join(format_holders_tree(tree) for tree in holder_trees)) ordered_devs = plan_shutdown_holder_trees(holder_trees) LOG.info('Shutdown Plan:\n%s', "\n".join(map(str, ordered_devs))) # run shutdown functions for dev_info in ordered_devs: dev_type = DEV_TYPES.get(dev_info['dev_type']) shutdown_function = dev_type.get('shutdown') if not shutdown_function: continue if try_preserve and shutdown_function in DATA_DESTROYING_HANDLERS: LOG.info('shutdown function for holder type: %s is destructive. ' 'attempting to preserve data, so skipping' % dev_info['dev_type']) continue if os.path.exists(dev_info['device']): LOG.info("shutdown running on holder type: '%s' syspath: '%s'", dev_info['dev_type'], dev_info['device']) shutdown_function(dev_info['device']) def start_clear_holders_deps(): """ prepare system for clear holders to be able to scan old devices """ # a mdadm scan has to be started in case there is a md device that needs to # be detected. if the scan fails, it is either because there are no mdadm # devices on the system, or because there is a mdadm device in a damaged # state that could not be started. due to the nature of mdadm tools, it is # difficult to know which is the case. 
if any errors did occur, then ignore # them, since no action needs to be taken if there were no mdadm devices on # the system, and in the case where there is some mdadm metadata on a disk, # but there was not enough to start the array, the call to wipe_volume on # all disks and partitions should be sufficient to remove the mdadm # metadata mdadm.mdadm_assemble(scan=True, ignore_errors=True) # collect detail on any assembling arrays for md in [md for md in glob.glob('/dev/md*') if not os.path.isdir(md) and not identify_partition(md)]: mdstat = None if os.path.exists('/proc/mdstat'): mdstat = util.load_file('/proc/mdstat') LOG.debug("/proc/mdstat:\n%s", mdstat) found = [line for line in mdstat.splitlines() if os.path.basename(md) in line] # in some cases we have a /dev/md0 device node # but the kernel has already renamed the device /dev/md127 if len(found) == 0: LOG.debug('Ignoring md device %s, not present in mdstat', md) continue # give it a second poke to encourage running try: LOG.debug('Activating mdadm array %s', md) (out, err) = mdadm.mdadm_run(md) LOG.debug('MDADM run on %s stdout:\n%s\nstderr:\n%s', md, out, err) except util.ProcessExecutionError: LOG.debug('Non-fatal error when starting mdadm device %s', md) # extract details if we can try: (out, err) = mdadm.mdadm_query_detail(md, export=False, rawoutput=True) LOG.debug('MDADM detail on %s stdout:\n%s\nstderr:\n%s', md, out, err) except util.ProcessExecutionError: LOG.debug('Non-fatal error when querying mdadm detail on %s', md) mp_support = multipath.multipath_supported() if mp_support: LOG.debug('Detected multipath support, reload maps') multipath.reload() multipath.force_devmapper_symlinks() # scan and activate for logical volumes lvm.lvm_scan(multipath=mp_support) try: lvm.activate_volgroups(multipath=mp_support) except util.ProcessExecutionError: # partial vg may not come up due to missing members, that's OK pass udev.udevadm_settle() # the bcache module needs to be present to properly detect bcache devs # on some systems (precise without hwe kernel) it may not be possible to # lad the bcache module bcause it is not present in the kernel. if this # happens then there is no need to halt installation, as the bcache devices # will never appear and will never prevent the disk from being reformatted util.load_kernel_module('bcache') if not zfs.zfs_supported(): LOG.warning('zfs filesystem is not supported in this environment') # anything that is not identified can assumed to be a 'disk' or similar DEFAULT_DEV_TYPE = 'disk' # handlers that should not be run if an attempt is being made to preserve data DATA_DESTROYING_HANDLERS = [wipe_superblock] # types of devices that could be encountered by clear holders and functions to # identify them and shut them down DEV_TYPES = _define_handlers_registry() # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/dasd.py000066400000000000000000000367371415350476600166610ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
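# Helpers for inspecting, partitioning and formatting s390x DASD devices by
# wrapping the fdasd, dasdinfo, dasdview and dasdfmt utilities.
# Minimal usage sketch (the device id 0.0.1544 is hypothetical):
#   dasd = DasdDevice('0.0.1544')
#   if dasd.needs_formatting(4096, 'cdl', None):
#       dasd.format(blksize=4096, layout='cdl', mode='quick')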
import glob import os import re import tempfile from curtin import util from curtin.log import LOG, logged_time class DasdPartition: def __init__(self, device, start, end, length, id, system): self.device = device self.start = int(start) self.end = int(end) self.length = int(length) self.id = id self.system = system class DasdPartitionTable: def __init__(self, devname, blocks_per_track, bytes_per_block): self.devname = devname self.blocks_per_track = blocks_per_track self.bytes_per_block = bytes_per_block self.partitions = [] @property def bytes_per_track(self): return self.bytes_per_block * self.blocks_per_track def tracks_needed(self, size_in_bytes): return ((size_in_bytes - 1) // self.bytes_per_track) + 1 def _ptable_for_new_partition(self, partnumber, partsize): if partnumber > 3: raise ValueError('DASD devices only allow 3 partitions') # first partition always starts at track 2 # all others start after the previous partition ends if partnumber == 1: start = 2 else: start = int(self.partitions[-1].end) + 1 end = start + self.tracks_needed(partsize) - 1 return [ (p.start, p.end) for p in self.partitions[:partnumber-1] ] + [(start, end)] def add_partition(self, partnumber, partsize): """ Add a partition to this DasdDevice specifying partnumber and size. :param partnumber: integer value of partition number (1, 2 or 3) :param partsize: partition sizes in bytes. :raises: ValueError on invalid devname Example fdasd command with defaults: fdasd --verbose --config=/tmp/curtin/dasd-part1.fdasd /dev/dasdb """ LOG.debug( "add_partition: partnumber: %s partsize: %s", partnumber, partsize) partitions = self._ptable_for_new_partition(partnumber, partsize) LOG.debug("fdasd: partitions to be created: %s", partitions) content = "\n".join([ "[%s,%s]" % (part[0], part[1]) for part in partitions ]) LOG.debug("fdasd: content=\n%s", content) wfp = tempfile.NamedTemporaryFile(suffix=".fdasd", delete=False) wfp.close() util.write_file(wfp.name, content) cmd = ['fdasd', '--verbose', '--config=%s' % wfp.name, self.devname] LOG.debug('Partitioning %s with %s', self.devname, cmd) try: out, err = util.subp(cmd, capture=True) except util.ProcessExecutionError as e: LOG.error("Partitioning failed: %s", e) raise finally: if os.path.exists(wfp.name): os.unlink(wfp.name) @classmethod def from_fdasd_output(cls, devname, output): line_iter = iter(output.splitlines()) for line in line_iter: if line.startswith("Disk"): break kw = {'devname': devname} label_to_attr = { 'blocks per track': 'blocks_per_track', 'bytes per block': 'bytes_per_block' } for line in line_iter: if '--- tracks ---' in line: break if ':' in line: label, value = line.split(':', 1) label = label.strip(' .') value = value.strip() if label in label_to_attr: kw[label_to_attr[label]] = int(value) table = cls(**kw) for line in line_iter: if line.startswith('exiting'): break vals = line.split(None, 5) if vals[0].startswith('/dev/'): table.partitions.append(DasdPartition(*vals)) return table @classmethod def from_fdasd(cls, devname): """Use fdasd to construct a DasdPartitionTable. 
% fdasd --table /dev/dasdc reading volume label ..: VOL1 reading vtoc ..........: ok Disk /dev/dasdc: cylinders ............: 10017 tracks per cylinder ..: 15 blocks per track .....: 12 bytes per block ......: 4096 volume label .........: VOL1 volume serial ........: 0X1522 max partitions .......: 3 ------------------------------- tracks ------------------------------- Device start end length Id System /dev/dasdc1 2 43694 43693 1 Linux native /dev/dasdc2 43695 87387 43693 2 Linux native /dev/dasdc3 87388 131080 43693 3 Linux native 131081 150254 19174 unused exiting... """ cmd = ['fdasd', '--table', devname] out, _err = util.subp(cmd, capture=True) LOG.debug("from_fdasd output:\n---\n%s\n---\n", out) return cls.from_fdasd_output(devname, out) def dasdinfo(device_id): ''' Run dasdinfo command and return the exported values. :param: device_id: string, device_id of the dasd device to query. :returns: dictionary of udev key=value pairs. :raises: ValueError on None-ish device_id. :raises: ProcessExecutionError if dasdinfo returns non-zero. e.g. % info = dasdinfo('0.0.1544') % pprint.pprint(info) {'ID_BUS': 'ccw', 'ID_SERIAL': '0X1544', 'ID_TYPE': 'disk', 'ID_UID': 'IBM.750000000DXP71.1500.44', 'ID_XUID': 'IBM.750000000DXP71.1500.44'} ''' _valid_device_id(device_id) out, err = util.subp( ['dasdinfo', '--all', '--export', '--busid=%s' % device_id], capture=True) return util.load_shell_content(out) def dasd_format(devname): """Return the format (ldl/cdl/not-formatted) of devname.""" if not os.path.exists(devname): raise ValueError("Invalid dasd device name: '%s'" % devname) out, err = util.subp(['dasdview', '--extended', devname], capture=True) return _dasd_format(out) DASD_FORMAT = r"^format\s+:.+\s+(?P\w+\s\w+)$" def find_val(regex, content): m = re.search(regex, content, re.MULTILINE) if m is not None: return m.group("value") def _dasd_format(dasdview_output): """ Read and return specified device "disk_layout" value. :returns: string: One of ['cdl', 'ldl', 'not-formatted']. :raises: ValueError if dasdview result missing 'format' section. """ if not dasdview_output: return mapping = { 'cdl formatted': 'cdl', 'ldl formatted': 'ldl', 'not formatted': 'not-formatted', } diskfmt = find_val(DASD_FORMAT, dasdview_output) if diskfmt is not None: return mapping.get(diskfmt.lower()) def _valid_device_id(device_id): """ validate device_id string. :param device_id: string representing a s390 ccs device in the format .. e.g. 0.0.74fc """ if not device_id or not isinstance(device_id, util.string_types): raise ValueError( "device_id invalid: value None or non-string: '%s'" % device_id) if device_id.count('.') != 2: raise ValueError( "device_id invalid: format requires two '.' chars: %s" % device_id) (css, dsn, dev) = device_id.split('.') if not all([css, dsn, dev]): raise ValueError( "device_id invalid: format must be X.X.XXXX: '%s'" % device_id) if not (0 <= int(css, 16) < 256): raise ValueError("device_id invalid: css not in 0-255: '%s'" % css) if not (0 <= int(dsn, 16) < 256): raise ValueError("device_id invalid: dsn not in 0-255: '%s'" % dsn) if not (0 <= int(dev.lower(), 16) <= 65535): raise ValueError( "device_id invalid: devno not in 0-0xffff: '%s'" % dev) return True class CcwDevice(object): def __init__(self, device_id): self.device_id = device_id _valid_device_id(self.device_id) def ccw_device_attr_path(self, attr): return '/sys/bus/ccw/devices/%s/%s' % (self.device_id, attr) def ccw_device_attr(self, attr): """ Read a ccw_device attribute from sysfs for specified device_id. 
:param device_id: string of device ccw bus_id :param attr: string of which sysfs attribute to read :returns stripped string of the value in the specified attribute otherwise empty string if path to attribute does not exist. :raises: ValueError if device_id is not valid """ attrdata = None sysfs_attr_path = self.ccw_device_attr_path(attr) if os.path.isfile(sysfs_attr_path): attrdata = util.load_file(sysfs_attr_path).strip() return attrdata class DasdDevice(CcwDevice): @property def devname(self): return '/dev/disk/by-path/ccw-%s' % self.device_id def is_not_formatted(self): """ Returns a boolean indicating if the specified device_id is not yet formatted. :returns: boolean: True if the device is not formatted. """ return self.ccw_device_attr('status') == "unformatted" def blocksize(self): """ Read and return device_id's 'blocksize' value. :param: device_id: string of device ccw bus_id. :returns: string: the device's current blocksize. """ blkattr = 'block/*/queue/hw_sector_size' # In practice there will only be one entry in the directory # /sys/bus/ccw/devices/{device_id}/block/, but in case # something strange happens and there are more, this assumes # all block devices connected to the dasd have the same block # size... path = glob.glob(self.ccw_device_attr_path(blkattr))[0] return util.load_file(path) def disk_layout(self): """ Read and return specified device "disk_layout" value. :returns: string: One of ['cdl', 'ldl', 'not-formatted']. :raises: ValueError if dasdview result missing 'format' section. """ format = dasd_format(self.devname) if not format: raise ValueError( 'could not determine format of %s' % self.devname) return format def label(self): """Read and return specified device label (VOLSER) value. :returns: string: devices's label (VOLSER) value. :raises: ValueError if it cannot get label value. """ info = dasdinfo(self.device_id) if 'ID_SERIAL' not in info: raise ValueError( 'Failed to read %s label (VOLSER)' % self.device_id) return info['ID_SERIAL'] def needs_formatting(self, blksize, layout, volser): """ Determine if DasdDevice attributes matches the required parameters. Note that devices that indicate they are unformatted will require formatting. :param blksize: expected blocksize of the device. :param layout: expected disk layout. :param volser: expected label, if None, label is ignored. :returns: boolean, True if formatting is needed, else False. """ LOG.debug('Checking if dasd %s needs formatting', self.device_id) if self.is_not_formatted(): LOG.debug('dasd %s is not formatted', self.device_id) return True if int(blksize) != int(self.blocksize()): LOG.debug('dasd %s block size (%s) does not match (%s)', self.device_id, self.blocksize(), blksize) return True if layout != self.disk_layout(): LOG.debug('dasd %s disk layout (%s) does not match %s', self.device_id, self.disk_layout(), layout) return True if volser and volser != self.label(): LOG.debug('dasd %s volser (%s) does not match %s', self.device_id, self.label(), volser) return True return False @logged_time("DASD.FORMAT") def format(self, blksize=4096, layout='cdl', force=False, set_label=None, keep_label=False, no_label=False, mode='quick'): """ Format DasdDevice with supplied parameters. :param blksize: integer value to configure disk block size in bytes. Must be one of 512, 1024, 2048, 4096; defaults to 4096. :param layout: string specify disk layout format. Must be one of 'cdl' (Compatible Disk Layout, default) or 'ldl' (Linux Disk Layout). 
:param force: boolean set true to skip sanity checks, defaults to False :param set_label: string to write to the volume label for identification. If no label provided, a label is generated from device number of the dasd. Note: is interpreted as ASCII string and is automatically converted to uppercase and then to EBCDIC. e.g. 'a@b\\$c#' to get A@B$C#. :param keep_label: boolean set true to keep existing label on dasd, ignores label param value, defaults to False. :param no_label: boolean set true to skip writing label to dasd, ignores label and keep_label params, defaults to False. :param mode: string to control format mode. Must be one of 'full' (Format the full disk), 'quick' (Format the first two tracks, default), 'expand' (Format unformatted tracks at device end). :param strict: boolean which enforces that dasd device exists before issuing format command, defaults to True. :raises: RuntimeError if devname does not exist. :raises: ValueError on invalid blocksize, disk_layout and mode. :raises: ProcessExecutionError on errors running 'dasdfmt' command. Example dadsfmt command with defaults: dasdformat -y --blocksize=4096 --disk_layout=cdl \ --mode=quick /dev/dasda """ if not os.path.exists(self.devname): raise RuntimeError("devname '%s' does not exist" % self.devname) if no_label: keep_label = False set_label = None if keep_label: set_label = None valid_blocksize = [512, 1024, 2048, 4096] if blksize not in valid_blocksize: raise ValueError( "blksize: '%s' not one of '%s'" % (blksize, valid_blocksize)) valid_layouts = ['cdl', 'ldl'] if layout not in valid_layouts: raise ValueError("layout: '%s' not one of '%s'" % (layout, valid_layouts)) if not mode: mode = 'quick' valid_modes = ['full', 'quick', 'expand'] if mode not in valid_modes: raise ValueError("mode: '%s' not one of '%s'" % (mode, valid_modes)) opts = [ '-y', '--blocksize=%s' % blksize, '--disk_layout=%s' % layout, '--mode=%s' % mode ] if set_label: opts += ['--label=%s' % set_label] if keep_label: opts += ['--keep_label'] if no_label: opts += ['--no_label'] if force: opts += ['--force'] cmd = ['dasdfmt'] + opts + [self.devname] LOG.debug('Formatting %s with %s', self.devname, cmd) try: out, _err = util.subp(cmd, capture=True) except util.ProcessExecutionError as e: LOG.error("Formatting failed: %s", e) raise # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/deps.py000066400000000000000000000070251415350476600166650ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from curtin.distro import DISTROS from curtin.block import iscsi def storage_config_required_packages(storage_config, mapping): """Read storage configuration dictionary and determine which packages are required for the supplied configuration to function. Return a list of packaged to install. """ if not storage_config or not isinstance(storage_config, dict): raise ValueError('Invalid storage configuration. ' 'Must be a dict:\n %s' % storage_config) if not mapping or not isinstance(mapping, dict): raise ValueError('Invalid storage mapping. 
Must be a dict') if 'storage' in storage_config: storage_config = storage_config.get('storage') needed_packages = [] # get reqs by device operation type dev_configs = set(operation['type'] for operation in storage_config['config']) for dev_type in dev_configs: if dev_type in mapping: needed_packages.extend(mapping[dev_type]) # for disks with path: iscsi: we need iscsi tools iscsi_vols = iscsi.get_iscsi_volumes_from_config(storage_config) if len(iscsi_vols) > 0: needed_packages.extend(mapping['iscsi']) # for any format operations, check the fstype and # determine if we need any mkfs tools as well. format_configs = set([operation['fstype'] for operation in storage_config['config'] if operation['type'] == 'format']) for format_type in format_configs: if format_type in mapping: needed_packages.extend(mapping[format_type]) return needed_packages def detect_required_packages_mapping(osfamily=DISTROS.debian): """Return a dictionary providing a versioned configuration which maps storage configuration elements to the packages which are required for functionality. The mapping key is either a config type value, or an fstype value. """ distro_mapping = { DISTROS.debian: { 'bcache': ['bcache-tools'], 'btrfs': ['^btrfs-(progs|tools)$'], 'ext2': ['e2fsprogs'], 'ext3': ['e2fsprogs'], 'ext4': ['e2fsprogs'], 'jfs': ['jfsutils'], 'iscsi': ['open-iscsi'], 'lvm_partition': ['lvm2'], 'lvm_volgroup': ['lvm2'], 'ntfs': ['ntfs-3g'], 'raid': ['mdadm'], 'reiserfs': ['reiserfsprogs'], 'xfs': ['xfsprogs'], 'zfsroot': ['zfsutils-linux', 'zfs-initramfs'], 'zfs': ['zfsutils-linux', 'zfs-initramfs'], 'zpool': ['zfsutils-linux', 'zfs-initramfs'], }, DISTROS.redhat: { 'bcache': [], 'btrfs': ['btrfs-progs'], 'ext2': ['e2fsprogs'], 'ext3': ['e2fsprogs'], 'ext4': ['e2fsprogs'], 'jfs': [], 'iscsi': ['iscsi-initiator-utils'], 'lvm_partition': ['lvm2'], 'lvm_volgroup': ['lvm2'], 'ntfs': [], 'raid': ['mdadm'], 'reiserfs': [], 'xfs': ['xfsprogs'], 'zfsroot': [], 'zfs': [], 'zpool': [], }, } if osfamily not in distro_mapping: raise ValueError('No block package mapping for distro: %s' % osfamily) return {1: {'handler': storage_config_required_packages, 'mapping': distro_mapping.get(osfamily)}} # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/iscsi.py000066400000000000000000000426541415350476600170530ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. # This module wraps calls to the iscsiadm utility for examining iSCSI # devices. Functions prefixed with 'iscsiadm_' involve executing # the 'iscsiadm' command in a subprocess. The remaining functions handle # manipulation of the iscsiadm output. import os import re import shutil from curtin import (paths, util, udev) from curtin.block import (get_device_slave_knames, path_to_kname) from curtin.log import LOG _ISCSI_DISKS = {} RFC4173_AUTH_REGEX = re.compile(r'''^ (?P[^:]*?):(?P[^:]*?) (?::(?P[^:]*?):(?P[^:]*?))? 
$ ''', re.VERBOSE) RFC4173_TARGET_REGEX = re.compile(r'''^ (?P<host>[^@:\[\]]*|\[[^@]*\]): # greedy so ipv6 IPs are matched (?P<proto>[^:]*): (?P<port>[^:]*): (?P<lun>[^:]*): (?P<targetname>\S*) # greedy so entire suffix is matched $''', re.VERBOSE) ISCSI_PORTAL_REGEX = re.compile(r'^(?P<host>\S*):(?P<port>\d+)$') # @portal is of the form: HOST:PORT def assert_valid_iscsi_portal(portal): if not isinstance(portal, util.string_types): raise ValueError("iSCSI portal (%s) is not a string" % portal) m = re.match(ISCSI_PORTAL_REGEX, portal) if m is None: raise ValueError("iSCSI portal (%s) is not in the format " "(HOST:PORT)" % portal) host = m.group('host') if host.startswith('[') and host.endswith(']'): host = host[1:-1] if not util.is_valid_ipv6_address(host): raise ValueError("Invalid IPv6 address (%s) in iSCSI portal (%s)" % (host, portal)) try: port = int(m.group('port')) except ValueError: raise ValueError("iSCSI portal (%s) port (%s) is not an integer" % (portal, m.group('port'))) return host, port def iscsiadm_sessions(): cmd = ["iscsiadm", "--mode=session", "--op=show"] # rc 21 indicates no sessions currently exist, which is not # inherently incorrect (if not logged in yet) out, _ = util.subp(cmd, rcs=[0, 21], capture=True, log_captured=True) return out def iscsiadm_discovery(portal): # only supported type for now type = 'sendtargets' if not portal: raise ValueError("Portal must be specified for discovery") cmd = ["iscsiadm", "--mode=discovery", "--type=%s" % type, "--portal=%s" % portal] try: util.subp(cmd, capture=True, log_captured=True) except util.ProcessExecutionError as e: LOG.warning("iscsiadm_discovery to %s failed with exit code %d", portal, e.exit_code) raise def iscsiadm_login(target, portal): LOG.debug('iscsiadm_login: target=%s portal=%s', target, portal) cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--login'] util.subp(cmd, capture=True, log_captured=True) def iscsiadm_set_automatic(target, portal): LOG.debug('iscsiadm_set_automatic: target=%s portal=%s', target, portal) cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.startup', '--value=automatic'] util.subp(cmd, capture=True, log_captured=True) def iscsiadm_authenticate(target, portal, user=None, password=None, iuser=None, ipassword=None): LOG.debug('iscsiadm_authenticate: target=%s portal=%s ' 'user=%s password=%s iuser=%s ipassword=%s', target, portal, user, "HIDDEN" if password else None, iuser, "HIDDEN" if ipassword else None) if iuser or ipassword: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.authmethod', '--value=CHAP'] util.subp(cmd, capture=True, log_captured=True) if iuser: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.username_in', '--value=%s' % iuser] util.subp(cmd, capture=True, log_captured=True) if ipassword: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.password_in', '--value=%s' % ipassword] util.subp(cmd, capture=True, log_captured=True, logstring='iscsiadm --mode=node --targetname=%s ' '--portal=%s --op=update ' '--name=node.session.auth.password_in ' '--value=HIDDEN' % (target, portal)) if user or password: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.authmethod', '--value=CHAP'] util.subp(cmd, capture=True,
log_captured=True) if user: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.username', '--value=%s' % user] util.subp(cmd, capture=True, log_captured=True) if password: cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--op=update', '--name=node.session.auth.password', '--value=%s' % password] util.subp(cmd, capture=True, log_captured=True, logstring='iscsiadm --mode=node --targetname=%s ' '--portal=%s --op=update ' '--name=node.session.auth.password ' '--value=HIDDEN' % (target, portal)) def iscsiadm_logout(target, portal): LOG.debug('iscsiadm_logout: target=%s portal=%s', target, portal) cmd = ['iscsiadm', '--mode=node', '--targetname=%s' % target, '--portal=%s' % portal, '--logout'] util.subp(cmd, capture=True, log_captured=True) udev.udevadm_settle() def target_nodes_directory(state, iscsi_disk): # we just want to copy in the nodes portion target_nodes_location = os.path.dirname( os.path.join(os.path.split(state['fstab'])[0], iscsi_disk.etciscsi_nodefile[len('/etc/iscsi/'):])) os.makedirs(target_nodes_location) return target_nodes_location def restart_iscsi_service(): LOG.info('restarting iscsi service') if util.uses_systemd(): cmd = ['systemctl', 'reload-or-restart', 'open-iscsi'] else: cmd = ['service', 'open-iscsi', 'restart'] util.subp(cmd, capture=True) def save_iscsi_config(iscsi_disk): state = util.load_command_environment() # A nodes directory will be created in the same directory as the # fstab in the configuration. This will then be copied onto the # system later if state['fstab']: target_nodes_location = target_nodes_directory(state, iscsi_disk) shutil.copy(iscsi_disk.etciscsi_nodefile, target_nodes_location) else: LOG.info("fstab configuration is not present in environment, " "so cannot locate an appropriate directory to write " "iSCSI node file in so not writing iSCSI node file") def ensure_disk_connected(rfc4173, write_config=True): global _ISCSI_DISKS iscsi_disk = _ISCSI_DISKS.get(rfc4173) if not iscsi_disk: iscsi_disk = IscsiDisk(rfc4173) try: iscsi_disk.connect() except util.ProcessExecutionError: LOG.error('Unable to connect to iSCSI disk (%s)' % rfc4173) # what should we do in this case? raise if write_config: save_iscsi_config(iscsi_disk) _ISCSI_DISKS.update({rfc4173: iscsi_disk}) # this is just a sanity check that the disk is actually present and # the above did what we expected if not os.path.exists(iscsi_disk.devdisk_path): LOG.warn('Unable to find iSCSI disk for target (%s) by path (%s)', iscsi_disk.target, iscsi_disk.devdisk_path) return iscsi_disk def connected_disks(): global _ISCSI_DISKS return _ISCSI_DISKS def get_iscsi_volumes_from_config(cfg): """Parse a curtin storage config and return a list of iscsi disk rfc4173 uris for each configuration present. 
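    For example (all values illustrative), a storage config entry such as
        {'type': 'disk', 'id': 'iscsi1',
         'path': 'iscsi:192.0.2.10:6:3260:1:iqn.2004-03.com.example:target0'}
    contributes its 'path' value to the returned list; disk entries whose
    path does not start with 'iscsi:' are ignored.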
""" if not cfg: cfg = {} if 'storage' in cfg: sconfig = cfg.get('storage', {}).get('config', []) else: sconfig = cfg.get('config', []) if not sconfig or not isinstance(sconfig, list): LOG.warning('Configuration dictionary did not contain' ' a storage configuration') return [] return [disk['path'] for disk in sconfig if disk['type'] == 'disk' and disk.get('path', "").startswith('iscsi:')] def get_iscsi_disks_from_config(cfg): """Return a list of IscsiDisk objects for each iscsi volume present.""" # Construct IscsiDisk objects for each iscsi volume present iscsi_disks = [IscsiDisk(volume) for volume in get_iscsi_volumes_from_config(cfg)] LOG.debug('Found %s iscsi disks in storage config', len(iscsi_disks)) return iscsi_disks def get_iscsi_ports_from_config(cfg): """Return a set of ports that may be used when connecting to volumes.""" ports = set([d.port for d in get_iscsi_disks_from_config(cfg)]) LOG.debug('Found iscsi ports in use: %s', ports) return ports def disconnect_target_disks(target_root_path=None): target_nodes_path = paths.target_path(target_root_path, '/etc/iscsi/nodes') fails = [] if os.path.isdir(target_nodes_path): for target in os.listdir(target_nodes_path): if target not in iscsiadm_sessions(): LOG.debug('iscsi target %s not active, skipping', target) continue # conn is "host,port,lun" for conn in os.listdir( os.path.sep.join([target_nodes_path, target])): host, port, _ = conn.split(',') try: util.subp(['sync']) iscsiadm_logout(target, '%s:%s' % (host, port)) except util.ProcessExecutionError as e: fails.append(target) LOG.warn("Unable to logout of iSCSI target %s: %s", target, e) else: LOG.warning('Skipping disconnect: failed to find iscsi nodes path: %s', target_nodes_path) if fails: raise RuntimeError( "Unable to logout of iSCSI targets: %s" % ', '.join(fails)) # Determines if a /dev/disk/by-path symlink matching the udev pattern # for iSCSI disks is pointing at @kname def kname_is_iscsi(kname): by_path = "/dev/disk/by-path" if os.path.isdir(by_path): for path in os.listdir(by_path): path_target = os.path.realpath(os.path.sep.join([by_path, path])) if kname in path_target and 'iscsi' in path: LOG.debug('kname_is_iscsi: ' 'found by-path link %s for kname %s', path, kname) return True LOG.debug('kname_is_iscsi: no iscsi disk found for kname %s' % kname) return False def volpath_is_iscsi(volume_path): """ Determine if the volume_path's kname is backed by iSCSI. Recursively check volume_path's slave devices as well in case volume_path is a stacked block device (like LVM/MD) returns a boolean """ if not volume_path: raise ValueError("Invalid input for volume_path: '%s'", volume_path) volume_path_slaves = get_device_slave_knames(volume_path) LOG.debug('volume_path=%s found slaves: %s', volume_path, volume_path_slaves) knames = [path_to_kname(volume_path)] + volume_path_slaves return any([kname_is_iscsi(kname) for kname in knames]) class IscsiDisk(object): # Per Debian bug 804162, the iscsi specifier looks like # TARGETSPEC=host:proto:port:lun:targetname # root=iscsi:$TARGETSPEC # root=iscsi:user:password@$TARGETSPEC # root=iscsi:user:password:initiatoruser:initiatorpassword@$TARGETSPEC def __init__(self, rfc4173): auth_m = None _rfc4173 = rfc4173 if not rfc4173.startswith('iscsi:'): raise ValueError('iSCSI specification (%s) did not start with ' 'iscsi:. 
iSCSI disks must be specified as ' 'iscsi:[user:password[:initiatoruser:' 'initiatorpassword]@]' 'host:proto:port:lun:targetname' % _rfc4173) rfc4173 = rfc4173[6:] if '@' in rfc4173: if rfc4173.count('@') != 1: raise ValueError('Only one @ symbol allowed in iSCSI disk ' 'specification (%s). iSCSI disks must be ' 'specified as' 'iscsi:[user:password[:initiatoruser:' 'initiatorpassword]@]' 'host:proto:port:lun:targetname' % _rfc4173) auth, target = rfc4173.split('@') auth_m = RFC4173_AUTH_REGEX.match(auth) if auth_m is None: raise ValueError('Invalid authentication specified for iSCSI ' 'disk (%s). iSCSI disks must be specified as ' 'iscsi:[user:password[:initiatoruser:' 'initiatorpassword]@]' 'host:proto:port:lun:targetname' % _rfc4173) else: target = rfc4173 target_m = RFC4173_TARGET_REGEX.match(target) if target_m is None: raise ValueError('Invalid target specified for iSCSI disk (%s). ' 'iSCSI disks must be specified as ' 'iscsi:[user:password[:initiatoruser:' 'initiatorpassword]@]' 'host:proto:port:lun:targetname' % _rfc4173) if target_m.group('proto') and target_m.group('proto') != '6': LOG.warn('Specified protocol for iSCSI (%s) is unsupported, ' 'assuming 6 (TCP)', target_m.group('proto')) if not target_m.group('host') or not target_m.group('targetname'): raise ValueError('Both host and targetname must be specified for ' 'iSCSI disks') if auth_m: self.user = auth_m.group('user') self.password = auth_m.group('password') self.iuser = auth_m.group('initiatoruser') self.ipassword = auth_m.group('initiatorpassword') else: self.user = None self.password = None self.iuser = None self.ipassword = None self.host = target_m.group('host') self.proto = '6' self.lun = int(target_m.group('lun')) if target_m.group('lun') else 0 self.target = target_m.group('targetname') try: self.port = int(target_m.group('port')) if target_m.group('port') \ else 3260 except ValueError: raise ValueError('Specified iSCSI port (%s) is not an integer' % target_m.group('port')) portal = '%s:%s' % (self.host, self.port) if self.host.startswith('[') and self.host.endswith(']'): self.host = self.host[1:-1] if not util.is_valid_ipv6_address(self.host): raise ValueError('Specified iSCSI IPv6 address (%s) is not ' 'valid' % self.host) portal = '[%s]:%s' % (self.host, self.port) assert_valid_iscsi_portal(portal) self.portal = portal def __str__(self): rep = 'iscsi' if self.user: rep += ':%s:PASSWORD' % self.user if self.iuser: rep += ':%s:IPASSWORD' % self.iuser rep += ':%s:%s:%s:%s:%s' % (self.host, self.proto, self.port, self.lun, self.target) return rep @property def etciscsi_nodefile(self): return '/etc/iscsi/nodes/%s/%s,%s,%s/default' % ( self.target, self.host, self.port, self.lun) @property def devdisk_path(self): return '/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s' % ( self.portal, self.target, self.lun) def connect(self): if self.target not in iscsiadm_sessions(): iscsiadm_discovery(self.portal) iscsiadm_authenticate(self.target, self.portal, self.user, self.password, self.iuser, self.ipassword) iscsiadm_login(self.target, self.portal) udev.udevadm_settle(self.devdisk_path) # always set automatic mode iscsiadm_set_automatic(self.target, self.portal) def disconnect(self): if self.target not in iscsiadm_sessions(): LOG.warning('Iscsi target %s not in active iscsi sessions', self.target) return try: util.subp(['sync']) iscsiadm_logout(self.target, self.portal) except util.ProcessExecutionError as e: LOG.warn("Unable to logout of iSCSI target %s from portal %s: %s", self.target, self.portal, e) # vi: ts=4 expandtab 
syntax=python curtin-21.3/curtin/block/lvm.py000066400000000000000000000106511415350476600165270ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ This module provides some helper functions for manipulating lvm devices """ from curtin import distro from curtin import util from curtin.log import LOG import os # separator to use for lvm/dm tools _SEP = '=' def _filter_lvm_info(lvtool, match_field, query_field, match_key, args=None): """ filter output of pv/vg/lvdisplay tools """ if args is None: args = [] (out, _) = util.subp([lvtool, '-C', '--separator', _SEP, '--noheadings', '-o', ','.join([match_field, query_field])] + args, capture=True) return [qf for (mf, qf) in [line.strip().split(_SEP) for line in out.strip().splitlines()] if mf == match_key] def get_pvols_in_volgroup(vg_name): """ get physical volumes used by volgroup """ return _filter_lvm_info('pvdisplay', 'vg_name', 'pv_name', vg_name) def get_lvols_in_volgroup(vg_name): """ get logical volumes in volgroup """ return _filter_lvm_info('lvdisplay', 'vg_name', 'lv_name', vg_name) def get_lv_size_bytes(lv_name): """ get the size in bytes of a logical volume specified by lv_name.""" result = _filter_lvm_info('lvdisplay', 'lv_name', 'lv_size', lv_name, args=['--units=B']) if result: return util.human2bytes(result[0]) def split_lvm_name(full): """ split full lvm name into tuple of (volgroup, lv_name) """ # 'dmsetup splitname' is the authoratative source for lvm name parsing (out, _) = util.subp(['dmsetup', 'splitname', full, '-c', '--noheadings', '--separator', _SEP, '-o', 'vg_name,lv_name'], capture=True) return out.strip().split(_SEP) def lvmetad_running(): """ check if lvmetad is running """ return os.path.exists(os.environ.get('LVM_LVMETAD_PIDFILE', '/run/lvmetad.pid')) def activate_volgroups(multipath=False): """ Activate available volgroups and logical volumes within. # found % vgchange -ay 1 logical volume(s) in volume group "vg1sdd" now active # none found (no output) % vgchange -ay """ cmd = ['vgchange', '--activate=y'] if multipath: # only operate on mp devices or encrypted volumes mp_filter = generate_multipath_dev_mapper_filter() cmd.extend(['--config', 'devices{ %s }' % mp_filter]) # vgchange handles syncing with udev by default # see man 8 vgchange and flag --noudevsync out, _ = util.subp(cmd, capture=True) if out: LOG.info(out) def _generate_multipath_filter(accept=None): if not accept: raise ValueError('Missing list of accept patterns') prefix = ", ".join(['"a|%s|"' % p for p in accept]) return 'filter = [ {prefix}, "r|.*|" ]'.format(prefix=prefix) def generate_multipath_dev_mapper_filter(): return _generate_multipath_filter( accept=['/dev/mapper/mpath.*', '/dev/mapper/dm_crypt-.*']) def generate_multipath_dm_uuid_filter(): return _generate_multipath_filter(accept=[ '/dev/disk/by-id/dm-uuid-.*mpath-.*', '/dev/disk/by-id/.*dm_crypt-.*']) def lvm_scan(activate=True, multipath=False): """ run full scan for volgroups, logical volumes and physical volumes """ # prior to xenial, lvmetad is not packaged, so even if a tool supports # flag --cache it has no effect. In Xenial and newer the --cache flag is # used (if lvmetad is running) to ensure that the data cached by # lvmetad is updated. # before appending the cache flag though, check if lvmetad is running. 
this # ensures that we do the right thing even if lvmetad is supported but is # not running release = distro.lsb_release().get('codename') if release in [None, 'UNAVAILABLE']: LOG.warning('unable to find release number, assuming xenial or later') release = 'xenial' if multipath: # only operate on mp devices or encrypted volumes mponly = 'devices{ filter = [ "a|%s|", "a|%s|", "r|.*|" ] }' % ( '/dev/mapper/mpath.*', '/dev/mapper/dm_crypt-.*') for cmd in [['pvscan'], ['vgscan']]: if release != 'precise' and lvmetad_running(): cmd.append('--cache') if multipath: cmd.extend(['--config', mponly]) util.subp(cmd, capture=True) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/mdadm.py000066400000000000000000000674521415350476600170260ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. # This module wraps calls to the mdadm utility for examing Linux SoftRAID # virtual devices. Functions prefixed with 'mdadm_' involve executing # the 'mdadm' command in a subprocess. The remaining functions handle # manipulation of the mdadm output. import os import re import shlex import time from curtin.block import ( dev_path, dev_short, get_holders, is_valid_device, md_get_devices_list, md_get_spares_list, sys_block_path, zero_file_at_offsets, ) from curtin.distro import lsb_release from curtin import (util, udev) from curtin.log import LOG NOSPARE_RAID_LEVELS = [ 'linear', 'raid0', '0', 0, 'container' ] SPARE_RAID_LEVELS = [ 'raid1', 'stripe', 'mirror', '1', 1, 'raid4', '4', 4, 'raid5', '5', 5, 'raid6', '6', 6, 'raid10', '10', 10, ] VALID_RAID_LEVELS = NOSPARE_RAID_LEVELS + SPARE_RAID_LEVELS # https://www.kernel.org/doc/Documentation/md.txt ''' clear No devices, no size, no level Writing is equivalent to STOP_ARRAY ioctl inactive May have some settings, but array is not active all IO results in error When written, doesn't tear down array, but just stops it suspended (not supported yet) All IO requests will block. The array can be reconfigured. Writing this, if accepted, will block until array is quiessent readonly no resync can happen. no superblocks get written. write requests fail read-auto like readonly, but behaves like 'clean' on a write request. clean - no pending writes, but otherwise active. When written to inactive array, starts without resync If a write request arrives then if metadata is known, mark 'dirty' and switch to 'active'. if not known, block and switch to write-pending If written to an active array that has pending writes, then fails. active fully active: IO and resync can be happening. When written to inactive array, starts with resync write-pending clean, but writes are blocked waiting for 'active' to be written. active-idle like active, but no writes have been seen for a while (safe_mode_delay). ''' ERROR_RAID_STATES = [ 'clear', 'inactive', 'suspended', ] READONLY_RAID_STATES = [ 'readonly', ] READWRITE_RAID_STATES = [ 'read-auto', 'clean', 'active', 'active-idle', 'write-pending', ] VALID_RAID_ARRAY_STATES = ( ERROR_RAID_STATES + READONLY_RAID_STATES + READWRITE_RAID_STATES ) # need a on-import check of version and set the value for later reference ''' mdadm version < 3.3 doesn't include enough info when using --export and we must use --detail and parse out information. 
This method checks the mdadm version and will return True if we can use --export for key=value list with enough info, false if version is less than ''' MDADM_USE_EXPORT = lsb_release()['codename'] not in ['precise', 'trusty'] # # mdadm executors # def mdadm_assemble(md_devname=None, devices=[], spares=[], scan=False, ignore_errors=False): # md_devname is a /dev/XXXX # devices is non-empty list of /dev/xxx # if spares is non-empt list append of /dev/xxx cmd = ["mdadm", "--assemble"] if scan: cmd += ['--scan', '-v'] else: valid_mdname(md_devname) cmd += [md_devname, "--run"] + devices if spares: cmd += spares try: # mdadm assemble returns 1 when no arrays are found. this might not be # an error depending on the situation this function was called in, so # accept a return code of 1 # mdadm assemble returns 2 when called on an array that is already # assembled. this is not an error, so accept return code of 2 # all other return codes can be accepted with ignore_error set to true scan, err = util.subp(cmd, capture=True, rcs=[0, 1, 2]) LOG.debug('mdadm assemble scan results:\n%s\n%s', scan, err) scan, err = util.subp(['mdadm', '--detail', '--scan', '-v'], capture=True, rcs=[0, 1]) LOG.debug('mdadm detail scan after assemble:\n%s\n%s', scan, err) except util.ProcessExecutionError: LOG.warning("mdadm_assemble had unexpected return code") if not ignore_errors: raise udev.udevadm_settle() def mdadm_create(md_devname, raidlevel, devices, spares=None, container=None, md_name="", metadata=None): LOG.debug('mdadm_create: ' + 'md_name=%s raidlevel=%s ' % (md_devname, raidlevel) + ' devices=%s spares=%s name=%s' % (devices, spares, md_name)) assert_valid_devpath(md_devname) if not metadata: metadata = 'default' if raidlevel not in VALID_RAID_LEVELS: raise ValueError('Invalid raidlevel: [{}]'.format(raidlevel)) min_devices = md_minimum_devices(raidlevel) devcnt = len(devices) if not container else \ len(md_get_devices_list(container)) if devcnt < min_devices: err = 'Not enough devices (' + str(devcnt) + ') ' err += 'for raidlevel: ' + str(raidlevel) err += ' minimum devices needed: ' + str(min_devices) raise ValueError(err) if spares and raidlevel not in SPARE_RAID_LEVELS: err = ('Raidlevel does not support spare devices: ' + str(raidlevel)) raise ValueError(err) (hostname, _err) = util.subp(["hostname", "-s"], rcs=[0], capture=True) cmd = ["mdadm", "--create", md_devname, "--run", "--homehost=%s" % hostname.strip(), "--raid-devices=%s" % devcnt] if not container: cmd.append("--metadata=%s" % metadata) if raidlevel != 'container': cmd.append("--level=%s" % raidlevel) if md_name: cmd.append("--name=%s" % md_name) if container: cmd.append(container) for device in devices: holders = get_holders(device) if len(holders) > 0: LOG.warning('Detected holders during mdadm creation: %s', holders) raise OSError('Failed to remove holders from %s', device) zero_device(device) cmd.append(device) if spares: cmd.append("--spare-devices=%s" % len(spares)) for device in spares: zero_device(device) cmd.append(device) # Create the raid device udev.udevadm_settle() util.subp(["udevadm", "control", "--stop-exec-queue"]) try: util.subp(cmd, capture=True) except util.ProcessExecutionError: # frequent issues by modules being missing (LP: #1519470) - add debug LOG.debug('mdadm_create failed - extra debug regarding md modules') (out, _err) = util.subp(["lsmod"], capture=True) if not _err: LOG.debug('modules loaded: \n%s' % out) raidmodpath = '/lib/modules/%s/kernel/drivers/md' % os.uname()[2] (out, _err) = util.subp(["find", 
raidmodpath], rcs=[0, 1], capture=True) if out: LOG.debug('available md modules: \n%s' % out) else: LOG.debug('no available md modules found') for dev in devices + spares: h = get_holders(dev) LOG.debug('Device %s has holders: %s', dev, h) raise util.subp(["udevadm", "control", "--start-exec-queue"]) udev.udevadm_settle(exists=md_devname) def mdadm_examine(devpath, export=MDADM_USE_EXPORT): ''' exectute mdadm --examine, and optionally append --export. Parse and return dict of key=val from output''' assert_valid_devpath(devpath) cmd = ["mdadm", "--examine"] if export: cmd.extend(["--export"]) cmd.extend([devpath]) try: (out, _err) = util.subp(cmd, capture=True) except util.ProcessExecutionError: LOG.debug('not a valid md member device: ' + devpath) return {} if export: data = __mdadm_export_to_dict(out) else: data = __mdadm_detail_to_dict(out) return data def set_sync_action(devpath, action=None, retries=None): assert_valid_devpath(devpath) if not action: return if not retries: retries = [0.2] * 60 sync_action = md_sysfs_attr_path(devpath, 'sync_action') if not os.path.exists(sync_action): # arrays without sync_action can't set values return LOG.info("mdadm set sync_action=%s on array %s", action, devpath) for (attempt, wait) in enumerate(retries): try: LOG.debug('mdadm: set sync_action %s attempt %s', devpath, attempt) val = md_sysfs_attr(devpath, 'sync_action').strip() LOG.debug('sync_action = "%s" ? "%s"', val, action) if val != action: LOG.debug("mdadm: setting array sync_action=%s", action) try: util.write_file(sync_action, content=action) except (IOError, OSError) as e: LOG.debug("mdadm: (non-fatal) write to %s failed %s", sync_action, e) else: LOG.debug("mdadm: set array sync_action=%s SUCCESS", action) return except util.ProcessExecutionError: LOG.debug( "mdadm: set sync_action failed, retrying in %s seconds", wait) time.sleep(wait) pass def mdadm_stop(devpath, retries=None): assert_valid_devpath(devpath) if not retries: retries = [0.2] * 60 sync_action = md_sysfs_attr_path(devpath, 'sync_action') sync_max = md_sysfs_attr_path(devpath, 'sync_max') sync_min = md_sysfs_attr_path(devpath, 'sync_min') LOG.info("mdadm stopping: %s" % devpath) for (attempt, wait) in enumerate(retries): try: LOG.debug('mdadm: stop on %s attempt %s', devpath, attempt) # An array in 'resync' state may not be stoppable, attempt to # cancel an ongoing resync val = md_sysfs_attr(devpath, 'sync_action') LOG.debug('%s/sync_max = %s', sync_action, val) if val != "idle": LOG.debug("mdadm: setting array sync_action=idle") try: util.write_file(sync_action, content="idle") except (IOError, OSError) as e: LOG.debug("mdadm: (non-fatal) write to %s failed %s", sync_action, e) # Setting the sync_{max,min} may can help prevent the array from # changing back to 'resync' which may prevent the array from being # stopped val = md_sysfs_attr(devpath, 'sync_max') LOG.debug('%s/sync_max = %s', sync_max, val) if val != "0": LOG.debug("mdadm: setting array sync_{min,max}=0") try: for sync_file in [sync_max, sync_min]: util.write_file(sync_file, content="0") except (IOError, OSError) as e: LOG.debug('mdadm: (non-fatal) write to %s failed %s', sync_file, e) # one wonders why this command doesn't do any of the above itself? 
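            # i.e. this shells out to, for example:
            #   mdadm --manage --stop /dev/md0
            # (with /dev/md0 standing in for whatever devpath was passed)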
out, err = util.subp(["mdadm", "--manage", "--stop", devpath], capture=True) LOG.debug("mdadm stop command output:\n%s\n%s", out, err) LOG.info("mdadm: successfully stopped %s after %s attempt(s)", devpath, attempt+1) return except util.ProcessExecutionError: LOG.warning("mdadm stop failed, retrying ") if os.path.isfile('/proc/mdstat'): LOG.critical("/proc/mdstat:\n%s", util.load_file('/proc/mdstat')) LOG.debug("mdadm: stop failed, retrying in %s seconds", wait) time.sleep(wait) pass raise OSError('Failed to stop mdadm device %s', devpath) def mdadm_remove(devpath): assert_valid_devpath(devpath) LOG.info("mdadm removing: %s" % devpath) out, err = util.subp(["mdadm", "--remove", devpath], rcs=[0], capture=True) LOG.debug("mdadm remove:\n%s\n%s", out, err) def fail_device(mddev, arraydev): assert_valid_devpath(mddev) LOG.info("mdadm mark faulty: %s in array %s", arraydev, mddev) out, err = util.subp(["mdadm", "--fail", mddev, arraydev], rcs=[0], capture=True) LOG.debug("mdadm mark faulty:\n%s\n%s", out, err) def remove_device(mddev, arraydev): assert_valid_devpath(mddev) LOG.info("mdadm remove %s from array %s", arraydev, mddev) out, err = util.subp(["mdadm", "--remove", mddev, arraydev], rcs=[0], capture=True) LOG.debug("mdadm remove:\n%s\n%s", out, err) def zero_device(devpath, force=False): """ Wipe mdadm member device at data offset. For mdadm devices with metadata version 1.1 or newer location of the data offset is provided. This value is used to determine the location to start wiping data to clear data. If metadata version is older then fallback to wiping 1MB at start and end of device; metadata was at end of device. """ assert_valid_devpath(devpath) metadata = mdadm_examine(devpath, export=False) if not metadata and not force: LOG.debug('%s not mdadm member, force=False so skiping zeroing', devpath) return LOG.debug('mdadm.examine metadata:\n%s', util.json_dumps(metadata)) version = metadata.get('version') offsets = [] # wipe at start, end of device for metadata older than 1.1 if version and version in ["1.1", "1.2"]: LOG.debug('mdadm %s has metadata version=%s, extracting offsets', devpath, version) for field in ['super_offset', 'data_offset']: offset, unit = metadata[field].split() if unit == "sectors": offsets.append(int(offset) * 512) else: LOG.warning('Unexpected offset unit: %s', unit) if not offsets: offsets = [0, -(1024 * 1024)] LOG.info('mdadm: wiping md member %s @ offsets %s', devpath, offsets) zero_file_at_offsets(devpath, offsets, buflen=1024, count=1024, strict=True) LOG.info('mdadm: successfully wiped %s' % devpath) def mdadm_query_detail(md_devname, export=MDADM_USE_EXPORT, rawoutput=False): valid_mdname(md_devname) cmd = ["mdadm", "--query", "--detail"] if export: cmd.extend(["--export"]) cmd.extend([md_devname]) (out, err) = util.subp(cmd, capture=True) if rawoutput: return (out, err) if export: data = __mdadm_export_to_dict(out) else: data = __mdadm_detail_to_dict(out) return data def mdadm_detail_scan(): (out, _err) = util.subp(["mdadm", "--detail", "--scan"], capture=True) if not _err: return out def mdadm_run(md_device): return util.subp(["mdadm", "--run", md_device], capture=True) def md_present(mdname): """Check if mdname is present in /proc/mdstat""" if not mdname: raise ValueError('md_present requires a valid md name') try: mdstat = util.load_file('/proc/mdstat') except IOError as e: if util.is_file_not_found_exc(e): LOG.warning('Failed to read /proc/mdstat; ' 'md modules might not be loaded') return False else: raise e md_kname = dev_short(mdname) # Find 
lines like: # md10 : active raid1 vdc1[1] vda2[0] present = [line for line in mdstat.splitlines() if line.split(":")[0].rstrip() == md_kname] if len(present) > 0: return True return False # ------------------------------ # def valid_mdname(md_devname): assert_valid_devpath(md_devname) if not is_valid_device(md_devname): raise ValueError('Specified md device does not exist: ' + md_devname) return False return True def valid_devpath(devpath): if devpath: return devpath.startswith('/dev') return False def assert_valid_devpath(devpath): if not valid_devpath(devpath): raise ValueError("Invalid devpath: '%s'" % devpath) def md_sysfs_attr_path(md_devname, attrname): """ Return the path to a md device attribute under the 'md' dir """ # build /sys/class/block//md sysmd = sys_block_path(md_devname, "md") # append attrname return os.path.join(sysmd, attrname) def md_sysfs_attr(md_devname, attrname, default=''): """ Return the attribute str of an md device found under the 'md' dir """ attrdata = default if not valid_mdname(md_devname): raise ValueError('Invalid md devicename: [{}]'.format(md_devname)) sysfs_attr_path = md_sysfs_attr_path(md_devname, attrname) if os.path.isfile(sysfs_attr_path): attrdata = util.load_file(sysfs_attr_path).strip() return attrdata def md_raidlevel_short(raidlevel): if isinstance(raidlevel, int) or \ raidlevel in ['linear', 'stripe', 'container']: return raidlevel return int(raidlevel.replace('raid', '')) def md_minimum_devices(raidlevel): ''' return the minimum number of devices for a given raid level ''' rl = md_raidlevel_short(raidlevel) if rl in [0, 1, 'linear', 'stripe', 'container']: return 2 if rl in [5]: return 3 if rl in [6, 10]: return 4 return -1 def __md_check_array_state(md_devname, mode='READWRITE'): modes = { 'READWRITE': READWRITE_RAID_STATES, 'READONLY': READONLY_RAID_STATES, 'ERROR': ERROR_RAID_STATES, } if mode not in modes: raise ValueError('Invalid Array State mode: ' + mode) array_state = md_sysfs_attr(md_devname, 'array_state') if array_state in modes[mode]: return True return False def md_check_array_state_rw(md_devname): return __md_check_array_state(md_devname, mode='READWRITE') def md_check_array_state_ro(md_devname): return __md_check_array_state(md_devname, mode='READONLY') def md_check_array_state_error(md_devname): return __md_check_array_state(md_devname, mode='ERROR') def __mdadm_export_to_dict(output): ''' convert Key=Value text output into dictionary ''' return dict(tok.split('=', 1) for tok in shlex.split(output)) def __mdadm_detail_to_dict(input): ''' Convert mdadm --detail/--export output to dictionary /dev/vde: Magic : a92b4efc Version : 1.2 Feature Map : 0x0 Array UUID : 93a73e10:427f280b:b7076c02:204b8f7a Name : wily-foobar:0 (local to host wily-foobar) Creation Time : Sat Dec 12 16:06:05 2015 Raid Level : raid1 Raid Devices : 2 Avail Dev Size : 20955136 (9.99 GiB 10.73 GB) Used Dev Size : 20955136 (9.99 GiB 10.73 GB) Array Size : 10477568 (9.99 GiB 10.73 GB) Data Offset : 16384 sectors Super Offset : 8 sectors Unused Space : before=16296 sectors, after=0 sectors State : clean Device UUID : 8fcd62e6:991acc6e:6cb71ee3:7c956919 Update Time : Sat Dec 12 16:09:09 2015 Bad Block Log : 512 entries available at offset 72 sectors Checksum : 65b57c2e - correct Events : 17 Device Role : spare Array State : AA ('A' == active, '.' 
== missing, 'R' == replacing) ''' data = {} # first line, trim trailing : device = input.splitlines()[0][:-1] if device: data.update({'device': device}) else: raise ValueError('Failed to determine device from input:\n%s', input) # start after the first newline remainder = input[input.find('\n')+1:] # keep only the first section (imsm container) arraysection = remainder.find('\n[') if arraysection != -1: remainder = remainder[:arraysection] # FIXME: probably could do a better regex to match the LHS which # has one, two or three words rem = r'(\w+|\w+\ \w+|\w+\ \w+\ \w+)\ \:\ ([a-zA-Z0-9\-\.,: \(\)=\']+)' for f in re.findall(rem, remainder, re.MULTILINE): key = f[0].replace(' ', '_').lower() val = f[1] if key in data: raise ValueError('Duplicate key in mdadm regex parsing: ' + key) data.update({key: val}) return data def md_device_key_role(devname): if not devname: raise ValueError('Missing parameter devname') return 'MD_DEVICE_' + dev_short(devname) + '_ROLE' def md_device_key_dev(devname): if not devname: raise ValueError('Missing parameter devname') return 'MD_DEVICE_' + dev_short(devname) + '_DEV' def md_read_run_mdadm_map(): ''' md1 1.2 59beb40f:4c202f67:088e702b:efdf577a /dev/md1 md0 0.90 077e6a9e:edf92012:e2a6e712:b193f786 /dev/md0 return # md_shortname = (metaversion, md_uuid, md_devpath) data = { 'md1': (1.2, 59beb40f:4c202f67:088e702b:efdf577a, /dev/md1) 'md0': (0.90, 077e6a9e:edf92012:e2a6e712:b193f786, /dev/md0) ''' mdadm_map = {} run_mdadm_map = '/run/mdadm/map' if os.path.exists(run_mdadm_map): with open(run_mdadm_map, 'r') as fp: data = fp.read().strip() for entry in data.split('\n'): (key, meta, md_uuid, dev) = entry.split() mdadm_map.update({key: (meta, md_uuid, dev)}) return mdadm_map def md_check_array_uuid(md_devname, md_uuid): valid_mdname(md_devname) # confirm we have /dev/{mdname} by following the udev symlink mduuid_path = ('/dev/disk/by-id/md-uuid-' + md_uuid) mdlink_devname = dev_path(os.path.realpath(mduuid_path)) if md_devname != mdlink_devname: err = ('Mismatch between devname and md-uuid symlink: ' + '%s -> %s != %s' % (mduuid_path, mdlink_devname, md_devname)) raise ValueError(err) def md_get_uuid(md_devname): valid_mdname(md_devname) md_query = mdadm_query_detail(md_devname) return md_query.get('MD_UUID', None) def _compare_devlist(expected, found): LOG.debug('comparing device lists: ' 'expected: {} found: {}'.format(expected, found)) expected = set(expected) found = set(found) if expected != found: missing = expected.difference(found) extra = found.difference(expected) raise ValueError("RAID array device list does not match." " Missing: {} Extra: {}".format(missing, extra)) def md_check_raidlevel(md_devname, detail, raidlevel): # Validate raidlevel against what curtin supports configuring if raidlevel not in VALID_RAID_LEVELS: err = ('Invalid raidlevel: ' + raidlevel + ' Must be one of: ' + str(VALID_RAID_LEVELS)) raise ValueError(err) # normalize raidlevel to the values mdadm prints. 
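    # e.g. 0 -> 'raid0', '5' -> 'raid5', 'stripe' -> 'raid0', 'mirror' -> 'raid1'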
if isinstance(raidlevel, int) or len(raidlevel) <= 2: raidlevel = 'raid' + str(raidlevel) elif raidlevel == 'stripe': raidlevel = 'raid0' elif raidlevel == 'mirror': raidlevel = 'raid1' actual_level = detail.get("MD_LEVEL") if actual_level != raidlevel: raise ValueError( "raid device %s should have level %r but has level %r" % ( md_devname, raidlevel, actual_level)) def md_block_until_in_sync(md_devname): ''' sync_completed This shows the number of sectors that have been completed of whatever the current sync_action is, followed by the number of sectors in total that could need to be processed. The two numbers are separated by a '/' thus effectively showing one value, a fraction of the process that is complete. A 'select' on this attribute will return when resync completes, when it reaches the current sync_max (below) and possibly at other times. ''' # FIXME: use selectors to block on: /sys/class/block/mdX/md/sync_completed pass def md_check_array_state(md_devname): # check array state writable = md_check_array_state_rw(md_devname) # Raid 0 arrays do not have degraded or sync_action sysfs # attributes. degraded = md_sysfs_attr(md_devname, 'degraded', None) sync_action = md_sysfs_attr(md_devname, 'sync_action', None) if not writable: raise ValueError('Array not in writable state: ' + md_devname) if degraded is not None and degraded != "0": raise ValueError('Array in degraded state: ' + md_devname) if degraded is not None and sync_action not in ("idle", "resync"): raise ValueError( 'Array is %s, not idle: %s' % (sync_action, md_devname)) def md_check_uuid(md_devname): md_uuid = md_get_uuid(md_devname) if not md_uuid: raise ValueError('Failed to get md UUID from device: ' + md_devname) md_check_array_uuid(md_devname, md_uuid) def md_check_devices(md_devname, devices): if not devices or len(devices) == 0: raise ValueError('Cannot verify raid array with empty device list') # collect and compare raid devices based on md name versus # expected device list. # # NB: In some cases, a device might report as a spare until # md has finished syncing it into the array. Currently # we fail the check since the specified raid device is not # yet in its proper role. Callers can check mdadm_sync_action # state to see if the array is currently recovering, which would # explain the failure. Also mdadm_degraded will indicate if the # raid is currently degraded or not, which would also explain the # failure. md_raid_devices = md_get_devices_list(md_devname) LOG.debug('md_check_devices: md_raid_devs: ' + str(md_raid_devices)) _compare_devlist(devices, md_raid_devices) def md_check_spares(md_devname, spares): # collect and compare spare devices based on md name versus # expected device list. md_raid_spares = md_get_spares_list(md_devname) _compare_devlist(spares, md_raid_spares) def md_check_array_membership(md_devname, devices): # validate that all devices are members of the correct array md_uuid = md_get_uuid(md_devname) for device in devices: dev_examine = mdadm_examine(device, export=True) if 'MD_UUID' not in dev_examine: raise ValueError('Device is not part of an array: ' + device) dev_uuid = dev_examine['MD_UUID'] if dev_uuid != md_uuid: err = "Device {} is not part of {} array. ".format(device, md_devname) err += "MD_UUID mismatch: device:{} != array:{}".format(dev_uuid, md_uuid) raise ValueError(err) def md_check(md_devname, raidlevel, devices, spares, container): ''' Check passed in variables from storage configuration against the system we're running upon. 
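# Hedged sketch only (an assumption, not upstream curtin code) of one way the
# FIXME in md_block_until_in_sync() above could be addressed: sysfs md
# attributes signal changes via POLLPRI, so a caller could poll
# md/sync_completed and re-read it until the kernel reports no sync running.
import select

def _wait_for_sync(md_kname, timeout_ms=1000):
    path = '/sys/class/block/%s/md/sync_completed' % md_kname
    with open(path) as fp:
        poller = select.poll()
        poller.register(fp.fileno(), select.POLLPRI | select.POLLERR)
        while True:
            fp.seek(0)
            # reads 'none' once no resync/recovery is in progress (assumed
            # from kernel md documentation), otherwise 'done / total' sectors
            if fp.read().strip() == 'none':
                return
            poller.poll(timeout_ms)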
''' LOG.debug('RAID validation: ' + 'name={} raidlevel={} devices={} spares={} container={}'.format( md_devname, raidlevel, devices, spares, container)) assert_valid_devpath(md_devname) detail = mdadm_query_detail(md_devname) if raidlevel != "container": md_check_array_state(md_devname) md_check_raidlevel(md_devname, detail, raidlevel) md_check_uuid(md_devname) if container is None: md_check_devices(md_devname, devices) md_check_spares(md_devname, spares) md_check_array_membership(md_devname, devices + spares) else: if 'MD_CONTAINER' not in detail: raise ValueError("%s is not in a container" % ( md_devname)) actual_container = os.path.realpath(detail['MD_CONTAINER']) if actual_container != container: raise ValueError("%s is in container %r, not %r" % ( md_devname, actual_container, container)) LOG.debug('RAID array OK: ' + md_devname) def md_is_in_container(md_devname): return 'MD_CONTAINER' in mdadm_query_detail(md_devname) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/mkfs.py000066400000000000000000000174771415350476600167060ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. # This module wraps calls to mkfs. and determines the appropriate flags # for each filesystem type from curtin import block from curtin import distro from curtin import util import string import os from uuid import uuid4 mkfs_commands = { "btrfs": "mkfs.btrfs", "ext2": "mkfs.ext2", "ext3": "mkfs.ext3", "ext4": "mkfs.ext4", "fat": "mkfs.vfat", "fat12": "mkfs.vfat", "fat16": "mkfs.vfat", "fat32": "mkfs.vfat", "vfat": "mkfs.vfat", "jfs": "jfs_mkfs", "ntfs": "mkntfs", "reiserfs": "mkfs.reiserfs", "swap": "mkswap", "xfs": "mkfs.xfs" } specific_to_family = { "ext2": "ext", "ext3": "ext", "ext4": "ext", "fat12": "fat", "fat16": "fat", "fat32": "fat", "vfat": "fat", } label_length_limits = { "btrfs": 256, "ext": 16, "fat": 11, "jfs": 16, # see jfs_tune manpage "ntfs": 32, "reiserfs": 16, "swap": 15, # not in manpages, found experimentally "xfs": 12 } family_flag_mappings = { "fatsize": {"fat": ("-F", "{fatsize}")}, # flag with no parameter "force": {"btrfs": "--force", "ext": "-F", "fat": "-I", "jfs": "-q", "ntfs": "--force", "reiserfs": "-f", "swap": "--force", "xfs": "-f"}, "label": {"btrfs": ("--label", "{label}"), "ext": ("-L", "{label}"), "fat": ("-n", "{label}"), "jfs": ("-L", "{label}"), "ntfs": ("--label", "{label}"), "reiserfs": ("--label", "{label}"), "swap": ("--label", "{label}"), "xfs": ("-L", "{label}")}, # flag with no parameter, N.B: this isn't used/exposed "quiet": {"ext": "-q", "ntfs": "-q", "reiserfs": "-q", "xfs": "--quiet"}, "sectorsize": { "btrfs": ("--sectorsize", "{sectorsize}",), "ext": ("-b", "{sectorsize}"), "fat": ("-S", "{sectorsize}"), "ntfs": ("--sector-size", "{sectorsize}"), "reiserfs": ("--block-size", "{sectorsize}"), "xfs": ("-s", "{sectorsize}")}, "uuid": {"btrfs": ("--uuid", "{uuid}"), "ext": ("-U", "{uuid}"), "reiserfs": ("--uuid", "{uuid}"), "swap": ("--uuid", "{uuid}"), "xfs": ("-m", "uuid={uuid}")}, } release_flag_mapping_overrides = { "precise": { "force": {"btrfs": None}, "uuid": {"btrfs": None}}, "trusty": { "uuid": {"btrfs": None, "xfs": None}}, } def valid_fstypes(): return list(mkfs_commands.keys()) def get_flag_mapping(flag_name, fs_family, param=None, strict=False): ret = [] release = distro.lsb_release()['codename'] overrides = release_flag_mapping_overrides.get(release, {}) if flag_name in overrides and fs_family in overrides[flag_name]: flag_sym = overrides[flag_name][fs_family] else: flag_sym_families 
= family_flag_mappings.get(flag_name) if flag_sym_families is None: raise ValueError("unsupported flag '%s'" % flag_name) flag_sym = flag_sym_families.get(fs_family) if flag_sym is None: if strict: raise ValueError("flag '%s' not supported by fs family '%s'" % flag_name, fs_family) else: return ret if param is None: ret.append(flag_sym) else: params = [k.format(**{flag_name: param}) for k in flag_sym] if list(params) == list(flag_sym): raise ValueError("Param %s not used for flag_name=%s and " "fs_family=%s." % (param, flag_name, fs_family)) ret.extend(params) return ret def mkfs(path, fstype, strict=False, label=None, uuid=None, force=False, extra_options=None): """Make filesystem on block device with given path using given fstype and appropriate flags for filesystem family. Filesystem uuid and label can be passed in as kwargs. By default no label or uuid will be used. If a filesystem label is too long curtin will raise a ValueError if the strict flag is true or will truncate it to the maximum possible length. If a flag is not supported by a filesystem family mkfs will raise a ValueError if the strict flag is true or silently ignore it otherwise. Force can be specified to force the mkfs command to continue even if it finds old data or filesystems on the partition. If extra_options are supplied they are appended to mkfs command. """ if path is None: raise ValueError("invalid block dev path '%s'" % path) if not os.path.exists(path): raise ValueError("'%s': no such file or directory" % path) fs_family = specific_to_family.get(fstype, fstype) mkfs_cmd = mkfs_commands.get(fstype) if not mkfs_cmd: raise ValueError("unsupported fs type '%s'" % fstype) if util.which(mkfs_cmd) is None: raise ValueError("need '%s' but it could not be found" % mkfs_cmd) cmd = [mkfs_cmd] # use device logical block size to ensure properly formated filesystems (logical_bsize, physical_bsize) = block.get_blockdev_sector_size(path) if logical_bsize > 512: lbs_str = ('size={}'.format(logical_bsize) if fs_family == "xfs" else str(logical_bsize)) cmd.extend(get_flag_mapping("sectorsize", fs_family, param=lbs_str, strict=strict)) if fs_family == 'fat': # mkfs.vfat doesn't calculate this right for non-512b sector size # lp:1569576 , d-i uses the same setting. cmd.extend(["-s", "1"]) if force: cmd.extend(get_flag_mapping("force", fs_family, strict=strict)) if label is not None: limit = label_length_limits.get(fs_family) if len(label) > limit: if strict: raise ValueError("length of fs label for '%s' exceeds max \ allowed for fstype '%s'. 
max is '%s'" % (path, fstype, limit)) else: label = label[:limit] cmd.extend(get_flag_mapping("label", fs_family, param=label, strict=strict)) # If uuid is not specified, generate one and try to use it if uuid is None: uuid = str(uuid4()) cmd.extend(get_flag_mapping("uuid", fs_family, param=uuid, strict=strict)) if fs_family == "fat": fat_size = fstype.strip(string.ascii_letters) if fat_size in ["12", "16", "32"]: cmd.extend(get_flag_mapping("fatsize", fs_family, param=fat_size, strict=strict)) if extra_options: cmd.extend(extra_options) cmd.append(path) util.subp(cmd, capture=True) # if fs_family does not support specifying uuid then use blkid to find it # if blkid is unable to then just return None for uuid if fs_family not in family_flag_mappings['uuid']: try: uuid = block.blkid()[path]['UUID'] except Exception: pass # return uuid, may be none if it could not be specified and blkid could not # find it return uuid def mkfs_from_config(path, info, strict=False): """Make filesystem on block device with given path according to storage config given""" fstype = info.get('fstype') if fstype is None: raise ValueError("fstype must be specified") # NOTE: Since old metadata on partitions that have not been wiped can cause # some mkfs commands to refuse to work, it's best to use force=True mkfs(path, fstype, strict=strict, force=True, uuid=info.get('uuid'), label=info.get('label'), extra_options=info.get('extra_options')) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/multipath.py000066400000000000000000000201501415350476600177330ustar00rootroot00000000000000import os from curtin.log import LOG from curtin import util from curtin import udev SHOW_PATHS_FMT = ("device='%d' serial='%z' multipath='%m' host_wwpn='%N' " "target_wwnn='%n' host_wwpn='%R' target_wwpn='%r' " "host_adapter='%a'") SHOW_MAPS_FMT = "name='%n' multipath='%w' sysfs='%d' paths='%N'" def _extract_mpath_data(cmd, show_verb): """ Parse output from specifed command output via load_shell_content.""" data, _err = util.subp(cmd, capture=True) result = [] for line in data.splitlines(): mp_dict = util.load_shell_content(line, add_empty=True) LOG.debug('Extracted multipath %s fields: %s', show_verb, mp_dict) if mp_dict: result.append(mp_dict) return result def show_paths(): """ Query multipathd for paths output and return a dict of the values.""" cmd = ['multipathd', 'show', 'paths', 'raw', 'format', SHOW_PATHS_FMT] return _extract_mpath_data(cmd, 'paths') def show_maps(): """ Query multipathd for maps output and return a dict of the values.""" cmd = ['multipathd', 'show', 'maps', 'raw', 'format', SHOW_MAPS_FMT] return _extract_mpath_data(cmd, 'maps') def dmname_to_blkdev_mapping(): """ Use dmsetup ls output to build a dict of DM_NAME, /dev/dm-x values.""" data, _err = util.subp(['dmsetup', 'ls', '-o', 'blkdevname'], capture=True) mapping = {} if data and data.strip() != "No devices found": LOG.debug('multipath: dmsetup ls output:\n%s', data) for line in data.splitlines(): if line: dm_name, blkdev = line.split('\t') # (dm-1) -> /dev/dm-1 mapping[dm_name] = '/dev/' + blkdev.strip('()') return mapping def is_mpath_device(devpath, info=None): """ Check if devpath is a multipath device, returns boolean. """ result = False if not info: info = udev.udevadm_info(devpath) if info.get('DM_UUID', '').startswith('mpath-'): result = True LOG.debug('%s is multipath device? %s', devpath, result) return result def is_mpath_member(devpath, info=None): """ Check if a device is a multipath member (a path), returns boolean. 
""" result = False if not info: info = udev.udevadm_info(devpath) if info.get("DM_MULTIPATH_DEVICE_PATH") == "1": result = True LOG.debug('%s is multipath device member? %s', devpath, result) return result def is_mpath_partition(devpath, info=None): """ Check if a device is a multipath partition, returns boolean. """ result = False if devpath.startswith('/dev/dm-'): if not info: info = udev.udevadm_info(devpath) if 'DM_PART' in info and 'DM_MPATH' in info: result = True LOG.debug("%s is multipath device partition? %s", devpath, result) return result def mpath_partition_to_mpath_id_and_partnumber(devpath): """ Return the mpath id and partition number of a multipath partition. """ info = udev.udevadm_info(devpath) if 'DM_MPATH' in info and 'DM_PART' in info: return info['DM_MPATH'], info['DM_PART'] return None def remove_partition(devpath, retries=10): """ Remove a multipath partition mapping. """ LOG.debug('multipath: removing multipath partition: %s', devpath) for _ in range(0, retries): util.subp(['dmsetup', 'remove', '--force', '--retry', devpath]) udev.udevadm_settle() if not os.path.exists(devpath): return util.wait_for_removal(devpath) def remove_map(map_id, retries=10): """ Remove a multipath device mapping. """ LOG.debug('multipath: removing multipath map: %s', map_id) devpath = '/dev/mapper/%s' % map_id for _ in range(0, retries): util.subp(['multipath', '-v3', '-R3', '-f', map_id], rcs=[0, 1]) udev.udevadm_settle() if not os.path.exists(devpath): return util.wait_for_removal(devpath) def find_mpath_members(multipath_id, paths=None): """ Return a list of device path for each member of aspecified mpath_id.""" if not paths: paths = show_paths() for retry in range(0, 5): orphans = [path for path in paths if 'orphan' in path['multipath']] if len(orphans): udev.udevadm_settle() paths = show_paths() else: break members = ['/dev/' + path['device'] for path in paths if path['multipath'] == multipath_id] return members def find_mpath_id(devpath): """ Return the mpath_id associated with a specified device path. """ info = udev.udevadm_info(devpath) return info.get('DM_NAME') def find_mpath_id_by_path(devpath, paths=None): """ Return the mpath_id associated with a specified device path. """ if not paths: paths = show_paths() if devpath.startswith('/dev/dm-'): raise ValueError('find_mpath_id_by_path does not handle ' 'device-mapper devices: %s' % devpath) for path in paths: if devpath == '/dev/' + path['device']: return path['multipath'] return None def find_mpath_id_by_parent(multipath_id, partnum=None): """ Return the mpath_id associated with a specified device path. 
""" devmap = dmname_to_blkdev_mapping() LOG.debug('multipath: dm_name blk map: %s', devmap) dm_name = multipath_id if partnum: dm_name += "-part%d" % int(partnum) return (dm_name, devmap.get(dm_name)) def find_mpath_partitions(mpath_id): """ Return a generator of multipath ids which are partitions of 'mpath-id' """ # {'mpatha': '/dev/dm-0', # 'mpatha-part1': '/dev/dm-3', # 'mpatha-part2': '/dev/dm-4', # 'mpathb': '/dev/dm-12'} if not mpath_id: raise ValueError('Invalid mpath_id parameter: %s' % mpath_id) return (mp_id for (mp_id, _dm_dev) in dmname_to_blkdev_mapping().items() if mp_id.startswith(mpath_id + '-')) def get_mpath_id_from_device(device): # /dev/dm-X if is_mpath_device(device) or is_mpath_partition(device): info = udev.udevadm_info(device) return info.get('DM_NAME') # /dev/sdX if is_mpath_member(device): return find_mpath_id_by_path(device) return None def force_devmapper_symlinks(): """Check if /dev/mapper/mpath* files are symlinks, if not trigger udev.""" LOG.debug('Verifying /dev/mapper/mpath* files are symlinks') needs_trigger = [] for mp_id, dm_dev in dmname_to_blkdev_mapping().items(): if mp_id.startswith('mpath'): mapper_path = '/dev/mapper/' + mp_id if not os.path.islink(mapper_path): LOG.warning( 'Found invalid device mapper mp path: %s, removing', mapper_path) util.del_file(mapper_path) needs_trigger.append((mapper_path, dm_dev)) if len(needs_trigger): for (mapper_path, dm_dev) in needs_trigger: LOG.debug('multipath: regenerating symlink for %s (%s)', mapper_path, dm_dev) util.subp(['udevadm', 'trigger', '--subsystem-match=block', '--action=add', '/sys/class/block/' + os.path.basename(dm_dev)]) udev.udevadm_settle(exists=mapper_path) if not os.path.islink(mapper_path): LOG.error('Failed to regenerate udev symlink %s', mapper_path) def reload(): """ Request multipath to force reload devmaps. """ util.subp(['multipath', '-r']) def multipath_supported(): """Return a boolean indicating if multipath is supported.""" try: multipath_assert_supported() return True except RuntimeError: return False def multipath_assert_supported(): """ Determine if the runtime system supports multipath. returns: True if system supports multipath raises: RuntimeError: if system does not support multipath """ missing_progs = [p for p in ('multipath', 'multipathd') if not util.which(p)] if missing_progs: raise RuntimeError( "Missing multipath utils: %s" % ','.join(missing_progs)) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/schemas.py000066400000000000000000000337151415350476600173620ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
_uuid_pattern = ( r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}') _path_dev = r'^/dev/[^/]+(/[^/]+)*$' _path_nondev = r'(^/$|^(/[^/]+)+$)' _fstypes = ['btrfs', 'ext2', 'ext3', 'ext4', 'fat', 'fat12', 'fat16', 'fat32', 'iso9660', 'vfat', 'jfs', 'ntfs', 'reiserfs', 'swap', 'xfs', 'zfsroot'] _ptable_unsupported = 'unsupported' _ptables = ['dos', 'gpt', 'msdos', 'vtoc'] _ptables_valid = _ptables + [_ptable_unsupported] definitions = { 'id': {'type': 'string'}, 'ref_id': {'type': 'string'}, 'devices': {'type': 'array', 'items': {'$ref': '#/definitions/ref_id'}}, 'name': {'type': 'string'}, 'preserve': {'type': 'boolean'}, 'ptable': {'type': 'string', 'enum': _ptables_valid}, 'size': {'type': ['string', 'number'], 'minimum': 1, 'pattern': r'^([1-9]\d*(.\d+)?|\d+.\d+)(K|M|G|T)?B?'}, 'wipe': { 'type': 'string', 'enum': ['random', 'superblock', 'superblock-recursive', 'zero'], }, 'uuid': { 'type': 'string', 'pattern': _uuid_pattern, }, 'params': { 'type': 'object', 'patternProperties': { r'^.*$': { 'oneOf': [ {'type': 'boolean'}, {'type': 'integer'}, {'type': 'null'}, {'type': 'string'}, ], }, }, }, } BCACHE = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-BCACHE', 'title': 'curtin storage configuration for a bcache device.', 'description': ('Declarative syntax for specifying bcache device.'), 'definitions': definitions, 'required': ['id', 'type'], 'anyOf': [ {'required': ['backing_device']}, {'required': ['cache_device']}], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'backing_device': {'$ref': '#/definitions/ref_id'}, 'cache_device': {'$ref': '#/definitions/ref_id'}, 'name': {'$ref': '#/definitions/name'}, 'preserve': {'$ref': '#/definitions/preserve'}, 'type': {'const': 'bcache'}, 'cache_mode': { 'type': ['string'], 'enum': ['writethrough', 'writeback', 'writearound', 'none'], }, }, } DASD = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-DASD', 'title': 'curtin storage configuration for dasds', 'description': ( 'Declarative syntax for specifying a dasd device.'), 'definitions': definitions, 'required': ['id', 'type', 'device_id'], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'name': {'$ref': '#/definitions/name'}, 'preserve': {'$ref': '#/definitions/preserve'}, 'type': {'const': 'dasd'}, 'blocksize': { 'type': ['integer', 'string'], 'oneOf': [{'enum': [512, 1024, 2048, 4096]}, {'enum': ['512', '1024', '2048', '4096']}], }, 'device_id': {'type': 'string'}, 'label': {'type': 'string', 'maxLength': 6}, 'mode': { 'type': ['string'], 'enum': ['expand', 'full', 'quick'], }, 'disk_layout': { 'type': ['string'], 'enum': ['cdl', 'ldl', 'not-formatted'], }, }, } DISK = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-DISK', 'title': 'curtin storage configuration for disks', 'description': ( 'Declarative syntax for specifying disks and partition format.'), 'definitions': definitions, 'required': ['id', 'type'], 'anyOf': [ {'required': ['serial']}, {'required': ['wwn']}, {'required': ['path']}], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'name': {'$ref': '#/definitions/name'}, 'multipath': {'type': 'string'}, 'device_id': {'type': 'string'}, 'preserve': {'$ref': '#/definitions/preserve'}, 'wipe': {'$ref': '#/definitions/wipe'}, 'type': {'const': 'disk'}, 'ptable': {'$ref': '#/definitions/ptable'}, 'serial': {'type': 'string'}, 'path': { 'type': 
'string', 'oneOf': [ {'pattern': _path_dev}, {'pattern': r'^iscsi:.*'}], }, 'model': {'type': 'string'}, 'wwn': { 'type': 'string', 'oneOf': [ {'pattern': r'^0x(\d|[a-zA-Z])+'}, {'pattern': r'^(nvme|eui|uuid)\.([-0-9a-zA-Z])+'}], }, 'grub_device': { 'type': ['boolean', 'integer'], 'minimum': 0, 'maximum': 1 }, }, } DM_CRYPT = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-DMCRYPT', 'title': 'curtin storage configuration for creating encrypted volumes', 'description': ('Declarative syntax for specifying encrypted volumes.'), 'definitions': definitions, 'required': ['id', 'type', 'volume', 'dm_name'], 'oneOf': [ {'required': ['key']}, {'required': ['keyfile']}], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'dm_name': {'$ref': '#/definitions/name'}, 'volume': {'$ref': '#/definitions/ref_id'}, 'key': {'$ref': '#/definitions/id'}, 'keyfile': {'$ref': '#/definitions/id'}, 'preserve': {'$ref': '#/definitions/preserve'}, 'type': {'const': 'dm_crypt'}, }, } FORMAT = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-FORMAT', 'title': 'curtin storage configuration for formatting filesystems', 'description': ('Declarative syntax for specifying filesystem layout.'), 'definitions': definitions, 'required': ['id', 'type', 'volume', 'fstype'], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'name': {'$ref': '#/definitions/name'}, 'preserve': {'$ref': '#/definitions/preserve'}, 'uuid': {'$ref': '#/definitions/uuid'}, # XXX: This is not used 'type': {'const': 'format'}, 'fstype': {'type': 'string'}, 'label': {'type': 'string'}, 'volume': {'$ref': '#/definitions/ref_id'}, 'extra_options': {'type': 'array', 'items': {'type': 'string'}}, }, 'anyOf': [ # XXX: Accept vmtest values? 
{'properties': {'fstype': {'pattern': r'^__.*__$'}}}, {'properties': {'fstype': {'enum': _fstypes}}}, { 'properties': {'preserve': {'enum': [True]}}, 'required': ['preserve'] # this looks redundant but isn't } ] } LVM_PARTITION = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-LVMPARTITION', 'title': 'curtin storage configuration for formatting lvm logical vols', 'description': ('Declarative syntax for specifying lvm logical vols.'), 'definitions': definitions, 'required': ['id', 'type', 'volgroup', 'name'], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'name': {'$ref': '#/definitions/name'}, 'preserve': {'type': 'boolean'}, 'size': {'$ref': '#/definitions/size'}, # XXX: This is not used 'type': {'const': 'lvm_partition'}, 'volgroup': {'$ref': '#/definitions/ref_id'}, }, } LVM_VOLGROUP = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-LVMVOLGROUP', 'title': 'curtin storage configuration for formatting lvm volume groups', 'description': ('Declarative syntax for specifying lvm volgroup layout.'), 'definitions': definitions, 'required': ['id', 'type', 'devices', 'name'], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'devices': {'$ref': '#/definitions/devices'}, 'name': {'$ref': '#/definitions/name'}, 'preserve': {'type': 'boolean'}, 'uuid': {'$ref': '#/definitions/uuid'}, # XXX: This is not used 'type': {'const': 'lvm_volgroup'}, }, } MOUNT = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-MOUNT', 'title': 'curtin storage configuration for mounts', 'description': ('Declarative syntax for specifying devices mounts.'), 'definitions': definitions, 'required': ['id', 'type'], 'anyOf': [ {'required': ['path']}, {'required': ['device']}, {'required': ['spec']}], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'type': {'const': 'mount'}, 'path': { 'type': 'string', 'oneOf': [ {'pattern': _path_nondev}, {'enum': ['none']}, ], }, 'device': {'$ref': '#/definitions/ref_id'}, 'fstype': {'type': 'string'}, 'options': { 'type': 'string', 'oneOf': [ {'pattern': r'\S+(,\S+)*'}, {'enum': ['']}, ], }, 'spec': {'type': 'string'}, # XXX: Tighten this to fstab fs_spec 'freq': {'type': ['integer', 'string'], 'pattern': r'[0-9]'}, 'passno': {'type': ['integer', 'string'], 'pattern': r'[0-9]'}, }, } PARTITION = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-PARTITION', 'title': 'curtin storage configuration for partitions', 'description': ('Declarative syntax for specifying partition layout.'), 'definitions': definitions, 'required': ['id', 'type', 'device', 'size'], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'multipath': {'type': 'string'}, 'name': {'$ref': '#/definitions/name'}, 'offset': {'$ref': '#/definitions/size'}, # XXX: This is not used 'preserve': {'$ref': '#/definitions/preserve'}, 'size': {'$ref': '#/definitions/size'}, 'uuid': {'$ref': '#/definitions/uuid'}, # XXX: This is not used 'wipe': {'$ref': '#/definitions/wipe'}, 'type': {'const': 'partition'}, 'number': {'type': ['integer', 'string'], 'pattern': r'[1-9][0-9]*', 'minimum': 1}, 'device': {'$ref': '#/definitions/ref_id'}, 'flag': {'type': 'string', 'enum': ['bios_grub', 'boot', 'extended', 'home', 'linux', 'logical', 'lvm', 'mbr', 'prep', 'raid', 'swap', '']}, 'grub_device': { 'type': ['boolean', 'integer'], 'minimum': 0, 'maximum': 1 
}, } } RAID = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-RAID', 'title': 'curtin storage configuration for a RAID.', 'description': ('Declarative syntax for specifying RAID.'), 'definitions': definitions, 'required': ['id', 'type', 'name', 'raidlevel'], 'type': 'object', 'additionalProperties': False, 'oneOf': [ {'required': ['devices']}, {'required': ['container']}, ], 'properties': { 'id': {'$ref': '#/definitions/id'}, 'devices': {'$ref': '#/definitions/devices'}, 'name': {'$ref': '#/definitions/name'}, 'mdname': {'$ref': '#/definitions/name'}, # XXX: Docs need updating 'metadata': {'type': ['string', 'number']}, 'preserve': {'$ref': '#/definitions/preserve'}, 'ptable': {'$ref': '#/definitions/ptable'}, 'wipe': {'$ref': '#/definitions/wipe'}, 'spare_devices': {'$ref': '#/definitions/devices'}, 'container': {'$ref': '#/definitions/id'}, 'type': {'const': 'raid'}, 'raidlevel': { 'type': ['integer', 'string'], 'oneOf': [ {'enum': [0, 1, 4, 5, 6, 10]}, {'enum': ['container', 'raid0', 'linear', '0', 'raid1', 'mirror', 'stripe', '1', 'raid4', '4', 'raid5', '5', 'raid6', '6', 'raid10', '10']}, # XXX: Docs need updating ], }, }, } ZFS = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-ZFS', 'title': 'curtin storage configuration for a ZFS dataset.', 'description': ('Declarative syntax for specifying a ZFS dataset.'), 'definitions': definitions, 'required': ['id', 'type', 'pool', 'volume'], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'pool': {'$ref': '#/definitions/ref_id'}, 'properties': {'$ref': '#/definitions/params'}, 'volume': {'$ref': '#/definitions/name'}, 'type': {'const': 'zfs'}, }, } ZPOOL = { '$schema': 'http://json-schema.org/draft-07/schema#', 'name': 'CURTIN-ZPOOL', 'title': 'curtin storage configuration for a ZFS pool.', 'description': ('Declarative syntax for specifying a ZFS pool.'), 'definitions': definitions, 'required': ['id', 'type', 'pool', 'vdevs'], 'type': 'object', 'additionalProperties': False, 'properties': { 'id': {'$ref': '#/definitions/id'}, 'vdevs': {'$ref': '#/definitions/devices'}, 'pool': {'$ref': '#/definitions/name'}, 'pool_properties': {'$ref': '#/definitions/params'}, 'fs_properties': {'$ref': '#/definitions/params'}, 'mountpoint': { 'type': 'string', 'oneOf': [ {'pattern': _path_nondev}, {'enum': ['none']}, ], }, 'type': {'const': 'zpool'}, }, } # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/block/zfs.py000066400000000000000000000236041415350476600165350ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ Wrap calls to the zfsutils-linux package (zpool, zfs) for creating zpools and volumes.""" import os from curtin.config import merge_config from curtin import distro from curtin import util from . import blkid, get_supported_filesystems ZPOOL_DEFAULT_PROPERTIES = { 'ashift': 12, 'version': 28, } ZFS_DEFAULT_PROPERTIES = { 'atime': 'off', 'canmount': 'off', 'normalization': 'formD', } ZFS_UNSUPPORTED_ARCHES = ['i386'] ZFS_UNSUPPORTED_RELEASES = ['precise', 'trusty'] def _join_flags(optflag, params): """ Insert optflag for each param in params and return combined list. 
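# Hedged illustration (helper name is hypothetical, not upstream code) of the
# flag expansion performed for zpool/zfs properties: each key=value pair is
# repeated behind the option flag, so the ZPOOL defaults above become
# ['-o', 'ashift=12', '-o', 'version=28'] on the `zpool create` command line.
def _expand_props(optflag, props):
    out = []
    for key, value in props.items():
        out.extend([optflag, '%s=%s' % (key, value)])
    return out

# _expand_props('-o', {'ashift': 12, 'version': 28})
#   == ['-o', 'ashift=12', '-o', 'version=28']
# (_join_flags itself additionally maps Python booleans to 'on'/'off')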
:param optflag: String of the optional flag, like '-o' :param params: dictionary of parameter names and values :returns: List of strings :raises: ValueError: if params are of incorrect type Example: optflag='-o', params={'foo': 1, 'bar': 2} => ['-o', 'foo=1', '-o', 'bar=2'] """ if not isinstance(optflag, str) or not optflag: raise ValueError("Invalid optflag: %s", optflag) if not isinstance(params, dict): raise ValueError("Invalid params: %s", params) # zfs flags and params require string booleans ('on', 'off') # yaml implicity converts those and others to booleans, we # revert that here def _b2s(value): if not isinstance(value, bool): return value if value: return 'on' return 'off' return [] if not params else ( [param for opt in zip([optflag] * len(params), ["%s=%s" % (k, _b2s(v)) for (k, v) in params.items()]) for param in opt]) def _join_pool_volume(poolname, volume): """ Combine poolname and volume. """ if not poolname or not volume: raise ValueError('Invalid pool (%s) or volume (%s)', poolname, volume) return os.path.normpath("%s/%s" % (poolname, volume)) def zfs_supported(): """Return a boolean indicating if zfs is supported.""" try: zfs_assert_supported() return True except RuntimeError: return False def zfs_assert_supported(): """ Determine if the runtime system supports zfs. returns: True if system supports zfs raises: RuntimeError: if system does not support zfs """ arch = util.get_platform_arch() if arch in ZFS_UNSUPPORTED_ARCHES: raise RuntimeError("zfs is not supported on architecture: %s" % arch) release = distro.lsb_release()['codename'] if release in ZFS_UNSUPPORTED_RELEASES: raise RuntimeError("zfs is not supported on release: %s" % release) if 'zfs' not in get_supported_filesystems(): try: util.load_kernel_module('zfs') except util.ProcessExecutionError as err: raise RuntimeError("Failed to load 'zfs' kernel module: %s" % err) missing_progs = [p for p in ('zpool', 'zfs') if not util.which(p)] if missing_progs: raise RuntimeError("Missing zfs utils: %s" % ','.join(missing_progs)) def zpool_create(poolname, vdevs, mountpoint=None, altroot=None, pool_properties=None, zfs_properties=None): """ Create a zpool called comprised of devices specified in . :param poolname: String used to name the pool. :param vdevs: An iterable of strings of block devices paths which *should* start with '/dev/disk/by-id/' to follow best practices. :param pool_properties: A dictionary of key, value pairs to be passed to `zpool create` with the `-o` flag as properties of the zpool. If value is None, then ZPOOL_DEFAULT_PROPERTIES will be used. :param zfs_properties: A dictionary of key, value pairs to be passed to `zpool create` with the `-O` flag as properties of the filesystems created under the pool. If the value is None, then ZFS_DEFAULT_PROPERTIES will be used. :returns: None on success. :raises: ValueError: raises exceptions on missing/badd input :raises: ProcessExecutionError: raised on unhandled exceptions from invoking `zpool create`. 
""" if not isinstance(poolname, util.string_types) or not poolname: raise ValueError("Invalid poolname: %s", poolname) if isinstance(vdevs, util.string_types) or isinstance(vdevs, dict): raise TypeError("Invalid vdevs: expected list-like iterable") else: try: vdevs = list(vdevs) except TypeError: raise TypeError("vdevs must be iterable, not: %s" % str(vdevs)) pool_cfg = ZPOOL_DEFAULT_PROPERTIES.copy() if pool_properties: merge_config(pool_cfg, pool_properties) zfs_cfg = ZFS_DEFAULT_PROPERTIES.copy() if zfs_properties: merge_config(zfs_cfg, zfs_properties) options = _join_flags('-o', pool_cfg) options.extend(_join_flags('-O', zfs_cfg)) if mountpoint: options.extend(_join_flags('-O', {'mountpoint': mountpoint})) if altroot: options.extend(['-R', altroot]) cmd = ["zpool", "create"] + options + [poolname] + vdevs util.subp(cmd, capture=True) # Trigger generation of zpool.cache file cmd = ["zpool", "set", "cachefile=/etc/zfs/zpool.cache", poolname] util.subp(cmd, capture=True) def zfs_create(poolname, volume, zfs_properties=None): """ Create a filesystem dataset within the specified zpool. :param poolname: String used to specify the pool in which to create the filesystem. :param volume: String used as the name of the filesystem. :param zfs_properties: A dict of properties to be passed to `zfs create` with the `-o` flag as properties of the filesystems created under the pool. If value is None then no properties will be set on the filesystem. :returns: None :raises: ValueError: raises exceptions on missing/bad input. :raises: ProcessExecutionError: raised on unhandled exceptions from invoking `zfs create`. """ if not isinstance(poolname, util.string_types) or not poolname: raise ValueError("Invalid poolname: %s", poolname) if not isinstance(volume, util.string_types) or not volume: raise ValueError("Invalid volume: %s", volume) zfs_cfg = {} if zfs_properties: merge_config(zfs_cfg, zfs_properties) options = _join_flags('-o', zfs_cfg) cmd = ["zfs", "create"] + options + [_join_pool_volume(poolname, volume)] util.subp(cmd, capture=True) # mount volume if it canmount=noauto if zfs_cfg.get('canmount') == 'noauto': zfs_mount(poolname, volume) def zfs_mount(poolname, volume): """ Mount zfs pool/volume :param poolname: String used to specify the pool in which to create the filesystem. :param volume: String used as the name of the filesystem. :returns: None :raises: ValueError: raises exceptions on missing/bad input. :raises: ProcessExecutionError: raised on unhandled exceptions from invoking `zfs mount`. """ if not isinstance(poolname, util.string_types) or not poolname: raise ValueError("Invalid poolname: %s", poolname) if not isinstance(volume, util.string_types) or not volume: raise ValueError("Invalid volume: %s", volume) cmd = ['zfs', 'mount', _join_pool_volume(poolname, volume)] util.subp(cmd, capture=True) def zpool_list(): """ Return a list of zfs pool names which have been imported :returns: List of strings """ # -H drops the header, -o specifies an attribute to fetch out, _err = util.subp(['zpool', 'list', '-H', '-o', 'name'], capture=True) return out.splitlines() def zpool_export(poolname): """ Export specified zpool :param poolname: String used to specify the pool to export. 
:returns: None """ if not isinstance(poolname, util.string_types) or not poolname: raise ValueError("Invalid poolname: %s", poolname) util.subp(['zpool', 'export', poolname]) def device_to_poolname(devname): """ Use blkid information to map a devname to a zpool poolname stored in in 'LABEL' if devname is a zfs_member and LABEL is set. :param devname: A block device name :returns: String Example blkid output on a zfs vdev: {'/dev/vdb1': {'LABEL': 'rpool', 'PARTUUID': '52dff41a-49be-44b3-a36a-1b499e570e69', 'TYPE': 'zfs_member', 'UUID': '12590398935543668673', 'UUID_SUB': '7809435738165038086'}} device_to_poolname('/dev/vdb1') would return 'rpool' """ if not isinstance(devname, util.string_types) or not devname: raise ValueError("device_to_poolname: invalid devname: '%s'" % devname) blkid_info = blkid(devs=[devname]) if not blkid_info or devname not in blkid_info: return vdev = blkid_info.get(devname) vdev_type = vdev.get('TYPE') label = vdev.get('LABEL') if vdev_type == 'zfs_member' and label: return label def get_zpool_from_config(cfg): """Parse a curtin storage config and return a list of zpools that were created. """ if not cfg: return [] if 'storage' not in cfg: return [] zpools = [] sconfig = cfg['storage']['config'] for item in sconfig: if item['type'] == 'zpool': zpools.append(item['pool']) elif item['type'] == 'format': if item['fstype'] == 'zfsroot': # curtin.commands.blockmeta sets pool='rpool' for zfsroot zpools.append('rpool') return zpools # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/000077500000000000000000000000001415350476600160635ustar00rootroot00000000000000curtin-21.3/curtin/commands/__init__.py000066400000000000000000000005771415350476600202050ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. def populate_one_subcmd(parser, options_dict, handler): for ent in options_dict: args = ent[0] if not isinstance(args, (list, tuple)): args = (args,) parser.add_argument(*args, **ent[1]) parser.set_defaults(func=handler) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/__main__.py000066400000000000000000000001321415350476600201510ustar00rootroot00000000000000if __name__ == '__main__': from .main import main import sys sys.exit(main()) curtin-21.3/curtin/commands/apply_net.py000066400000000000000000000225651415350476600204420ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys from .. import log import curtin.net as net import curtin.util as util from curtin import config from curtin import paths from . import populate_one_subcmd LOG = log.LOG IFUPDOWN_IPV6_MTU_PRE_HOOK = """#!/bin/bash -e # injected by curtin installer [ "${IFACE}" != "lo" ] || exit 0 # Trigger only if MTU configured [ -n "${IF_MTU}" ] || exit 0 read CUR_DEV_MTU /run/network/${IFACE}_dev.mtu [ -n "${CUR_IPV6_MTU}" ] && echo ${CUR_IPV6_MTU} > /run/network/${IFACE}_ipv6.mtu exit 0 """ IFUPDOWN_IPV6_MTU_POST_HOOK = """#!/bin/bash -e # injected by curtin installer [ "${IFACE}" != "lo" ] || exit 0 # Trigger only if MTU configured [ -n "${IF_MTU}" ] || exit 0 read PRE_DEV_MTU /proc/sys/net/ipv6/conf/${IFACE}/mtu ||: elif [ "${ADDRFAM}" = "inet" ]; then # handle the clobber case where inet mtu changes v6 mtu. # ifupdown will already have set dev mtu, so lower mtu # if needed. If v6 mtu was larger, it get's clamped down # to the dev MTU value. 
if [ ${PRE_IPV6_MTU} -lt ${CUR_IPV6_MTU} ]; then # sysctl -q -e -w net.ipv6.conf.${IFACE}.mtu=${PRE_IPV6_MTU} echo ${PRE_IPV6_MTU} >/proc/sys/net/ipv6/conf/${IFACE}/mtu ||: fi fi exit 0 """ def apply_net(target, network_state=None, network_config=None): if network_state is None and network_config is None: raise ValueError("Must provide network_config or network_state") if target is None: raise ValueError("target cannot be None.") passthrough = False if network_state: # NB: we cannot support passthrough until curtin can convert from # network_state to network-config yaml ns = net.network_state.from_state_file(network_state) raise ValueError('Not Supported; curtin lacks a network_state to ' 'network_config converter.') elif network_config: netcfg = config.load_config(network_config) # curtin will pass-through the netconfig into the target # for rendering at runtime unless the target OS does not # support NETWORK_CONFIG_V2 feature. LOG.info('Checking cloud-init in target [%s] for network ' 'configuration passthrough support.', target) try: passthrough = net.netconfig_passthrough_available(target) except util.ProcessExecutionError: LOG.warning('Failed to determine if passthrough is available') if passthrough: LOG.info('Passing network configuration through to target: %s', target) net.render_netconfig_passthrough(target, netconfig=netcfg) else: ns = net.parse_net_config_data(netcfg.get('network', {})) if ns is None: return if not passthrough: LOG.info('Rendering network configuration in target') net.render_network_state(target=target, network_state=ns) _maybe_remove_legacy_eth0(target) _disable_ipv6_privacy_extensions(target) _patch_ifupdown_ipv6_mtu_hook(target) def _patch_ifupdown_ipv6_mtu_hook(target, prehookfn="etc/network/if-pre-up.d/mtuipv6", posthookfn="etc/network/if-up.d/mtuipv6"): contents = { 'prehook': IFUPDOWN_IPV6_MTU_PRE_HOOK, 'posthook': IFUPDOWN_IPV6_MTU_POST_HOOK, } hookfn = { 'prehook': prehookfn, 'posthook': posthookfn, } for hook in ['prehook', 'posthook']: fn = hookfn[hook] cfg = paths.target_path(target, path=fn) LOG.info('Injecting fix for ipv6 mtu settings: %s', cfg) util.write_file(cfg, contents[hook], mode=0o755) def _disable_ipv6_privacy_extensions(target, path="etc/sysctl.d/10-ipv6-privacy.conf"): """Ubuntu server image sets a preference to use IPv6 privacy extensions by default; this races with the cloud-image desire to disable them. Resolve this by allowing the cloud-image setting to win. """ LOG.debug('Attempting to remove ipv6 privacy extensions') cfg = paths.target_path(target, path=path) if not os.path.exists(cfg): LOG.warn('Failed to find ipv6 privacy conf file %s', cfg) return bmsg = "Disabling IPv6 privacy extensions config may not apply." 
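# Hedged stdlib-only sketch (an assumption; upstream writes these hooks via
# util.write_file with mode 0o755) of installing an executable hook script
# under the target root, as _patch_ifupdown_ipv6_mtu_hook() does above:
import os

def _install_hook(target, relpath, content):
    dest = os.path.join(target, relpath.lstrip('/'))
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    with open(dest, 'w') as fp:
        fp.write(content)
    os.chmod(dest, 0o755)

# _install_hook('/target', 'etc/network/if-up.d/mtuipv6',
#               IFUPDOWN_IPV6_MTU_POST_HOOK)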
try: contents = util.load_file(cfg) known_contents = ["net.ipv6.conf.all.use_tempaddr = 2", "net.ipv6.conf.default.use_tempaddr = 2"] lines = [f.strip() for f in contents.splitlines() if not f.startswith("#")] if lines == known_contents: LOG.info('Removing ipv6 privacy extension config file: %s', cfg) util.del_file(cfg) msg = "removed %s with known contents" % cfg curtin_contents = '\n'.join( ["# IPv6 Privacy Extensions (RFC 4941)", "# Disabled by curtin", "# net.ipv6.conf.all.use_tempaddr = 2", "# net.ipv6.conf.default.use_tempaddr = 2"]) util.write_file(cfg, curtin_contents) else: LOG.debug('skipping removal of %s, expected content not found', cfg) LOG.debug("Found content in file %s:\n%s", cfg, lines) LOG.debug("Expected contents in file %s:\n%s", cfg, known_contents) msg = (bmsg + " '%s' exists with user configured content." % cfg) except Exception as e: msg = bmsg + " %s exists, but could not be read. %s" % (cfg, e) LOG.exception(msg) raise def _maybe_remove_legacy_eth0(target, path="etc/network/interfaces.d/eth0.cfg"): """Ubuntu cloud images previously included a 'eth0.cfg' that had hard coded content. That file would interfere with the rendered configuration if it was present. if the file does not exist do nothing. If the file exists: - with known content, remove it and warn - with unknown content, leave it and warn """ cfg = paths.target_path(target, path=path) if not os.path.exists(cfg): LOG.warn('Failed to find legacy network conf file %s', cfg) return bmsg = "Dynamic networking config may not apply." try: contents = util.load_file(cfg) known_contents = ["auto eth0", "iface eth0 inet dhcp"] lines = [f.strip() for f in contents.splitlines() if not f.startswith("#")] if lines == known_contents: util.del_file(cfg) msg = "removed %s with known contents" % cfg else: msg = (bmsg + " '%s' exists with user configured content." % cfg) except Exception: msg = bmsg + " %s exists, but could not be read." % cfg LOG.exception(msg) raise LOG.warn(msg) def apply_net_main(args): # curtin apply_net [--net-state=/config/netstate.yml] [--target=/] # [--net-config=/config/maas_net.yml] state = util.load_command_environment() log.basicConfig(stream=args.log_file, verbosity=1) if args.target is not None: state['target'] = args.target if args.net_state is not None: state['network_state'] = args.net_state if args.net_config is not None: state['network_config'] = args.net_config if state['target'] is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) if not state['network_config'] and not state['network_state']: sys.stderr.write("Must provide at least config or state\n") sys.exit(2) LOG.info('Applying network configuration') apply_net(target=state['target'], network_state=state['network_state'], network_config=state['network_config']) LOG.info('Applied network configuration successfully') sys.exit(0) CMD_ARGUMENTS = ( ((('-s', '--net-state'), {'help': ('file to read containing network state. ' 'defaults to env["OUTPUT_NETWORK_STATE"]'), 'metavar': 'NETSTATE', 'action': 'store', 'default': os.environ.get('OUTPUT_NETWORK_STATE')}), (('-t', '--target'), {'help': ('target filesystem root to configure networking to. ' 'default is env["TARGET_MOUNT_POINT"]'), 'metavar': 'TARGET', 'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT')}), (('-c', '--net-config'), {'help': ('file to read containing curtin network config.' 
'defaults to env["OUTPUT_NETWORK_CONFIG"]'), 'metavar': 'NETCONFIG', 'action': 'store', 'default': os.environ.get('OUTPUT_NETWORK_CONFIG')}))) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, apply_net_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/apt_config.py000066400000000000000000000601641415350476600205550ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ apt.py Handle the setup of apt related tasks like proxies, mirrors, repositories. """ import argparse import glob import os import re import sys from aptsources.sourceslist import SourceEntry from curtin.log import LOG from curtin import (config, distro, gpg, paths, util) from . import populate_one_subcmd # this will match 'XXX:YYY' (ie, 'cloud-archive:foo' or 'ppa:bar') ADD_APT_REPO_MATCH = r"^[\w-]+:\w" # place where apt stores cached repository data APT_LISTS = "/var/lib/apt/lists" # Files to store proxy information APT_CONFIG_FN = "/etc/apt/apt.conf.d/94curtin-config" APT_PROXY_FN = "/etc/apt/apt.conf.d/90curtin-aptproxy" # Default keyserver to use DEFAULT_KEYSERVER = "keyserver.ubuntu.com" # Default archive mirrors PRIMARY_ARCH_MIRRORS = {"PRIMARY": "http://archive.ubuntu.com/ubuntu/", "SECURITY": "http://security.ubuntu.com/ubuntu/"} PORTS_MIRRORS = {"PRIMARY": "http://ports.ubuntu.com/ubuntu-ports", "SECURITY": "http://ports.ubuntu.com/ubuntu-ports"} PRIMARY_ARCHES = ['amd64', 'i386'] PORTS_ARCHES = ['s390x', 'arm64', 'armhf', 'powerpc', 'ppc64el'] APT_SOURCES_PROPOSED = ( "deb $MIRROR $RELEASE-proposed main restricted universe multiverse") def get_default_mirrors(arch=None): """returns the default mirrors for the target. These depend on the architecture, for more see: https://wiki.ubuntu.com/UbuntuDevelopment/PackageArchive#Ports""" if arch is None: arch = distro.get_architecture() if arch in PRIMARY_ARCHES: return PRIMARY_ARCH_MIRRORS.copy() if arch in PORTS_ARCHES: return PORTS_MIRRORS.copy() raise ValueError("No default mirror known for arch %s" % arch) def handle_apt(cfg, target=None): """ handle_apt process the config for apt_config. This can be called from curthooks if a global apt config was provided or via the "apt" standalone command. 
""" release = distro.lsb_release(target=target)['codename'] arch = distro.get_architecture(target) mirrors = find_apt_mirror_info(cfg, arch) LOG.debug("Apt Mirror info: %s", mirrors) apply_debconf_selections(cfg, target) if not config.value_as_boolean(cfg.get('preserve_sources_list', True)): generate_sources_list(cfg, release, mirrors, target) apply_preserve_sources_list(target) rename_apt_lists(mirrors, target) try: apply_apt_proxy_config(cfg, target + APT_PROXY_FN, target + APT_CONFIG_FN) except (IOError, OSError): LOG.exception("Failed to apply proxy or apt config info:") # Process 'apt_source -> sources {dict}' if 'sources' in cfg: params = mirrors params['RELEASE'] = release params['MIRROR'] = mirrors["MIRROR"] matcher = None matchcfg = cfg.get('add_apt_repo_match', ADD_APT_REPO_MATCH) if matchcfg: matcher = re.compile(matchcfg).search add_apt_sources(cfg['sources'], target, template_params=params, aa_repo_match=matcher) def debconf_set_selections(selections, target=None): util.subp(['debconf-set-selections'], data=selections, target=target, capture=True) def dpkg_reconfigure(packages, target=None): # For any packages that are already installed, but have preseed data # we populate the debconf database, but the filesystem configuration # would be preferred on a subsequent dpkg-reconfigure. # so, what we have to do is "know" information about certain packages # to unconfigure them. unhandled = [] to_config = [] for pkg in packages: if pkg in CONFIG_CLEANERS: LOG.debug("unconfiguring %s", pkg) CONFIG_CLEANERS[pkg](target) to_config.append(pkg) else: unhandled.append(pkg) if len(unhandled): LOG.warn("The following packages were installed and preseeded, " "but cannot be unconfigured: %s", unhandled) if len(to_config): util.subp(['dpkg-reconfigure', '--frontend=noninteractive'] + list(to_config), data=None, target=target, capture=True) def apply_debconf_selections(cfg, target=None): """apply_debconf_selections - push content to debconf""" # debconf_selections: # set1: | # cloud-init cloud-init/datasources multiselect MAAS # set2: pkg pkg/value string bar selsets = cfg.get('debconf_selections') if not selsets: LOG.debug("debconf_selections was not set in config") return LOG.debug('Applying debconf selections') selections = '\n'.join( [selsets[key] for key in sorted(selsets.keys())]) debconf_set_selections(selections.encode() + b"\n", target=target) # get a complete list of packages listed in input pkgs_cfgd = set() for key, content in selsets.items(): for line in content.splitlines(): if line.startswith("#"): continue pkg = re.sub(r"[:\s].*", "", line) pkgs_cfgd.add(pkg) pkgs_installed = distro.get_installed_packages(target) need_reconfig = pkgs_cfgd.intersection(pkgs_installed) if len(need_reconfig) == 0: return dpkg_reconfigure(need_reconfig, target=target) def clean_cloud_init(target): """clean out any local cloud-init config""" flist = glob.glob( paths.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*")) LOG.debug("cleaning cloud-init config from: %s", flist) for dpkg_cfg in flist: os.unlink(dpkg_cfg) def mirrorurl_to_apt_fileprefix(mirror): """ mirrorurl_to_apt_fileprefix Convert a mirror url to the file prefix used by apt on disk to store cache information for that mirror. 
To do so do: - take off ???:// - drop tailing / - convert in string / to _ """ string = mirror if string.endswith("/"): string = string[0:-1] pos = string.find("://") if pos >= 0: string = string[pos + 3:] string = string.replace("/", "_") return string def rename_apt_lists(new_mirrors, target=None, arch=None): """rename_apt_lists - rename apt lists to preserve old cache data""" if arch is None: arch = distro.get_architecture(target) default_mirrors = get_default_mirrors(arch) pre = paths.target_path(target, APT_LISTS) for (name, omirror) in default_mirrors.items(): nmirror = new_mirrors.get(name) if not nmirror: continue oprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(omirror) nprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(nmirror) if oprefix == nprefix: continue olen = len(oprefix) for filename in glob.glob("%s_*" % oprefix): newname = "%s%s" % (nprefix, filename[olen:]) LOG.debug("Renaming apt list %s to %s", filename, newname) try: os.rename(filename, newname) except OSError: # since this is a best effort task, warn with but don't fail LOG.warn("Failed to rename apt list:", exc_info=True) def update_default_mirrors(entries, mirrors, target, arch=None): """replace existing default repos with the configured mirror""" if arch is None: arch = distro.get_architecture(target) defaults = get_default_mirrors(arch) mirrors_replacement = { defaults['PRIMARY']: mirrors["MIRROR"], defaults['SECURITY']: mirrors["SECURITY"], } # allow original file URIs without the trailing slash to match mirror # specifications that have it noslash = {} for key in mirrors_replacement.keys(): if key[-1] == '/': noslash[key[:-1]] = mirrors_replacement[key] mirrors_replacement.update(noslash) for entry in entries: entry.uri = mirrors_replacement.get(entry.uri, entry.uri) return entries def update_mirrors(entries, mirrors): """perform template replacement of mirror placeholders with configured values""" for entry in entries: entry.uri = util.render_string(entry.uri, mirrors) return entries def map_known_suites(suite, release): """there are a few default names which will be auto-extended. 
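# Hedged illustration (example URL only, helper name hypothetical) of the
# transformation described above for mirrorurl_to_apt_fileprefix(), which
# rename_apt_lists() uses to locate cached apt list files:
def _mirror_prefix_demo(mirror):
    s = mirror[:-1] if mirror.endswith('/') else mirror
    pos = s.find('://')
    if pos >= 0:
        s = s[pos + 3:]
    return s.replace('/', '_')

# _mirror_prefix_demo('http://archive.ubuntu.com/ubuntu/')
#   == 'archive.ubuntu.com_ubuntu'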
This comes at the inability to use those names literally as suites, but on the other hand increases readability of the cfg quite a lot""" mapping = {'updates': '$RELEASE-updates', 'backports': '$RELEASE-backports', 'security': '$RELEASE-security', 'proposed': '$RELEASE-proposed', 'release': '$RELEASE'} try: template_suite = mapping[suite] except KeyError: template_suite = suite return util.render_string(template_suite, {'RELEASE': release}) def commentify(entry): # handle commenting ourselves - it handles lines with # options better return SourceEntry('# ' + str(entry)) def disable_suites(disabled, entries, release): """reads the config for suites to be disabled and removes those from the template""" if not disabled: return entries suites_to_disable = [] for suite in disabled: release_suite = map_known_suites(suite, release) LOG.debug("Disabling suite %s as %s", suite, release_suite) suites_to_disable.append(release_suite) output = [] for entry in entries: if not entry.disabled and entry.dist in suites_to_disable: entry = commentify(entry) output.append(entry) return output def disable_components(disabled, entries): """reads the config for components to be disabled and remove those from the entries""" if not disabled: return entries # purposefully skip disabling the main component comps_to_disable = {comp for comp in disabled if comp != 'main'} output = [] for entry in entries: if not entry.disabled and comps_to_disable.intersection(entry.comps): output.append(commentify(entry)) entry.comps = [comp for comp in entry.comps if comp not in comps_to_disable] if entry.comps: output.append(entry) else: output.append(entry) return output def update_dist(entries, release): for entry in entries: entry.dist = util.render_string(entry.dist, {'RELEASE': release}) return entries def entries_to_str(entries): return ''.join([str(entry) + '\n' for entry in entries]) def generate_sources_list(cfg, release, mirrors, target=None, arch=None): """ generate_sources_list create a source.list file based on a custom or default template by replacing mirrors and release in the template """ aptsrc = "/etc/apt/sources.list" tmpl = cfg.get('sources_list', None) from_file = False if tmpl is None: LOG.info("No custom template provided, fall back to modify" "mirrors in %s on the target system", aptsrc) tmpl = util.load_file(paths.target_path(target, aptsrc)) from_file = True entries = [SourceEntry(line) for line in tmpl.splitlines(True)] if from_file: # when loading from an existing file, we also replace default # URIs with configured mirrors entries = update_default_mirrors(entries, mirrors, target, arch) entries = update_mirrors(entries, mirrors) entries = update_dist(entries, release) entries = disable_suites(cfg.get('disable_suites'), entries, release) entries = disable_components(cfg.get('disable_components'), entries) output = entries_to_str(entries) orig = paths.target_path(target, aptsrc) if os.path.exists(orig): os.rename(orig, orig + ".curtin.old") util.write_file(paths.target_path(target, aptsrc), output, mode=0o644) def apply_preserve_sources_list(target): # protect the just generated sources.list from cloud-init cloudfile = "/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg" target_ver = distro.get_package_version('cloud-init', target=target) if not target_ver: LOG.info("Attempt to read cloud-init version from target returned " "'%s', not writing preserve_sources_list config.", target_ver) return cfg = {'apt': {'preserve_sources_list': True}} if target_ver['major'] < 1: # anything cloud-init 0.X.X will get 
the old config key. cfg = {'apt_preserve_sources_list': True} try: util.write_file(paths.target_path(target, cloudfile), config.dump_config(cfg), mode=0o644) LOG.debug("Set preserve_sources_list to True in %s with: %s", cloudfile, cfg) except IOError: LOG.exception( "Failed to protect /etc/apt/sources.list from cloud-init in '%s'", cloudfile) raise def add_apt_key_raw(key, target=None): """ actual adding of a key as defined in key argument to the system """ LOG.debug("Adding key:\n'%s'", key) try: util.subp(['apt-key', 'add', '-'], data=key.encode(), target=target) except util.ProcessExecutionError: LOG.exception("failed to add apt GPG Key to apt keyring") raise def add_apt_key(ent, target=None): """ Add key to the system as defined in ent (if any). Supports raw keys or keyid's The latter will as a first step fetched to get the raw key """ if 'keyid' in ent and 'key' not in ent: keyserver = DEFAULT_KEYSERVER if 'keyserver' in ent: keyserver = ent['keyserver'] ent['key'] = gpg.getkeybyid(ent['keyid'], keyserver, retries=(1, 2, 5, 10)) if 'key' in ent: add_apt_key_raw(ent['key'], target) def add_apt_sources(srcdict, target=None, template_params=None, aa_repo_match=None): """ add entries in /etc/apt/sources.list.d for each abbreviated sources.list entry in 'srcdict'. When rendering template, also include the values in dictionary searchList """ if template_params is None: template_params = {} if aa_repo_match is None: raise ValueError('did not get a valid repo matcher') if not isinstance(srcdict, dict): raise TypeError('unknown apt format: %s' % (srcdict)) for filename in srcdict: ent = srcdict[filename] if 'filename' not in ent: ent['filename'] = filename add_apt_key(ent, target) if 'source' not in ent: continue source = ent['source'] if source == 'proposed': source = APT_SOURCES_PROPOSED source = util.render_string(source, template_params) if not ent['filename'].startswith("/"): ent['filename'] = os.path.join("/etc/apt/sources.list.d/", ent['filename']) if not ent['filename'].endswith(".list"): ent['filename'] += ".list" if aa_repo_match(source): with util.ChrootableTarget( target, sys_resolvconf=True) as in_chroot: try: in_chroot.subp(["add-apt-repository", source], retries=(1, 2, 5, 10)) except util.ProcessExecutionError: LOG.exception("add-apt-repository failed.") raise continue sourcefn = paths.target_path(target, ent['filename']) try: contents = "%s\n" % (source) util.write_file(sourcefn, contents, omode="a") except IOError as detail: LOG.exception("failed write to file %s: %s", sourcefn, detail) raise distro.apt_update(target=target, force=True, comment="apt-source changed config") return def search_for_mirror(candidates): """ Search through a list of mirror urls for one that works This needs to return quickly. """ if candidates is None: return None LOG.debug("search for mirror in candidates: '%s'", candidates) for cand in candidates: try: if util.is_resolvable_url(cand): LOG.debug("found working mirror: '%s'", cand) return cand except Exception: pass return None def update_mirror_info(pmirror, smirror, arch): """sets security mirror to primary if not defined. 
returns defaults if no mirrors are defined""" if pmirror is not None: if smirror is None: smirror = pmirror return {'PRIMARY': pmirror, 'SECURITY': smirror} return get_default_mirrors(arch) def get_arch_mirrorconfig(cfg, mirrortype, arch): """out of a list of potential mirror configurations select and return the one matching the architecture (or default)""" # select the mirror specification (if-any) mirror_cfg_list = cfg.get(mirrortype, None) if mirror_cfg_list is None: return None # select the specification matching the target arch default = None for mirror_cfg_elem in mirror_cfg_list: arches = mirror_cfg_elem.get("arches") if arch in arches: return mirror_cfg_elem if "default" in arches: default = mirror_cfg_elem return default def get_mirror(cfg, mirrortype, arch): """pass the three potential stages of mirror specification returns None is neither of them found anything otherwise the first hit is returned""" mcfg = get_arch_mirrorconfig(cfg, mirrortype, arch) if mcfg is None: return None # directly specified mirror = mcfg.get("uri", None) # fallback to search if specified if mirror is None: # list of mirrors to try to resolve mirror = search_for_mirror(mcfg.get("search", None)) return mirror def find_apt_mirror_info(cfg, arch=None): """find_apt_mirror_info find an apt_mirror given the cfg provided. It can check for separate config of primary and security mirrors If only primary is given security is assumed to be equal to primary If the generic apt_mirror is given that is defining for both """ if arch is None: arch = distro.get_architecture() LOG.debug("got arch for mirror selection: %s", arch) pmirror = get_mirror(cfg, "primary", arch) LOG.debug("got primary mirror: %s", pmirror) smirror = get_mirror(cfg, "security", arch) LOG.debug("got security mirror: %s", smirror) # Note: curtin has no cloud-datasource fallback mirror_info = update_mirror_info(pmirror, smirror, arch) # less complex replacements use only MIRROR, derive from primary mirror_info["MIRROR"] = mirror_info["PRIMARY"] return mirror_info def apply_apt_proxy_config(cfg, proxy_fname, config_fname): """apply_apt_proxy_config Applies any apt*proxy config from if specified """ # Set up any apt proxy cfgs = (('proxy', 'Acquire::http::Proxy "%s";'), ('http_proxy', 'Acquire::http::Proxy "%s";'), ('ftp_proxy', 'Acquire::ftp::Proxy "%s";'), ('https_proxy', 'Acquire::https::Proxy "%s";')) proxies = [fmt % cfg.get(name) for (name, fmt) in cfgs if cfg.get(name)] if len(proxies): LOG.debug("write apt proxy info to %s", proxy_fname) util.write_file(proxy_fname, '\n'.join(proxies) + '\n') elif os.path.isfile(proxy_fname): util.del_file(proxy_fname) LOG.debug("no apt proxy configured, removed %s", proxy_fname) if cfg.get('conf', None): LOG.debug("write apt config info to %s", config_fname) util.write_file(config_fname, cfg.get('conf')) elif os.path.isfile(config_fname): util.del_file(config_fname) LOG.debug("no apt config configured, removed %s", config_fname) def apt_command(args): """ Main entry point for curtin apt-config standalone command This does not read the global config as handled by curthooks, but instead one can specify a different "target" and a new cfg via --config """ cfg = config.load_command_config(args, {}) if args.target is not None: target = args.target else: state = util.load_command_environment() target = state['target'] if target is None: sys.stderr.write("Unable to find target. 
" "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) apt_cfg = cfg.get("apt") # if no apt config section is available, do nothing if apt_cfg is not None: LOG.debug("Handling apt to target %s with config %s", target, apt_cfg) try: with util.ChrootableTarget(target, sys_resolvconf=True): handle_apt(apt_cfg, target) except (RuntimeError, TypeError, ValueError, IOError): LOG.exception("Failed to configure apt features '%s'", apt_cfg) sys.exit(1) else: LOG.info("No apt config provided, skipping") sys.exit(0) def translate_old_apt_features(cfg): """translate the few old apt related features into the new config format""" predef_apt_cfg = cfg.get("apt") if predef_apt_cfg is None: cfg['apt'] = {} predef_apt_cfg = cfg.get("apt") if cfg.get('apt_proxy') is not None: if predef_apt_cfg.get('proxy') is not None: msg = ("Error in apt_proxy configuration: " "old and new format of apt features " "are mutually exclusive") LOG.error(msg) raise ValueError(msg) cfg['apt']['proxy'] = cfg.get('apt_proxy') LOG.debug("Transferred %s into new format: %s", cfg.get('apt_proxy'), cfg.get('apte')) del cfg['apt_proxy'] if cfg.get('apt_mirrors') is not None: if predef_apt_cfg.get('mirrors') is not None: msg = ("Error in apt_mirror configuration: " "old and new format of apt features " "are mutually exclusive") LOG.error(msg) raise ValueError(msg) old = cfg.get('apt_mirrors') cfg['apt']['primary'] = [{"arches": ["default"], "uri": old.get('ubuntu_archive')}] cfg['apt']['security'] = [{"arches": ["default"], "uri": old.get('ubuntu_security')}] LOG.debug("Transferred %s into new format: %s", cfg.get('apt_mirror'), cfg.get('apt')) del cfg['apt_mirrors'] # to work this also needs to disable the default protection psl = predef_apt_cfg.get('preserve_sources_list') if psl is not None: if config.value_as_boolean(psl) is True: msg = ("Error in apt_mirror configuration: " "apt_mirrors and preserve_sources_list: True " "are mutually exclusive") LOG.error(msg) raise ValueError(msg) cfg['apt']['preserve_sources_list'] = False if cfg.get('debconf_selections') is not None: if predef_apt_cfg.get('debconf_selections') is not None: msg = ("Error in debconf_selections configuration: " "old and new format of apt features " "are mutually exclusive") LOG.error(msg) raise ValueError(msg) selsets = cfg.get('debconf_selections') cfg['apt']['debconf_selections'] = selsets LOG.info("Transferred %s into new format: %s", cfg.get('debconf_selections'), cfg.get('apt')) del cfg['debconf_selections'] return cfg CMD_ARGUMENTS = ( ((('-c', '--config'), {'help': 'read configuration from cfg', 'action': util.MergedCmdAppend, 'metavar': 'FILE', 'type': argparse.FileType("rb"), 'dest': 'cfgopts', 'default': []}), (('-t', '--target'), {'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}),) ) def POPULATE_SUBCMD(parser): """Populate subcommand option parsing for apt-config""" populate_one_subcmd(parser, CMD_ARGUMENTS, apt_command) CONFIG_CLEANERS = { 'cloud-init': clean_cloud_init, } # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/block_attach_iscsi.py000066400000000000000000000011701415350476600222440ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from . 
import populate_one_subcmd from curtin.block import iscsi def block_attach_iscsi_main(args): iscsi.ensure_disk_connected(args.disk, args.save_config) return 0 CMD_ARGUMENTS = ( ('disk', {'help': 'RFC4173 specification of iSCSI disk to attach'}), ('--save-config', {'help': 'save access configuration to local filesystem', 'default': False, 'action': 'store_true'}), ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, block_attach_iscsi_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/block_detach_iscsi.py000066400000000000000000000007521415350476600222350ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from . import populate_one_subcmd from curtin.block import iscsi def block_detach_iscsi_main(args): i = iscsi.IscsiDisk(args.disk) i.disconnect() return 0 CMD_ARGUMENTS = ( ('disk', {'help': 'RFC4173 specification of iSCSI disk to attach'}), ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, block_detach_iscsi_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/block_discover.py000066400000000000000000000013731415350476600214310ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import json from . import populate_one_subcmd from curtin import block def block_discover_main(args): """probe for existing devices and emit Curtin storage config output.""" if args.probe_data: probe_data = block._discover_get_probert_data() else: probe_data = block.discover() print(json.dumps(probe_data, indent=2, sort_keys=True)) CMD_ARGUMENTS = ( (('-p', '--probe-data'), {'help': 'dump probert probe-data to stdout emitting storage config.', 'action': 'store_true', 'default': False}), ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, block_discover_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/block_info.py000066400000000000000000000041651415350476600205500ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os from . 
import populate_one_subcmd from curtin import (block, util) def block_info_main(args): """get information about block devices, similar to lsblk""" if not args.devices: raise ValueError('devices to scan must be specified') if not all(block.is_block_device(d) for d in args.devices): raise ValueError('invalid device(s)') def add_size_to_holders_tree(tree): """add size information to generated holders trees""" size_file = os.path.join(tree['device'], 'size') # size file is always represented in 512 byte sectors even if # underlying disk uses a larger logical_block_size size = ((512 * int(util.load_file(size_file))) if os.path.exists(size_file) else None) tree['size'] = util.bytes2human(size) if args.human else str(size) for holder in tree['holders']: add_size_to_holders_tree(holder) return tree def format_name(tree): """format information for human readable display""" res = { 'name': ' - '.join((tree['name'], tree['dev_type'], tree['size'])), 'holders': [] } for holder in tree['holders']: res['holders'].append(format_name(holder)) return res trees = [add_size_to_holders_tree(t) for t in [block.clear_holders.gen_holders_tree(d) for d in args.devices]] print(util.json_dumps(trees) if args.json else '\n'.join(block.clear_holders.format_holders_tree(t) for t in [format_name(tree) for tree in trees])) return 0 CMD_ARGUMENTS = ( ('devices', {'help': 'devices to get info for', 'default': [], 'nargs': '+'}), ('--human', {'help': 'output size in human readable format', 'default': False, 'action': 'store_true'}), (('-j', '--json'), {'help': 'output data in json format', 'default': False, 'action': 'store_true'}), ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, block_info_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/block_meta.py000066400000000000000000002472231415350476600205470ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from collections import OrderedDict, namedtuple from curtin import (block, config, paths, util) from curtin.block import schemas from curtin.block import (bcache, clear_holders, dasd, iscsi, lvm, mdadm, mkfs, multipath, zfs) from curtin import distro from curtin.log import LOG, logged_time from curtin.reporter import events from curtin.storage_config import (extract_storage_ordered_dict, ptable_uuid_to_flag_entry) from . 
import populate_one_subcmd from curtin.udev import (compose_udev_equality, udevadm_settle, udevadm_trigger, udevadm_info) import glob import os import platform import string import sys import tempfile import time FstabData = namedtuple( "FstabData", ('spec', 'path', 'fstype', 'options', 'freq', 'passno', 'device')) FstabData.__new__.__defaults__ = (None, None, None, "", "0", "-1", None) SIMPLE = 'simple' SIMPLE_BOOT = 'simple-boot' CUSTOM = 'custom' PTABLE_UNSUPPORTED = schemas._ptable_unsupported PTABLES_SUPPORTED = schemas._ptables PTABLES_VALID = schemas._ptables_valid SGDISK_FLAGS = { "boot": 'ef00', "lvm": '8e00', "raid": 'fd00', "bios_grub": 'ef02', "prep": '4100', "swap": '8200', "home": '8302', "linux": '8300' } MSDOS_FLAGS = { 'boot': 'boot', 'extended': 'extended', 'logical': 'logical', } DNAME_BYID_KEYS = ['DM_UUID', 'ID_WWN_WITH_EXTENSION', 'ID_WWN', 'ID_SERIAL', 'ID_SERIAL_SHORT'] CMD_ARGUMENTS = ( ((('-D', '--devices'), {'help': 'which devices to operate on', 'action': 'append', 'metavar': 'DEVICE', 'default': None, }), ('--fstype', {'help': 'root partition filesystem type', 'choices': ['ext4', 'ext3'], 'default': 'ext4'}), ('--force-mode', {'help': 'force mode, disable mode detection', 'action': 'store_true', 'default': False}), (('-t', '--target'), {'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ('--boot-fstype', {'help': 'boot partition filesystem type', 'choices': ['ext4', 'ext3'], 'default': None}), ('--umount', {'help': 'unmount any mounted filesystems before exit', 'action': 'store_true', 'default': False}), ('--testmode', {'help': 'enable some test actions', 'action': 'store_true', 'default': False}), ('mode', {'help': 'meta-mode to use', 'choices': [CUSTOM, SIMPLE, SIMPLE_BOOT]}), ) ) @logged_time("BLOCK_META") def block_meta(args): # main entry point for the block-meta command. 
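    # Overview of the dispatch below: load the command environment and config,
    # determine which devices to clear (extracted from the storage config when
    # present, else --devices or the block-meta config) and clear holders on
    # them, then pick a handler: dd-image sources use meta_simple and a
    # 'storage' config uses meta_custom (both checks skipped when --force-mode
    # is set); otherwise args.mode decides.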
if args.testmode: state = {} else: state = util.load_command_environment(strict=True) cfg = config.load_command_config(args, state) dd_images = util.get_dd_images(cfg.get('sources', {})) # run clear holders on potential devices devices = args.devices if devices is None: devices = [] if 'storage' in cfg: devices = get_device_paths_from_storage_config( extract_storage_ordered_dict(cfg)) LOG.debug('block-meta: extracted devices to clear: %s', devices) if len(devices) == 0: devices = cfg.get('block-meta', {}).get('devices', []) LOG.debug('Declared block devices: %s', devices) args.devices = devices LOG.debug('clearing devices=%s', devices) if devices: meta_clear(devices, state.get('report_stack_prefix', '')) # dd-images requires use of meta_simple if len(dd_images) > 0 and args.force_mode is False: LOG.info('blockmeta: detected dd-images, using mode=simple') return meta_simple(args) if cfg.get("storage") and args.force_mode is False: LOG.info('blockmeta: detected storage config, using mode=custom') return meta_custom(args) LOG.info('blockmeta: mode=%s force=%s', args.mode, args.force_mode) if args.mode == CUSTOM: return meta_custom(args) elif args.mode in (SIMPLE, SIMPLE_BOOT): return meta_simple(args) else: raise NotImplementedError("mode=%s is not implemented" % args.mode) def logtime(msg, func, *args, **kwargs): with util.LogTimer(LOG.debug, msg): return func(*args, **kwargs) def write_image_to_disk(source, dev): """ Write disk image to block device """ LOG.info('writing image to disk %s, %s', source, dev) extractor = { 'dd-tgz': '|tar -xOzf -', 'dd-txz': '|tar -xOJf -', 'dd-tbz': '|tar -xOjf -', 'dd-tar': '|smtar -xOf -', 'dd-bz2': '|bzcat', 'dd-gz': '|zcat', 'dd-xz': '|xzcat', 'dd-raw': '' } (devname, devnode) = block.get_dev_name_entry(dev) util.subp(args=['sh', '-c', ('wget "$1" --progress=dot:mega -O - ' + extractor[source['type']] + '| dd bs=4M of="$2"'), '--', source['uri'], devnode]) util.subp(['partprobe', devnode]) udevadm_trigger([devnode]) try: lvm.activate_volgroups() except util.ProcessExecutionError: # partial vg may not come up due to missing members, that's OK pass udevadm_settle() # Images from MAAS have well-known/required paths present # on the rootfs partition. Use these values to select the # root (target) partition to complete installation. 
# # /curtin -> Most Ubuntu Images # /system-data/var/lib/snapd -> UbuntuCore 16 or 18 # /snaps -> UbuntuCore20 paths = ["curtin", "system-data/var/lib/snapd", "snaps"] return block.get_root_device([devname], paths=paths) def get_bootpt_cfg(cfg, enabled=False, fstype=None, root_fstype=None): # 'cfg' looks like: # enabled: boolean # fstype: filesystem type (default to 'fstype') # label: filesystem label (default to 'boot') # parm enable can enable, but not disable # parm fstype overrides cfg['fstype'] def_boot = (platform.machine() in ('aarch64') and not util.is_uefi_bootable()) ret = {'enabled': def_boot, 'fstype': None, 'label': 'boot'} ret.update(cfg) if enabled: ret['enabled'] = True if ret['enabled'] and not ret['fstype']: if root_fstype: ret['fstype'] = root_fstype if fstype: ret['fstype'] = fstype return ret def get_partition_format_type(cfg, machine=None, uefi_bootable=None): if machine is None: machine = platform.machine() if uefi_bootable is None: uefi_bootable = util.is_uefi_bootable() cfgval = cfg.get('format', None) if cfgval: return cfgval if uefi_bootable: return 'uefi' if machine in ['aarch64']: return 'gpt' elif machine.startswith('ppc64'): return 'prep' return "mbr" def devsync(devpath): util.subp(['partprobe', devpath], rcs=[0, 1]) udevadm_settle() for x in range(0, 10): if os.path.exists(devpath): LOG.debug('devsync happy - path %s now exists', devpath) return else: LOG.debug('Waiting on device path: %s', devpath) time.sleep(1) raise OSError('Failed to find device at path: %s', devpath) def determine_partition_number(partition_id, storage_config): vol = storage_config.get(partition_id) partnumber = vol.get('number') if vol.get('flag') == "logical": if not partnumber: LOG.warn('partition \'number\' key not set in config:\n%s', util.json_dumps(vol)) partnumber = 5 for key, item in storage_config.items(): if item.get('type') == "partition" and \ item.get('device') == vol.get('device') and\ item.get('flag') == "logical": if item.get('id') == vol.get('id'): break else: partnumber += 1 else: if not partnumber: LOG.warn('partition \'number\' key not set in config:\n%s', util.json_dumps(vol)) partnumber = 1 for key, item in storage_config.items(): if item.get('type') == "partition" and \ item.get('device') == vol.get('device'): if item.get('id') == vol.get('id'): break else: partnumber += 1 return partnumber def sanitize_dname(dname): """ dnames should be sanitized before writing rule files, in case maas has emitted a dname with a special character only letters, numbers and '-' and '_' are permitted, as this will be used for a device path. spaces are also not permitted """ valid = string.digits + string.ascii_letters + '-_' return ''.join(c if c in valid else '-' for c in dname) def make_dname_byid(path, error_msg=None, info=None): """ Returns a list of udev equalities for a given disk path :param path: string of a kernel device path to a block device :param error_msg: more information about path for log/errors :param info: dict of udevadm info key, value pairs of device specified by path. :returns: list of udev equalities (lists) :raises: ValueError if path is not a disk. :raises: RuntimeError if there is no serial or wwn. 
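    Example (illustrative values, not from the original source): a disk whose
    udev info exposes only ID_SERIAL=0x500a075118e4bf35 would yield
    [['ENV{ID_SERIAL}=="0x500a075118e4bf35"']].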
""" error_msg = str(path) + ("" if not error_msg else " [%s]" % error_msg) if info is None: info = udevadm_info(path=path) devtype = info.get('DEVTYPE') if devtype != "disk": raise ValueError( "Disk tag udev rules are only for disks, %s has devtype=%s" % (error_msg, devtype)) present = [k for k in DNAME_BYID_KEYS if info.get(k)] if not present: LOG.warning( "Cannot create disk tag udev rule for %s, " "missing 'serial' or 'wwn' value", error_msg) return [] return [[compose_udev_equality('ENV{%s}' % k, info[k]) for k in present]] def make_dname(volume, storage_config): state = util.load_command_environment(strict=True) rules_dir = os.path.join(state['scratch'], "rules.d") vol = storage_config.get(volume) path = get_path_to_storage_volume(volume, storage_config) ptuuid = None byid = None dname = vol.get('name') if vol.get('type') in ["partition", "disk"]: (out, _err) = util.subp(["blkid", "-o", "export", path], capture=True, rcs=[0, 2], retries=[1, 1, 1]) for line in out.splitlines(): if "PTUUID" in line or "PARTUUID" in line: ptuuid = line.split('=')[-1] break if vol.get('type') == 'disk': byid = make_dname_byid(path, error_msg="id=%s" % vol.get('id')) # we may not always be able to find a uniq identifier on devices with names if (not ptuuid and not byid) and vol.get('type') in ["disk", "partition"]: LOG.warning("Can't find a uuid for volume: %s. Skipping dname.", volume) return matches = [] base_rule = [ compose_udev_equality("SUBSYSTEM", "block"), compose_udev_equality("ACTION", "add|change"), ] if vol.get('type') == "disk": if ptuuid: matches += [[compose_udev_equality('ENV{DEVTYPE}', "disk"), compose_udev_equality('ENV{ID_PART_TABLE_UUID}', ptuuid)]] for rule in byid: matches += [ [compose_udev_equality('ENV{DEVTYPE}', "disk")] + rule] elif vol.get('type') == "partition": # if partition has its own name, bind that to the existing PTUUID if dname: matches += [[compose_udev_equality('ENV{DEVTYPE}', "partition"), compose_udev_equality('ENV{ID_PART_ENTRY_UUID}', ptuuid)]] else: # disks generate dname-part%n rules automatically LOG.debug('No partition-specific dname') return elif vol.get('type') == "raid": md_data = mdadm.mdadm_query_detail(path) md_uuid = md_data.get('MD_UUID') matches += [[compose_udev_equality("ENV{MD_UUID}", md_uuid)]] elif vol.get('type') == "bcache": # bind dname to bcache backing device's dev.uuid as the bcache minor # device numbers are not stable across reboots. backing_dev = get_path_to_storage_volume(vol.get('backing_device'), storage_config) bcache_super = bcache.superblock_asdict(device=backing_dev) if bcache_super and bcache_super['sb.version'].startswith('1'): bdev_uuid = bcache_super['dev.uuid'] matches += [[compose_udev_equality("ENV{CACHED_UUID}", bdev_uuid)]] bcache.write_label(sanitize_dname(dname), backing_dev) elif vol.get('type') == "lvm_partition": info = udevadm_info(path=path) dname = info['DM_NAME'] matches += [[compose_udev_equality("ENV{DM_NAME}", dname)]] else: raise ValueError('cannot make dname for device with type: {}' .format(vol.get('type'))) # note: this sanitization is done here instead of for all name attributes # at the beginning of storage configuration, as some devices, such as # lvm devices may use the name attribute and may permit special chars sanitized = sanitize_dname(dname) if sanitized != dname: LOG.warning("dname modified to remove invalid chars. 
old:" "'%s' new: '%s'", dname, sanitized) content = ['# Written by curtin'] for match in matches: rule = (base_rule + match + ["SYMLINK+=\"disk/by-dname/%s\"\n" % sanitized]) LOG.debug("Creating dname udev rule '%s'", str(rule)) content.append(', '.join(rule)) if vol.get('type') == 'disk': for brule in byid: part_rule = None for env_rule in brule: # multipath partitions prefix partN- to DM_UUID for fun! # and partitions are "disks" yay \o/ /sarcasm if 'ENV{DM_UUID}=="mpath' not in env_rule: continue dm_uuid = env_rule.split("==")[1].replace('"', '') part_dm_uuid = 'part*-' + dm_uuid part_rule = ( [compose_udev_equality('ENV{DEVTYPE}', 'disk')] + [compose_udev_equality('ENV{DM_UUID}', part_dm_uuid)]) # non-multipath partition rule if not part_rule: part_rule = ( [compose_udev_equality('ENV{DEVTYPE}', 'partition')] + brule) rule = (base_rule + part_rule + ['SYMLINK+="disk/by-dname/%s-part%%n"\n' % sanitized]) LOG.debug("Creating dname udev rule '%s'", str(rule)) content.append(', '.join(rule)) util.ensure_dir(rules_dir) rule_file = os.path.join(rules_dir, '{}.rules'.format(sanitized)) util.write_file(rule_file, '\n'.join(content)) def get_poolname(info, storage_config): """ Resolve pool name from zfs info """ LOG.debug('get_poolname for volume %s', info) if info.get('type') == 'zfs': pool_id = info.get('pool') poolname = get_poolname(storage_config.get(pool_id), storage_config) elif info.get('type') == 'zpool': poolname = info.get('pool') else: msg = 'volume is not type zfs or zpool: %s' % info LOG.error(msg) raise ValueError(msg) return poolname def get_path_to_storage_volume(volume, storage_config): # Get path to block device for volume. Volume param should refer to id of # volume in storage config devsync_vol = None vol = storage_config.get(volume) LOG.debug('get_path_to_storage_volume for volume %s(%s)', volume, vol) if not vol: raise ValueError("volume with id '%s' not found" % volume) # Find path to block device if vol.get('type') == "partition": partnumber = determine_partition_number(vol.get('id'), storage_config) disk_block_path = get_path_to_storage_volume(vol.get('device'), storage_config) if disk_block_path.startswith('/dev/mapper/mpath'): volume_path = disk_block_path + '-part%s' % partnumber else: disk_kname = block.path_to_kname(disk_block_path) partition_kname = block.partition_kname(disk_kname, partnumber) volume_path = block.kname_to_path(partition_kname) devsync_vol = os.path.join(disk_block_path) elif vol.get('type') == "disk": # Get path to block device for disk. Device_id param should refer # to id of device in storage config volume_path = None for disk_key in ['wwn', 'serial', 'device_id', 'path']: vol_value = vol.get(disk_key) try: if not vol_value: continue if disk_key in ['wwn', 'serial']: volume_path = block.lookup_disk(vol_value) elif disk_key == 'path': if vol_value.startswith('iscsi:'): i = iscsi.ensure_disk_connected(vol_value) volume_path = os.path.realpath(i.devdisk_path) else: # resolve any symlinks to the dev_kname so # sys/class/block access is valid. 
ie, there are no # udev generated values in sysfs volume_path = os.path.realpath(vol_value) # convert /dev/sdX to /dev/mapper/mpathX value if multipath.is_mpath_member(volume_path): volume_path = '/dev/mapper/' + ( multipath.get_mpath_id_from_device(volume_path)) elif disk_key == 'device_id': dasd_device = dasd.DasdDevice(vol_value) volume_path = dasd_device.devname except ValueError: continue # verify path exists otherwise try the next key if os.path.exists(volume_path): break else: volume_path = None if volume_path is None: raise ValueError("Failed to find storage volume id='%s' config: %s" % (vol['id'], vol)) elif vol.get('type') == "lvm_partition": # For lvm partitions, a directory in /dev/ should be present with the # name of the volgroup the partition belongs to. We can simply append # the id of the lvm partition to the path of that directory volgroup = storage_config.get(vol.get('volgroup')) if not volgroup: raise ValueError("lvm volume group '%s' could not be found" % vol.get('volgroup')) volume_path = os.path.join("/dev/", volgroup.get('name'), vol.get('name')) elif vol.get('type') == "dm_crypt": # For dm_crypted partitions, unencrypted block device is at # /dev/mapper/ dm_name = vol.get('dm_name') if not dm_name: dm_name = vol.get('id') volume_path = os.path.join("/dev", "mapper", dm_name) elif vol.get('type') == "raid": # For raid partitions, block device is at /dev/mdX name = vol.get('name') volume_path = block.md_path(name) elif vol.get('type') == "bcache": # For bcache setups, the only reliable way to determine the name of the # block device is to look in all /sys/block/bcacheX/ dirs and see what # block devs are in the slaves dir there. Then, those blockdevs can be # checked against the kname of the devs in the config for the desired # bcache device. 
This is not very elegant though backing_device_path = get_path_to_storage_volume( vol.get('backing_device'), storage_config) backing_device_kname = block.path_to_kname(backing_device_path) sys_path = list(filter(lambda x: backing_device_kname in x, glob.glob("/sys/block/bcache*/slaves/*")))[0] while "bcache" not in os.path.split(sys_path)[-1]: sys_path = os.path.split(sys_path)[0] bcache_kname = block.path_to_kname(sys_path) volume_path = block.kname_to_path(bcache_kname) LOG.debug('got bcache volume path %s', volume_path) elif vol.get('type') == 'image': volume_path = vol['dev'] else: raise NotImplementedError("cannot determine the path to storage \ volume '%s' with type '%s'" % (volume, vol.get('type'))) # sync devices if not devsync_vol: devsync_vol = volume_path devsync(devsync_vol) LOG.debug('return volume path %s', volume_path) return volume_path DEVS = set() def image_handler(info, storage_config, handlers): path = info['path'] if os.path.exists(path): os.unlink(path) try: with open(path, 'wb') as fp: fp.truncate(int(util.human2bytes(info['size']))) dev = util.subp([ 'losetup', '--show', '--find', path], capture=True)[0].strip() except BaseException: if os.path.exists(path): os.unlink(path) raise info['dev'] = dev DEVS.add(dev) handlers['disk'](info, storage_config, handlers) def dasd_handler(info, storage_config, handlers): """ Prepare the specified dasd device per configuration params: info: dictionary of configuration, required keys are: type, id, device_id params: storage_config: ordered dictionary of entire storage config example: { 'type': 'dasd', 'id': 'dasd_142f', 'device_id': '0.0.142f', 'blocksize': 4096, 'label': 'cloudimg-rootfs', 'mode': 'quick', 'disk_layout': 'cdl', } """ device_id = info.get('device_id') blocksize = info.get('blocksize') disk_layout = info.get('disk_layout') label = info.get('label') mode = info.get('mode') force_format = config.value_as_boolean(info.get('wipe')) dasd_device = dasd.DasdDevice(device_id) if (force_format or dasd_device.needs_formatting(blocksize, disk_layout, label)): if config.value_as_boolean(info.get('preserve')): raise ValueError( "dasd '%s' does not match configured properties and" "preserve is set to true. The dasd needs formatting" "with the specified parameters to continue." % info.get('id')) LOG.debug('Formatting dasd id=%s device_id=%s devname=%s', info.get('id'), device_id, dasd_device.devname) dasd_device.format(blksize=blocksize, layout=disk_layout, set_label=label, mode=mode) # check post-format to ensure values match if dasd_device.needs_formatting(blocksize, disk_layout, label): raise RuntimeError( "Dasd %s failed to format" % dasd_device.devname) def disk_handler(info, storage_config, handlers): _dos_names = ['dos', 'msdos'] ptable = info.get('ptable') if ptable and ptable not in PTABLES_VALID: raise ValueError( 'Invalid partition table type: %s in %s' % (ptable, info)) disk = get_path_to_storage_volume(info.get('id'), storage_config) # For disks, 'preserve' is what indicates whether the partition # table should be reused or recreated but for compound devices # such as raids, it indicates if the raid should be created or # assumed to already exist. So to allow a pre-existing raid to get # a new partition table, we use presence of 'wipe' field to # indicate if the disk should be reformatted or not. 
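    # Summarizing the branch below:
    #   type == 'disk'                 -> preserve_ptable = preserve
    #   compound devices (raid, ...)   -> preserve_ptable = preserve and not wipe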
if info['type'] == 'disk': preserve_ptable = config.value_as_boolean(info.get('preserve')) else: preserve_ptable = config.value_as_boolean(info.get('preserve')) \ and not config.value_as_boolean(info.get('wipe')) if preserve_ptable: # Handle preserve flag, verifying if ptable specified in config if ptable and ptable != PTABLE_UNSUPPORTED: current_ptable = block.get_part_table_type(disk) LOG.debug('disk: current ptable type: %s', current_ptable) if current_ptable not in PTABLES_SUPPORTED: raise ValueError( "disk '%s' does not have correct partition table or " "cannot be read, but preserve is set to true (or wipe is " "not set). cannot continue installation." % info.get('id')) LOG.info("disk '%s' marked to be preserved, so keeping partition " "table" % disk) else: # wipe the disk and create the partition table if instructed to do so if config.value_as_boolean(info.get('wipe')): block.wipe_volume(disk, mode=info.get('wipe')) if config.value_as_boolean(ptable): LOG.info("labeling device: '%s' with '%s' partition table", disk, ptable) if ptable == "gpt": # Wipe both MBR and GPT that may be present on the disk. # N.B.: wipe_volume wipes 1M at front and end of the disk. # This could destroy disk data in filesystems that lived # there. block.wipe_volume(disk, mode='superblock') elif ptable in _dos_names: util.subp(["parted", disk, "--script", "mklabel", "msdos"]) elif ptable == "vtoc": util.subp(["fdasd", "-c", "/dev/null", disk]) holders = clear_holders.get_holders(disk) if len(holders) > 0: LOG.info('Detected block holders on disk %s: %s', disk, holders) clear_holders.clear_holders(disk) clear_holders.assert_clear(disk) # Make the name if needed if info.get('name'): make_dname(info.get('id'), storage_config) def getnumberoflogicaldisks(device, storage_config): logicaldisks = 0 for key, item in storage_config.items(): if item.get('device') == device and item.get('flag') == "logical": logicaldisks = logicaldisks + 1 return logicaldisks def find_previous_partition(disk_id, part_id, storage_config): last_partnum = None for item_id, command in storage_config.items(): if item_id == part_id: break # skip anything not on this disk, not a 'partition' or 'extended' if command['type'] != 'partition' or command['device'] != disk_id: continue if command.get('flag') == "extended": continue last_partnum = determine_partition_number(item_id, storage_config) return last_partnum def find_extended_partition(part_device, storage_config): """ Scan storage config for a partition entry from the same device with the 'extended' flag set. :param: part_device: string specifiying the device id to match :param: storage_config: Ordered dict of storage configation :returns: string: item_id if found or None """ for item_id, item in storage_config.items(): if item.get('type') == "partition" and \ item.get('device') == part_device and \ item.get('flag') == "extended": return item_id def calc_dm_partition_info(partition_kname): # finding the start of a dm partition appears to only be possible # via parsing dmsetup output. the size is present in sysfs but # given that it is included in the dmsetup output anyway, we take # it from there too. 
# # dmsetup table # mpatha-part1: 0 6291456 linear 253:0 2048 # : mpath_id = multipath.get_mpath_id_from_device( block.dev_path(partition_kname)) if mpath_id is None: raise RuntimeError('Failed to find mpath_id for partition') table_cmd = ['dmsetup', 'table', '--target', 'linear', mpath_id] out, _err = util.subp(table_cmd, capture=True) if out: (_logical_start, previous_size_sectors, _table_type, _destination, previous_start_sectors) = out.split() return int(previous_start_sectors), int(previous_size_sectors) else: raise RuntimeError('Failed to find mpath_id for partition') def calc_partition_info(partition_kname, logical_block_size_bytes): if partition_kname.startswith('dm-'): p_start, p_size = calc_dm_partition_info(partition_kname) else: pdir = block.sys_block_path(partition_kname) p_size = int(util.load_file(os.path.join(pdir, "size"))) p_start = int(util.load_file(os.path.join(pdir, "start"))) # NB: sys/block/X/{size,start} and dmsetup output are both always # in 512b sectors p_size_sec = p_size * 512 // logical_block_size_bytes p_start_sec = p_start * 512 // logical_block_size_bytes LOG.debug("calc_partition_info: %s size_sectors=%s start_sectors=%s", partition_kname, p_size_sec, p_start_sec) if not all([p_size_sec, p_start_sec]): raise RuntimeError( 'Failed to determine partition %s info', partition_kname) return (p_start_sec, p_size_sec) def verify_exists(devpath): LOG.debug('Verifying %s exists', devpath) if not os.path.exists(devpath): raise RuntimeError("Device %s does not exist" % devpath) def verify_size(devpath, expected_size_bytes, part_info): (found_type, _code) = ptable_uuid_to_flag_entry(part_info.get('type')) if found_type == 'extended': found_size_bytes = int(part_info['size']) * 512 else: found_size_bytes = block.read_sys_block_size_bytes(devpath) msg = ( 'Verifying %s size, expecting %s bytes, found %s bytes' % ( devpath, expected_size_bytes, found_size_bytes)) LOG.debug(msg) if expected_size_bytes != found_size_bytes: raise RuntimeError(msg) def verify_ptable_flag(devpath, expected_flag, label, part_info): if (expected_flag not in SGDISK_FLAGS.keys()) and (expected_flag not in MSDOS_FLAGS.keys()): raise RuntimeError( 'Cannot verify unknown partition flag: %s' % expected_flag) found_flag = None if (label in ('dos', 'msdos')): if expected_flag == 'boot': found_flag = 'boot' if part_info.get('bootable') is True else None elif expected_flag == 'extended': (found_flag, _code) = ptable_uuid_to_flag_entry(part_info['type']) elif expected_flag == 'logical': (_parent, partnumber) = block.get_blockdev_for_partition(devpath) found_flag = 'logical' if int(partnumber) > 4 else None # gpt and msdos primary partitions look up flag by entry['type'] if found_flag is None: (found_flag, _code) = ptable_uuid_to_flag_entry(part_info['type']) msg = ( 'Verifying %s partition flag, expecting %s, found %s' % ( devpath, expected_flag, found_flag)) LOG.debug(msg) if expected_flag != found_flag: raise RuntimeError(msg) def partition_verify_sfdisk(part_action, label, sfdisk_part_info): devpath = sfdisk_part_info['node'] verify_size( devpath, int(util.human2bytes(part_action['size'])), sfdisk_part_info) expected_flag = part_action.get('flag') if expected_flag: verify_ptable_flag(devpath, expected_flag, label, sfdisk_part_info) def partition_verify_fdasd(disk_path, partnumber, info): verify_exists(disk_path) pt = dasd.DasdPartitionTable.from_fdasd(disk_path) pt_entry = pt.partitions[partnumber-1] expected_tracks = pt.tracks_needed(util.human2bytes(info['size'])) msg = ( 'Verifying %s part %s size, 
expecting %s tracks, found %s tracks' % ( disk_path, partnumber, expected_tracks, pt_entry.length)) LOG.debug(msg) if expected_tracks != pt_entry.length: raise RuntimeError(msg) if info.get('flag', 'linux') != 'linux': raise RuntimeError("dasd partitions do not support flags") def partition_handler(info, storage_config, handlers): device = info.get('device') size = info.get('size') flag = info.get('flag') disk_ptable = storage_config.get(device).get('ptable') partition_type = None if not device: raise ValueError("device must be set for partition to be created") if not size: raise ValueError("size must be specified for partition to be created") disk = get_path_to_storage_volume(device, storage_config) partnumber = determine_partition_number(info.get('id'), storage_config) disk_kname = block.path_to_kname(disk) # consider the disks logical sector size when calculating sectors try: (logical_block_size_bytes, _) = block.get_blockdev_sector_size(disk) LOG.debug("%s logical_block_size_bytes: %s", disk_kname, logical_block_size_bytes) except OSError as e: LOG.warning("Couldn't read block size, using default size 512: %s", e) logical_block_size_bytes = 512 if partnumber > 1: pnum = None if partnumber == 5 and disk_ptable == "msdos": extended_part_id = find_extended_partition(device, storage_config) if not extended_part_id: msg = ("Logical partition id=%s requires an extended partition" " and no extended partition '(type: partition, flag: " "extended)' was found in the storage config.") LOG.error(msg, info['id']) raise RuntimeError(msg, info['id']) pnum = determine_partition_number(extended_part_id, storage_config) else: pnum = find_previous_partition(device, info['id'], storage_config) # In case we fail to find previous partition let's error out now if pnum is None: raise RuntimeError( 'Cannot find previous partition on disk %s' % disk) LOG.debug("previous partition number for '%s' found to be '%s'", info.get('id'), pnum) partition_kname = block.partition_kname(disk_kname, pnum) LOG.debug('partition_kname=%s', partition_kname) (previous_start_sectors, previous_size_sectors) = ( calc_partition_info(partition_kname, logical_block_size_bytes)) # Align to 1M at the beginning of the disk and at logical partitions alignment_offset = int((1 << 20) / logical_block_size_bytes) if partnumber == 1: # start of disk offset_sectors = alignment_offset else: # further partitions if disk_ptable == "gpt" or flag != "logical": # msdos primary and any gpt part start after former partition end offset_sectors = previous_start_sectors + previous_size_sectors else: # msdos extended/logical partitions if flag == "logical": if partnumber == 5: # First logical partition # start at extended partition start + alignment_offset offset_sectors = (previous_start_sectors + alignment_offset) else: # Further logical partitions # start at former logical partition end + alignment_offset offset_sectors = (previous_start_sectors + previous_size_sectors + alignment_offset) length_bytes = util.human2bytes(size) # start sector is part of the sectors that define the partitions size # so length has to be "size in sectors - 1" length_sectors = int(length_bytes / logical_block_size_bytes) - 1 # logical partitions can't share their start sector with the extended # partition and logical partitions can't go head-to-head, so we have to # realign and for that increase size as required if info.get('flag') == "extended": logdisks = getnumberoflogicaldisks(device, storage_config) length_sectors = length_sectors + (logdisks * alignment_offset) # Handle 
preserve flag create_partition = True if config.value_as_boolean(info.get('preserve')): part_path = block.dev_path( block.partition_kname(disk_kname, partnumber)) if disk_ptable == 'vtoc': partition_verify_fdasd(disk, partnumber, info) else: sfdisk_info = block.sfdisk_info(disk) part_info = block.get_partition_sfdisk_info(part_path, sfdisk_info) partition_verify_sfdisk(info, sfdisk_info['label'], part_info) LOG.debug( '%s partition %s already present, skipping create', disk, partnumber) create_partition = False if create_partition: # Set flag # 'sgdisk --list-types' LOG.info("adding partition '%s' to disk '%s' (ptable: '%s')", info.get('id'), device, disk_ptable) LOG.debug("partnum: %s offset_sectors: %s length_sectors: %s", partnumber, offset_sectors, length_sectors) # Pre-Wipe the partition if told to do so, do not wipe dos extended # partitions as this may damage the extended partition table if config.value_as_boolean(info.get('wipe')): LOG.info("Preparing partition location on disk %s", disk) if info.get('flag') == "extended": LOG.warn("extended partitions do not need wiping, " "so skipping: '%s'" % info.get('id')) else: # wipe the start of the new partition first by zeroing 1M at # the length of the previous partition wipe_offset = int(offset_sectors * logical_block_size_bytes) LOG.debug('Wiping 1M on %s at offset %s', disk, wipe_offset) # We don't require exclusive access as we're wiping data at an # offset and the current holder maybe part of the current # storage configuration. block.zero_file_at_offsets(disk, [wipe_offset], exclusive=False) if disk_ptable == "msdos": if flag and flag == 'prep': raise ValueError( 'PReP partitions require a GPT partition table') if flag in ["extended", "logical", "primary"]: partition_type = flag else: partition_type = "primary" cmd = ["parted", disk, "--script", "mkpart", partition_type, "%ss" % offset_sectors, "%ss" % str(offset_sectors + length_sectors)] if flag == 'boot': cmd.extend(['set', str(partnumber), 'boot', 'on']) util.subp(cmd, capture=True) elif disk_ptable == "gpt": if flag and flag in SGDISK_FLAGS: typecode = SGDISK_FLAGS[flag] else: typecode = SGDISK_FLAGS['linux'] cmd = ["sgdisk", "--new", "%s:%s:%s" % (partnumber, offset_sectors, length_sectors + offset_sectors), "--typecode=%s:%s" % (partnumber, typecode), disk] util.subp(cmd, capture=True) elif disk_ptable == "vtoc": dasd_pt = dasd.DasdPartitionTable.from_fdasd(disk) dasd_pt.add_partition(partnumber, length_bytes) else: raise ValueError("parent partition has invalid partition table") # ensure partition exists if multipath.is_mpath_device(disk): udevadm_settle() # allow partition creation to happen # update device mapper table mapping to mpathX-partN part_path = disk + "-part%s" % partnumber # sometimes multipath lib creates a block device instead of # a udev symlink, remove this and allow kpartx to create it if os.path.exists(part_path) and not os.path.islink(part_path): util.del_file(part_path) util.subp(['kpartx', '-v', '-a', '-s', '-p', '-part', disk]) else: part_path = block.dev_path(block.partition_kname(disk_kname, partnumber)) block.rescan_block_devices([disk]) udevadm_settle(exists=part_path) wipe_mode = info.get('wipe') if wipe_mode: if wipe_mode == 'superblock' and create_partition: # partition creation pre-wipes partition superblock locations pass else: LOG.debug('Wiping partition %s mode=%s', part_path, wipe_mode) block.wipe_volume(part_path, mode=wipe_mode, exclusive=False) # Make the name if needed if storage_config.get(device).get('name') and partition_type != 
'extended': make_dname(info.get('id'), storage_config) def format_handler(info, storage_config, handlers): volume = info.get('volume') if not volume: raise ValueError("volume must be specified for partition '%s'" % info.get('id')) # Get path to volume volume_path = get_path_to_storage_volume(volume, storage_config) # Handle preserve flag if config.value_as_boolean(info.get('preserve')): # Volume marked to be preserved, not formatting return # Make filesystem using block library LOG.debug("mkfs %s info: %s", volume_path, info) mkfs.mkfs_from_config(volume_path, info) device_type = storage_config.get(volume).get('type') LOG.debug('Formated device type: %s', device_type) if device_type == 'bcache': # other devs have a udev watch on them. Not bcache (LP: #1680597). LOG.debug('Detected bcache device format, calling udevadm trigger to ' 'generate by-uuid symlinks on "%s"', volume_path) udevadm_trigger([volume_path]) def mount_data(info, storage_config): """Return information necessary for a mount or fstab entry. :param info: a 'mount' type from storage config. :param storage_config: related storage_config ordered dict by id. :return FstabData type.""" if info.get('type') != "mount": raise ValueError("entry is not type 'mount' (%s)" % info) spec = info.get('spec') fstype = info.get('fstype') path = info.get('path') freq = str(info.get('freq', 0)) passno = str(info.get('passno', -1)) # turn empty options into "defaults", which works in fstab and mount -o. if not info.get('options'): options = ["defaults"] else: options = info.get('options').split(",") volume_path = None if 'device' not in info: missing = [m for m in ('spec', 'fstype') if not info.get(m)] if not (fstype and spec): raise ValueError( "mount entry without 'device' missing: %s. (%s)" % (missing, info)) else: if info['device'] not in storage_config: raise ValueError( "mount entry refers to non-existant device %s: (%s)" % (info['device'], info)) if not (fstype and spec): format_info = storage_config.get(info['device']) if not fstype: fstype = format_info['fstype'] if not spec: if format_info.get('volume') not in storage_config: raise ValueError( "format type refers to non-existant id %s: (%s)" % (format_info.get('volume'), format_info)) volume_path = get_path_to_storage_volume( format_info['volume'], storage_config) if "_netdev" not in options: if iscsi.volpath_is_iscsi(volume_path): options.append("_netdev") if fstype in ("fat", "fat12", "fat16", "fat32", "fat64"): fstype = "vfat" return FstabData( spec, path, fstype, ",".join(options), freq, passno, volume_path) def _get_volume_type(device_path): lsblock = block._lsblock([device_path]) kname = block.path_to_kname(device_path) return lsblock[kname]['TYPE'] def get_volume_spec(device_path): """ Return the most reliable spec for a device per Ubuntu FSTAB wiki https://wiki.ubuntu.com/FSTAB """ info = udevadm_info(path=device_path) block_type = _get_volume_type(device_path) LOG.debug('volspec: path=%s type=%s', device_path, block_type) LOG.debug('info[DEVLINKS] = %s', info['DEVLINKS']) devlinks = [] # util-linux lsblk may return type=part or type=md for raid partitions # handle both by checking path (e.g. 
/dev/md0p1 should use md-uuid # https://github.com/karelzak/util-linux/commit/ef2ce68b1f if 'raid' in block_type or device_path.startswith('/dev/md'): devlinks = [link for link in info['DEVLINKS'] if os.path.basename(link).startswith('md-uuid-')] elif block_type in ['crypt', 'lvm', 'mpath']: devlinks = [link for link in info['DEVLINKS'] if os.path.basename(link).startswith('dm-uuid-')] elif block_type in ['disk', 'part']: if device_path.startswith('/dev/bcache'): devlinks = [link for link in info['DEVLINKS'] if link.startswith('/dev/bcache/by-uuid')] # on s390x prefer by-path links which are stable and unique. if platform.machine() == 's390x': devlinks = [link for link in info['DEVLINKS'] if link.startswith('/dev/disk/by-path')] # use device-mapper uuid if present if 'DM_UUID' in info: devlinks = [link for link in info['DEVLINKS'] if os.path.basename(link).startswith('dm-uuid-')] if len(devlinks) == 0: # use FS UUID if present devlinks = [link for link in info['DEVLINKS'] if '/by-uuid' in link] if len(devlinks) == 0 and block_type == 'part': devlinks = [link for link in info['DEVLINKS'] if '/by-partuuid' in link] return devlinks[0] if len(devlinks) else device_path def proc_filesystems_passno(fstype): """Examine /proc/filesystems - is this fstype listed and marked nodev? :param fstype: a filesystem name such as ext2 or tmpfs :return passno for fstype - nodev fs get 0, else 1""" if fstype in ('swap', 'none'): return "0" with open('/proc/filesystems', 'r') as procfs: for line in procfs.readlines(): tokens = line.strip('\n').split('\t') if len(tokens) < 2: continue devstatus, curfs = tokens[:2] if curfs != fstype: continue return "0" if devstatus == 'nodev' else "1" return "1" def fstab_line_for_data(fdata): """Return a string representing fdata in /etc/fstab format. :param fdata: a FstabData type :return a newline terminated string for /etc/fstab.""" path = fdata.path if not path: if fdata.fstype == "swap": path = "none" else: raise ValueError("empty path in %s." % str(fdata)) if fdata.spec is None: if not fdata.device: raise ValueError("FstabData missing both spec and device.") spec = get_volume_spec(fdata.device) else: spec = fdata.spec if fdata.options in (None, "", "defaults"): if fdata.fstype == "swap": options = "sw" else: options = "defaults" else: options = fdata.options if path != "none": # prefer provided spec over device device = fdata.spec if fdata.spec else None # if not provided a spec, derive device from calculated spec value if not device: device = fdata.device if fdata.device else spec comment = "# %s was on %s during curtin installation" % (path, device) else: comment = None passno = fdata.passno if int(passno) < 0: passno = proc_filesystems_passno(fdata.fstype) entry = ' '.join((spec, path, fdata.fstype, options, fdata.freq, passno)) + "\n" line = '\n'.join([comment, entry] if comment else [entry]) return line def mount_fstab_data(fdata, target=None): """mount the FstabData fdata with root at target. 
:param fdata: a FstabData type :return None.""" mp = paths.target_path(target, fdata.path) if fdata.device: device = fdata.device else: if fdata.spec.startswith("/") and not fdata.spec.startswith("/dev/"): device = paths.target_path(target, fdata.spec) else: device = fdata.spec options = fdata.options if fdata.options else "defaults" mcmd = ['mount'] if fdata.fstype not in ("bind", None, "none"): mcmd.extend(['-t', fdata.fstype]) mcmd.extend(['-o', options, device, mp]) if fdata.fstype == "bind" or "bind" in options.split(","): # for bind mounts, create the 'src' dir (mount -o bind src target) util.ensure_dir(device) util.ensure_dir(mp) try: util.subp(mcmd, capture=True) except util.ProcessExecutionError as e: LOG.exception(e) msg = 'Mount failed: %s @ %s with options %s' % (device, mp, options) LOG.error(msg) raise RuntimeError(msg) def mount_apply(fdata, target=None, fstab=None): if fdata.fstype != "swap": mount_fstab_data(fdata, target=target) # Add volume to fstab if fstab: util.write_file(fstab, fstab_line_for_data(fdata), omode="a") else: LOG.info("fstab not in environment, so not writing") def mount_handler(info, storage_config, handlers): """ Handle storage config type: mount info = { 'id': 'rootfs_mount', 'type': 'mount', 'path': '/', 'options': 'defaults,errors=remount-ro', 'device': 'rootfs', } Mount specified device under target at 'path' and generate fstab entry. """ state = util.load_command_environment(strict=True) mount_apply(mount_data(info, storage_config), target=state.get('target'), fstab=state.get('fstab')) def verify_volgroup_members(vg_name, pv_paths): # LVM may be offline, so start it lvm.activate_volgroups() # Verify that volgroup exists and contains all specified devices found_pvs = set(lvm.get_pvols_in_volgroup(vg_name)) expected_pvs = set(pv_paths) msg = ('Verifying lvm volgroup %s members, expected %s, found %s ' % ( vg_name, expected_pvs, found_pvs)) LOG.debug(msg) if expected_pvs != found_pvs: raise RuntimeError(msg) def lvm_volgroup_verify(vg_name, device_paths): verify_volgroup_members(vg_name, device_paths) def lvm_volgroup_handler(info, storage_config, handlers): devices = info.get('devices') device_paths = [] name = info.get('name') preserve = config.value_as_boolean(info.get('preserve')) if not devices: raise ValueError("devices for volgroup '%s' must be specified" % info.get('id')) if not name: raise ValueError("name for volgroups needs to be specified") for device_id in devices: device = storage_config.get(device_id) if not device: raise ValueError("device '%s' could not be found in storage config" % device_id) device_paths.append(get_path_to_storage_volume(device_id, storage_config)) create_vg = True if preserve: lvm_volgroup_verify(name, device_paths) LOG.debug('lvm_volgroup %s already present, skipping create', name) create_vg = False if create_vg: # Create vgrcreate command and run # capture output to avoid printing it to log # Use zero to clear target devices of any metadata util.subp(['vgcreate', '--force', '--zero=y', '--yes', name] + device_paths, capture=True) # refresh lvmetad lvm.lvm_scan() def verify_lv_in_vg(lv_name, vg_name): found_lvols = lvm.get_lvols_in_volgroup(vg_name) msg = ('Verifying %s logical volume is in %s volume ' 'group, found %s ' % (lv_name, vg_name, found_lvols)) LOG.debug(msg) if lv_name not in found_lvols: raise RuntimeError(msg) def verify_lv_size(lv_name, size): expected_size_bytes = util.human2bytes(size) found_size_bytes = lvm.get_lv_size_bytes(lv_name) msg = ('Verifying %s logical value is size bytes %s, found %s 
' % (lv_name, expected_size_bytes, found_size_bytes)) LOG.debug(msg) if expected_size_bytes != found_size_bytes: raise RuntimeError(msg) def lvm_partition_verify(lv_name, vg_name, info): verify_lv_in_vg(lv_name, vg_name) if 'size' in info: verify_lv_size(lv_name, info['size']) def lvm_partition_handler(info, storage_config, handlers): volgroup = storage_config[info['volgroup']]['name'] name = info['name'] if not volgroup: raise ValueError("lvm volgroup for lvm partition must be specified") if not name: raise ValueError("lvm partition name must be specified") if info.get('ptable'): raise ValueError("Partition tables on top of lvm logical volumes is " "not supported") preserve = config.value_as_boolean(info.get('preserve')) create_lv = True if preserve: lvm_partition_verify(name, volgroup, info) LOG.debug('lvm_partition %s already present, skipping create', name) create_lv = False if create_lv: # Use 'wipesignatures' (if available) and 'zero' to clear target lv # of any fs metadata cmd = ["lvcreate", volgroup, "--name", name, "--zero=y"] release = distro.lsb_release()['codename'] if release not in ['precise', 'trusty']: cmd.extend(["--wipesignatures=y", "--yes"]) if info.get('size'): size = util.human2bytes(info["size"]) cmd.extend(["--size", "{}B".format(size)]) else: cmd.extend(["--extents", "100%FREE"]) util.subp(cmd) # refresh lvmetad lvm.lvm_scan() wipe_mode = info.get('wipe', 'superblock') if wipe_mode and create_lv: lv_path = get_path_to_storage_volume(info['id'], storage_config) LOG.debug('Wiping logical volume %s mode=%s', lv_path, wipe_mode) block.wipe_volume(lv_path, mode=wipe_mode, exclusive=False) make_dname(info['id'], storage_config) def verify_blkdev_used(dmcrypt_dev, expected_blkdev): dminfo = block.dmsetup_info(dmcrypt_dev) found_blkdev = dminfo['blkdevs_used'] msg = ( 'Verifying %s volume, expecting %s , found %s ' % ( dmcrypt_dev, expected_blkdev, found_blkdev)) LOG.debug(msg) if expected_blkdev != found_blkdev: raise RuntimeError(msg) def dm_crypt_verify(dmcrypt_dev, volume_path): verify_exists(dmcrypt_dev) verify_blkdev_used(dmcrypt_dev, volume_path) def dm_crypt_handler(info, storage_config, handlers): state = util.load_command_environment(strict=True) volume = info.get('volume') keysize = info.get('keysize') cipher = info.get('cipher') dm_name = info.get('dm_name') if not dm_name: dm_name = info.get('id') dmcrypt_dev = os.path.join("/dev", "mapper", dm_name) preserve = config.value_as_boolean(info.get('preserve')) if not volume: raise ValueError("volume for cryptsetup to operate on must be \ specified") volume_path = get_path_to_storage_volume(volume, storage_config) volume_byid_path = block.disk_to_byid_path(volume_path) if 'keyfile' in info: if 'key' in info: raise ValueError("cannot specify both key and keyfile") keyfile_is_tmp = False keyfile = info['keyfile'] elif 'key' in info: # TODO: this is insecure, find better way to do this key = info.get('key') keyfile = tempfile.mkstemp()[1] keyfile_is_tmp = True util.write_file(keyfile, key, mode=0o600) else: raise ValueError("encryption key or keyfile must be specified") create_dmcrypt = True if preserve: dm_crypt_verify(dmcrypt_dev, volume_path) LOG.debug('dm_crypt %s already present, skipping create', dmcrypt_dev) create_dmcrypt = False if create_dmcrypt: # if zkey is available, attempt to generate and use it; if it's not # available or fails to setup properly, fallback to normal cryptsetup # passing strict=False downgrades log messages to warnings zkey_used = None if block.zkey_supported(strict=False): volume_name = 
"%s:%s" % (volume_byid_path, dm_name) LOG.debug('Attempting to setup zkey for %s', volume_name) luks_type = 'luks2' gen_cmd = ['zkey', 'generate', '--xts', '--volume-type', luks_type, '--sector-size', '4096', '--name', dm_name, '--description', "curtin generated zkey for %s" % volume_name, '--volumes', volume_name] run_cmd = ['zkey', 'cryptsetup', '--run', '--volumes', volume_byid_path, '--batch-mode', '--key-file', keyfile] try: util.subp(gen_cmd, capture=True) util.subp(run_cmd, capture=True) zkey_used = os.path.join(os.path.split(state['fstab'])[0], "zkey_used") # mark in state that we used zkey util.write_file(zkey_used, "1") except util.ProcessExecutionError as e: LOG.exception(e) msg = 'Setup of zkey on %s failed, fallback to cryptsetup.' LOG.error(msg % volume_path) if not zkey_used: LOG.debug('Using cryptsetup on %s', volume_path) luks_type = "luks" cmd = ["cryptsetup"] if cipher: cmd.extend(["--cipher", cipher]) if keysize: cmd.extend(["--key-size", keysize]) cmd.extend(["luksFormat", volume_path, keyfile]) util.subp(cmd) cmd = ["cryptsetup", "open", "--type", luks_type, volume_path, dm_name, "--key-file", keyfile] util.subp(cmd) if keyfile_is_tmp: os.remove(keyfile) wipe_mode = info.get('wipe') if wipe_mode: if wipe_mode == 'superblock' and create_dmcrypt: # newly created dmcrypt volumes do not need superblock wiping pass else: LOG.debug('Wiping dm_crypt device %s mode=%s', dmcrypt_dev, wipe_mode) block.wipe_volume(dmcrypt_dev, mode=wipe_mode, exclusive=False) # A crypttab will be created in the same directory as the fstab in the # configuration. This will then be copied onto the system later if state['fstab']: state_dir = os.path.dirname(state['fstab']) crypt_tab_location = os.path.join(state_dir, "crypttab") uuid = block.get_volume_uuid(volume_path) util.write_file(crypt_tab_location, "%s UUID=%s none luks\n" % (dm_name, uuid), omode="a") else: LOG.info("fstab configuration is not present in environment, so \ cannot locate an appropriate directory to write crypttab in \ so not writing crypttab") def verify_md_components(md_devname, raidlevel, device_paths, spare_paths, container): # check if the array is already up, if not try to assemble errors = [] check_ok = False try: mdadm.md_check(md_devname, raidlevel, device_paths, spare_paths, container) check_ok = True except ValueError as err1: errors.append(err1) LOG.info("assembling preserved raid for %s", md_devname) mdadm.mdadm_assemble(md_devname, device_paths, spare_paths) try: mdadm.md_check(md_devname, raidlevel, device_paths, spare_paths, container) check_ok = True except ValueError as err2: errors.append(err2) msg = ('Verified %s raid composition, raid is %s' % (md_devname, 'OK' if check_ok else 'not OK')) LOG.debug(msg) if not check_ok: for err in errors: LOG.error("Error checking raid %s: %s", md_devname, err) raise ValueError(msg) def raid_verify(md_devname, raidlevel, device_paths, spare_paths, container): verify_md_components( md_devname, raidlevel, device_paths, spare_paths, container) def raid_handler(info, storage_config, handlers): state = util.load_command_environment(strict=True) devices = info.get('devices') raidlevel = info.get('raidlevel') spare_devices = info.get('spare_devices') md_devname = block.md_path(info.get('name')) container = info.get('container') metadata = info.get('metadata') preserve = config.value_as_boolean(info.get('preserve')) if not devices and not container: raise ValueError("devices or container for raid must be specified") if raidlevel not in ['linear', 'raid0', 0, 'stripe', 'raid1', 1, 
'mirror', 'raid4', 4, 'raid5', 5, 'raid6', 6, 'raid10', 10, 'container']: raise ValueError("invalid raidlevel '%s'" % raidlevel) if raidlevel in ['linear', 'raid0', 0, 'stripe', 'container']: if spare_devices: raise ValueError("spareunsupported in raidlevel '%s'" % raidlevel) LOG.debug('raid: cfg: %s', util.json_dumps(info)) container_dev = None device_paths = [] if container: container_dev = get_path_to_storage_volume(container, storage_config) else: device_paths = list(get_path_to_storage_volume(dev, storage_config) for dev in devices) LOG.debug('raid: device path mapping: {}'.format( zip(devices, device_paths))) spare_device_paths = [] if spare_devices: spare_device_paths = list(get_path_to_storage_volume(dev, storage_config) for dev in spare_devices) LOG.debug('raid: spare device path mapping: %s', list(zip(spare_devices, spare_device_paths))) create_raid = True if preserve: raid_verify( md_devname, raidlevel, device_paths, spare_device_paths, container_dev) LOG.debug('raid %s already present, skipping create', md_devname) create_raid = False if create_raid: mdadm.mdadm_create(md_devname, raidlevel, device_paths, spare_device_paths, container_dev, info.get('mdname', ''), metadata) wipe_mode = info.get('wipe') if wipe_mode: if wipe_mode == 'superblock' and create_raid: # Newly created raid devices already wipe member superblocks at # their data offset (this is equivalent to wiping the assembled # device, see curtin.block.mdadm.zero_device for more details. pass else: LOG.debug('Wiping raid device %s mode=%s', md_devname, wipe_mode) block.wipe_volume(md_devname, mode=wipe_mode, exclusive=False) # Make dname rule for this dev make_dname(info.get('id'), storage_config) # A mdadm.conf will be created in the same directory as the fstab in the # configuration. This will then be copied onto the installed system later. # The file must also be written onto the running system to enable it to run # mdadm --assemble and continue installation if state['fstab']: state_dir = os.path.dirname(state['fstab']) mdadm_location = os.path.join(state_dir, "mdadm.conf") mdadm_scan_data = mdadm.mdadm_detail_scan() util.write_file(mdadm_location, mdadm_scan_data) else: LOG.info("fstab configuration is not present in the environment, so \ cannot locate an appropriate directory to write mdadm.conf in, \ so not writing mdadm.conf") # If ptable is specified, call disk_handler on this mdadm device to create # the table if info.get('ptable'): handlers['disk'](info, storage_config, handlers) def verify_bcache_cachedev(cachedev): """ verify that the specified cache_device is a bcache cache device.""" result = bcache.is_caching(cachedev) msg = ('Verifying %s is bcache cache device, found device is %s' % (cachedev, 'OK' if result else 'not OK')) LOG.debug(msg) if not result: raise RuntimeError(msg) def verify_bcache_backingdev(backingdev): """ verify that the specified backingdev is a bcache backing device.""" result = bcache.is_backing(backingdev) msg = ('Verifying %s is bcache backing device, found device is %s' % (backingdev, 'OK' if result else 'not OK')) LOG.debug(msg) if not result: raise RuntimeError(msg) def verify_cache_mode(backing_dev, backing_superblock, expected_mode): """ verify the backing device cache-mode is set as expected. 
""" found = backing_superblock.get('dev.data.cache_mode', '') msg = ('Verifying %s bcache cache-mode, expecting %s, found %s' % (backing_dev, expected_mode, found)) LOG.debug(msg) if expected_mode not in found: raise RuntimeError(msg) def verify_bcache_cset_uuid_match(backing_dev, cinfo, binfo): expected_cset_uuid = cinfo.get('cset.uuid') found_cset_uuid = binfo.get('cset.uuid') result = ((expected_cset_uuid == found_cset_uuid) if expected_cset_uuid else False) msg = ('Verifying bcache backing_device %s cset.uuid is %s, found %s' % (backing_dev, expected_cset_uuid, found_cset_uuid)) LOG.debug(msg) if not result: raise RuntimeError(msg) def bcache_verify_cachedev(cachedev): verify_bcache_cachedev(cachedev) return True def bcache_verify_backingdev(backingdev): verify_bcache_backingdev(backingdev) return True def bcache_verify(cachedev, backingdev, cache_mode): bcache_verify_cachedev(cachedev) bcache_verify_backingdev(backingdev) cache_info = bcache.superblock_asdict(cachedev) backing_info = bcache.superblock_asdict(backingdev) verify_bcache_cset_uuid_match(backingdev, cache_info, backing_info) if cache_mode: verify_cache_mode(backingdev, backing_info, cache_mode) return True def bcache_handler(info, storage_config, handlers): backing_device = get_path_to_storage_volume(info.get('backing_device'), storage_config) cache_device = get_path_to_storage_volume(info.get('cache_device'), storage_config) cache_mode = info.get('cache_mode', None) preserve = config.value_as_boolean(info.get('preserve')) if not backing_device or not cache_device: raise ValueError("backing device and cache device for bcache" " must be specified") create_bcache = True if preserve: if cache_device and backing_device: if bcache_verify(cache_device, backing_device, cache_mode): create_bcache = False elif cache_device: if bcache_verify_cachedev(cache_device): create_bcache = False elif backing_device: if bcache_verify_backingdev(backing_device): create_bcache = False if not create_bcache: LOG.debug('bcache %s already present, skipping create', info['id']) cset_uuid = bcache_dev = None if create_bcache and cache_device: cset_uuid = bcache.create_cache_device(cache_device) if create_bcache and backing_device: bcache_dev = bcache.create_backing_device(backing_device, cache_device, cache_mode, cset_uuid) if cache_mode and not backing_device: raise ValueError("cache mode specified which can only be set on " "backing devices, but none was specified") wipe_mode = info.get('wipe') if wipe_mode and bcache_dev: LOG.debug('Wiping bcache device %s mode=%s', bcache_dev, wipe_mode) block.wipe_volume(bcache_dev, mode=wipe_mode, exclusive=False) if info.get('name'): # Make dname rule for this dev make_dname(info.get('id'), storage_config) if info.get('ptable'): handlers['disk'](info, storage_config, handlers) LOG.debug('Finished bcache creation for backing %s or caching %s', backing_device, cache_device) def zpool_handler(info, storage_config, handlers): """ Create a zpool based in storage_configuration """ zfs.zfs_assert_supported() state = util.load_command_environment(strict=True) # extract /dev/disk/by-id paths for each volume used vdevs = [get_path_to_storage_volume(v, storage_config) for v in info.get('vdevs', [])] poolname = info.get('pool') mountpoint = info.get('mountpoint') pool_properties = info.get('pool_properties', {}) fs_properties = info.get('fs_properties', {}) altroot = state['target'] if not vdevs or not poolname: raise ValueError("pool and vdevs for zpool must be specified") # map storage volume to by-id path for 
persistent path vdevs_byid = [] for vdev in vdevs: byid = block.disk_to_byid_path(vdev) if not byid: msg = ('Cannot find by-id path to zpool device "%s". ' 'The zpool may fail to import of path names change.' % vdev) LOG.warning(msg) byid = vdev vdevs_byid.append(byid) LOG.info('Creating zpool %s with vdevs %s', poolname, vdevs_byid) zfs.zpool_create(poolname, vdevs_byid, mountpoint=mountpoint, altroot=altroot, pool_properties=pool_properties, zfs_properties=fs_properties) def zfs_handler(info, storage_config, handlers): """ Create a zfs filesystem """ zfs.zfs_assert_supported() state = util.load_command_environment(strict=True) poolname = get_poolname(info, storage_config) volume = info.get('volume') properties = info.get('properties', {}) LOG.info('Creating zfs dataset %s/%s with properties %s', poolname, volume, properties) zfs.zfs_create(poolname, volume, zfs_properties=properties) mountpoint = properties.get('mountpoint') if mountpoint: if state['fstab']: fstab_entry = ( "# Use `zfs list` for current zfs mount info\n" + "# %s %s defaults 0 0\n" % (poolname, mountpoint)) util.write_file(state['fstab'], fstab_entry, omode='a') def get_device_paths_from_storage_config(storage_config): """Returns a list of device paths in a storage config which have wipe config enabled filtering out constructed paths that do not exist. :param: storage_config: Ordered dict of storage configation """ dpaths = [] for (k, v) in storage_config.items(): if v.get('type') in ['disk', 'partition']: wipe = config.value_as_boolean(v.get('wipe')) preserve = config.value_as_boolean(v.get('preserve')) if v.get('type') == 'disk' and all([wipe, preserve]): msg = 'type:disk id=%s has both wipe and preserve' % v['id'] raise RuntimeError(msg) if wipe: try: # skip paths that do not exit, nothing to wipe dpath = get_path_to_storage_volume(k, storage_config) if os.path.exists(dpath): dpaths.append(dpath) except Exception: pass return dpaths def zfsroot_update_storage_config(storage_config): """Return an OrderedDict that has 'zfsroot' format expanded into zpool and zfs commands to enable ZFS on rootfs. 
""" zfsroots = [d for i, d in storage_config.items() if d.get('fstype') == "zfsroot"] if len(zfsroots) == 0: return storage_config if len(zfsroots) > 1: raise ValueError( "zfsroot found in two entries in storage config: %s" % zfsroots) root = zfsroots[0] vol = root.get('volume') if not vol: raise ValueError("zfsroot entry did not have 'volume'.") if vol not in storage_config: raise ValueError( "zfs volume '%s' not referenced in storage config" % vol) mounts = [d for i, d in storage_config.items() if d.get('type') == 'mount' and d.get('path') == "/"] if len(mounts) != 1: raise ValueError("Multiple 'mount' entries point to '/'") mount = mounts[0] if mount.get('device') != root['id']: raise ValueError( "zfsroot Mountpoint entry for / has device=%s, expected '%s'" % (mount.get("device"), root['id'])) # validate that the boot disk is GPT partitioned bootdevs = [d for i, d in storage_config.items() if d.get('grub_device')] bootdev = bootdevs[0] if bootdev.get('ptable') != 'gpt': raise ValueError( 'zfsroot requires bootdisk with GPT partition table' ' found "%s" on disk id="%s"' % (bootdev.get('ptable'), bootdev.get('id'))) LOG.info('Enabling experimental zfsroot!') ret = OrderedDict() for eid, info in storage_config.items(): if info.get('id') == mount['id']: continue if info.get('fstype') != "zfsroot": ret[eid] = info continue vdevs = [storage_config[info['volume']]['id']] baseid = info['id'] pool = { 'type': 'zpool', 'id': baseid + "_zfsroot_pool", 'pool': 'rpool', 'vdevs': vdevs, 'mountpoint': '/' } container = { 'type': 'zfs', 'id': baseid + "_zfsroot_container", 'pool': pool['id'], 'volume': '/ROOT', 'properties': { 'canmount': 'off', 'mountpoint': 'none', } } rootfs = { 'type': 'zfs', 'id': baseid + "_zfsroot_fs", 'pool': pool['id'], 'volume': '/ROOT/zfsroot', 'properties': { 'canmount': 'noauto', 'mountpoint': '/', } } for d in (pool, container, rootfs): if d['id'] in ret: raise RuntimeError( "Collided on id '%s' in storage config" % d['id']) ret[d['id']] = d return ret def meta_clear(devices, report_prefix=''): """ Run clear_holders on specified list of devices. :param: devices: a list of block devices (/dev/XXX) to be cleared :param: report_prefix: a string to pass to the ReportEventStack """ # shut down any already existing storage layers above any disks used in # config that have 'wipe' set with events.ReportEventStack( name=report_prefix + '/clear-holders', reporting_enabled=True, level='INFO', description="removing previous storage devices"): clear_holders.start_clear_holders_deps() clear_holders.clear_holders(devices) # if anything was not properly shut down, stop installation clear_holders.assert_clear(devices) def meta_custom(args): """Does custom partitioning based on the layout provided in the config file. Section with the name storage contains information on which partitions on which disks to create. It also contains information about overlays (raid, lvm, bcache) which need to be setup. 
""" command_handlers = { 'dasd': dasd_handler, 'disk': disk_handler, 'partition': partition_handler, 'format': format_handler, 'mount': mount_handler, 'lvm_volgroup': lvm_volgroup_handler, 'lvm_partition': lvm_partition_handler, 'dm_crypt': dm_crypt_handler, 'raid': raid_handler, 'bcache': bcache_handler, 'zfs': zfs_handler, 'zpool': zpool_handler, } if args.testmode: command_handlers['image'] = image_handler state = {} else: state = util.load_command_environment(strict=True) cfg = config.load_command_config(args, state) storage_config_dict = extract_storage_ordered_dict(cfg) storage_config_dict = zfsroot_update_storage_config(storage_config_dict) # set up reportstack stack_prefix = state.get('report_stack_prefix', '') for item_id, command in storage_config_dict.items(): handler = command_handlers.get(command['type']) if not handler: raise ValueError("unknown command type '%s'" % command['type']) with events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level="INFO", description="configuring %s: %s" % (command['type'], command['id'])): try: handler(command, storage_config_dict, command_handlers) except Exception as error: LOG.error("An error occured handling '%s': %s - %s" % (item_id, type(error).__name__, error)) raise if args.testmode: util.subp(['losetup', '--detach'] + list(DEVS)) if args.umount: util.do_umount(state['target'], recursive=True) return 0 def meta_simple(args): """Creates a root partition. If args.mode == SIMPLE_BOOT, it will also create a separate /boot partition. """ state = util.load_command_environment(strict=True) cfg = config.load_command_config(args, state) if args.target is not None: state['target'] = args.target if state['target'] is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) devpath = None if cfg.get("storage") is not None: for i in cfg["storage"]["config"]: serial = i.get("serial") if serial is None: continue try: diskPath = block.lookup_disk(serial) except ValueError as err: LOG.debug("Skipping disk '%s': %s", i.get("id"), err) continue if i.get("grub_device"): devpath = diskPath break devices = args.devices bootpt = get_bootpt_cfg( cfg.get('block-meta', {}).get('boot-partition', {}), enabled=args.mode == SIMPLE_BOOT, fstype=args.boot_fstype, root_fstype=args.fstype) ptfmt = get_partition_format_type(cfg.get('block-meta', {})) # Remove duplicates but maintain ordering. devices = list(OrderedDict.fromkeys(devices)) # Multipath devices might be automatically assembled if multipath-tools # package is available in the installation environment. We need to stop # all multipath devices to exclusively use one of paths as a target disk. block.stop_all_unused_multipath_devices() if len(devices) == 0 and devpath is None: devices = block.get_installable_blockdevs() LOG.warn("'%s' mode, no devices given. unused list: %s", args.mode, devices) # Check if the list of installable block devices is still empty after # checking for block devices and filtering out the removable ones. In # this case we may have a system which has its harddrives reported by # lsblk incorrectly. In this case we search for installable # blockdevices that are removable as a last resort before raising an # exception. if len(devices) == 0: devices = block.get_installable_blockdevs(include_removable=True) if len(devices) == 0: # Fail gracefully if no devices are found, still. raise Exception("No valid target devices found that curtin " "can install on.") else: LOG.warn("No non-removable, installable devices found. 
List " "populated with removable devices allowed: %s", devices) if devpath is not None: target = devpath elif len(devices) > 1: if args.devices is not None: LOG.warn("'%s' mode but multiple devices given. " "using first found", args.mode) available = [f for f in devices if block.is_valid_device(f)] target = sorted(available)[0] LOG.warn("mode is '%s'. multiple devices given. using '%s' " "(first available)", args.mode, target) else: target = devices[0] if not block.is_valid_device(target): raise Exception("target device '%s' is not a valid device" % target) (devname, devnode) = block.get_dev_name_entry(target) LOG.info("installing in '%s' mode to '%s'", args.mode, devname) sources = cfg.get('sources', {}) dd_images = util.get_dd_images(sources) if len(dd_images): # we have at least one dd-able image # we will only take the first one rootdev = write_image_to_disk(dd_images[0], devname) util.subp(['mount', rootdev, state['target']]) return 0 # helper partition will forcibly set up partition there ptcmd = ['partition', '--format=' + ptfmt] if bootpt['enabled']: ptcmd.append('--boot') ptcmd.append(devnode) if bootpt['enabled'] and ptfmt in ("uefi", "prep"): raise ValueError("format=%s with boot partition not supported" % ptfmt) bootdev_ptnum = None rootdev_ptnum = None bootdev = None if bootpt['enabled']: bootdev_ptnum = 1 rootdev_ptnum = 2 else: if ptfmt == "prep": rootdev_ptnum = 2 else: rootdev_ptnum = 1 logtime("creating partition with: %s" % ' '.join(ptcmd), util.subp, ptcmd) ptpre = "" if not os.path.exists("%s%s" % (devnode, rootdev_ptnum)): # perhaps the device is /dev/p if os.path.exists("%sp%s" % (devnode, rootdev_ptnum)): ptpre = "p" else: LOG.warn("root device %s%s did not exist, expecting failure", devnode, rootdev_ptnum) if bootdev_ptnum: bootdev = "%s%s%s" % (devnode, ptpre, bootdev_ptnum) if ptfmt == "uefi": # assumed / required from the partitioner pt_uefi uefi_ptnum = "15" uefi_label = "uefi-boot" uefi_dev = "%s%s%s" % (devnode, ptpre, uefi_ptnum) rootdev = "%s%s%s" % (devnode, ptpre, rootdev_ptnum) LOG.debug("rootdev=%s bootdev=%s fmt=%s bootpt=%s", rootdev, bootdev, ptfmt, bootpt) # mkfs for root partition first and mount cmd = ['mkfs.%s' % args.fstype, '-q', '-L', 'cloudimg-rootfs', rootdev] logtime(' '.join(cmd), util.subp, cmd) util.subp(['mount', rootdev, state['target']]) if bootpt['enabled']: # create 'boot' directory in state['target'] boot_dir = os.path.join(state['target'], 'boot') util.subp(['mkdir', boot_dir]) # mkfs for boot partition and mount cmd = ['mkfs.%s' % bootpt['fstype'], '-q', '-L', bootpt['label'], bootdev] logtime(' '.join(cmd), util.subp, cmd) util.subp(['mount', bootdev, boot_dir]) if ptfmt == "uefi": uefi_dir = os.path.join(state['target'], 'boot', 'efi') util.ensure_dir(uefi_dir) util.subp(['mount', uefi_dev, uefi_dir]) if state['fstab']: with open(state['fstab'], "w") as fp: if bootpt['enabled']: fp.write("LABEL=%s /boot %s defaults 0 0\n" % (bootpt['label'], bootpt['fstype'])) if ptfmt == "uefi": # label created in helpers/partition for uefi fp.write("LABEL=%s /boot/efi vfat defaults 0 0\n" % uefi_label) fp.write("LABEL=%s / %s defaults 0 0\n" % ('cloudimg-rootfs', args.fstype)) else: LOG.info("fstab not in environment, so not writing") if args.umount: util.do_umount(state['target'], recursive=True) return 0 def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, block_meta) # vi: ts=4 expandtab syntax=python 
curtin-21.3/curtin/commands/block_wipe.py000066400000000000000000000017701415350476600205600ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import sys import curtin.block as block from . import populate_one_subcmd from .. import log LOG = log.LOG def wipe_main(args): for blockdev in args.devices: try: LOG.debug('Wiping volume %s with mode=%s', blockdev, args.mode) block.wipe_volume(blockdev, mode=args.mode) except Exception as e: sys.stderr.write( "Failed to wipe volume %s in mode %s: %s" % (blockdev, args.mode, e)) sys.exit(1) sys.exit(0) CMD_ARGUMENTS = ( ((('-m', '--mode'), {'help': 'mode for wipe.', 'action': 'store', 'default': 'superblock', 'choices': ['zero', 'superblock', 'superblock-recursive', 'random']}), ('devices', {'help': 'devices to wipe', 'default': [], 'nargs': '+'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, wipe_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/clear_holders.py000066400000000000000000000045441415350476600212520ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from .block_meta import ( extract_storage_ordered_dict, get_device_paths_from_storage_config, ) from curtin import block from curtin.log import LOG from .import populate_one_subcmd def clear_holders_main(args): """ wrapper for clear_holders accepting cli args """ cfg = {} if args.config: cfg = args.config # run clear holders on potential devices devices = args.devices if not devices: if 'storage' in cfg: devices = get_device_paths_from_storage_config( extract_storage_ordered_dict(cfg)) if len(devices) == 0: devices = cfg.get('block-meta', {}).get('devices', []) if (not all(block.is_block_device(device) for device in devices) or len(devices) == 0): raise ValueError('invalid devices specified') block.clear_holders.start_clear_holders_deps() if args.shutdown_plan: # get current holders and plan how to shut them down holder_trees = [block.clear_holders.gen_holders_tree(path) for path in devices] LOG.info('Current device storage tree:\n%s', '\n'.join(block.clear_holders.format_holders_tree(tree) for tree in holder_trees)) ordered_devs = ( block.clear_holders.plan_shutdown_holder_trees(holder_trees)) LOG.info('Shutdown Plan:\n%s', "\n".join(map(str, ordered_devs))) else: block.clear_holders.clear_holders(devices, try_preserve=args.preserve) if args.preserve: print('ran clear_holders attempting to preserve data. however, ' 'hotplug support for some devices may cause holders to ' 'restart ') block.clear_holders.assert_clear(devices) CMD_ARGUMENTS = ( (('devices', {'help': 'devices to free', 'default': [], 'nargs': '*'}), (('-P', '--shutdown-plan'), {'help': 'Print the clear-holder shutdown plan only', 'default': False, 'action': 'store_true'}), (('-p', '--preserve'), {'help': 'try to shut down holders without erasing anything', 'default': False, 'action': 'store_true'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, clear_holders_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/collect_logs.py000066400000000000000000000144221415350476600211110ustar00rootroot00000000000000# Copyright (C) 2017 Canonical Ltd. # # Curtin is free software: you can redistribute it and/or modify it under # the terms of the GNU Affero General Public License as published by the # Free Software Foundation, either version 3 of the License, or (at your # option) any later version. 
# # Curtin is distributed in the hope that it will be useful, but WITHOUT ANY # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for # more details. # # You should have received a copy of the GNU Affero General Public License # along with Curtin. If not, see . from datetime import datetime import json import os import re import shutil import sys import tempfile from .. import util from .. import version from ..config import load_config, merge_config from . import populate_one_subcmd from .install import CONFIG_BUILTIN, SAVE_INSTALL_CONFIG CURTIN_PACK_CONFIG_DIR = '/curtin/configs' def collect_logs_main(args): """Collect all configured curtin logs and into a tarfile.""" if os.path.exists(SAVE_INSTALL_CONFIG): cfg = load_config(SAVE_INSTALL_CONFIG) elif os.path.isdir(CURTIN_PACK_CONFIG_DIR): cfg = CONFIG_BUILTIN.copy() for _file in sorted(os.listdir(CURTIN_PACK_CONFIG_DIR)): merge_config( cfg, load_config(os.path.join(CURTIN_PACK_CONFIG_DIR, _file))) else: sys.stderr.write( 'Warning: no configuration file found in %s or %s.\n' 'Using builtin configuration.' % ( SAVE_INSTALL_CONFIG, CURTIN_PACK_CONFIG_DIR)) cfg = CONFIG_BUILTIN.copy() create_log_tarfile(args.output, cfg) sys.exit(0) def create_log_tarfile(tarfile, config): """Create curtin logs tarfile within a temporary directory. A subdirectory curtin- is created in the tar containing the specified logs. Duplicates are skipped, paths which don't exist are skipped. @param tarfile: Path of the tarfile we want to create. @param config: Dictionary of curtin's configuration. """ if not (isinstance(tarfile, util.string_types) and tarfile): raise ValueError("Invalid value '%s' for tarfile" % tarfile) target_dir = os.path.dirname(tarfile) if target_dir and not os.path.exists(target_dir): util.ensure_dir(target_dir) instcfg = config.get('install', {}) logfile = instcfg.get('log_file') alllogs = instcfg.get('post_files', []) if logfile: alllogs.append(logfile) # Prune duplicates and files which do not exist stderr = sys.stderr valid_logs = [] for logfile in set(alllogs): if os.path.exists(logfile): valid_logs.append(logfile) else: stderr.write( 'Skipping logfile %s: file does not exist\n' % logfile) maascfg = instcfg.get('maas', {}) redact_values = [] for key in ('consumer_key', 'token_key', 'token_secret'): redact_value = maascfg.get(key) if redact_value: redact_values.append(redact_value) date = datetime.utcnow().strftime('%Y-%m-%d-%H-%M') tmp_dir = tempfile.mkdtemp() # The tar will contain a dated subdirectory containing all logs tar_dir = 'curtin-logs-{date}'.format(date=date) cmd = ['tar', '-cvf', os.path.join(os.getcwd(), tarfile), tar_dir] try: with util.chdir(tmp_dir): os.mkdir(tar_dir) _collect_system_info(tar_dir, config) for logfile in valid_logs: shutil.copy(logfile, tar_dir) _redact_sensitive_information(tar_dir, redact_values) util.subp(cmd, capture=True) finally: if os.path.exists(tmp_dir): shutil.rmtree(tmp_dir) sys.stderr.write('Wrote: %s\n' % tarfile) def _collect_system_info(target_dir, config): """Copy and create system information files in the provided target_dir.""" util.write_file( os.path.join(target_dir, 'version'), version.version_string()) if os.path.isdir(CURTIN_PACK_CONFIG_DIR): shutil.copytree( CURTIN_PACK_CONFIG_DIR, os.path.join(target_dir, 'configs')) util.write_file( os.path.join(target_dir, 'curtin-config'), json.dumps(config, indent=1, sort_keys=True, separators=(',', ': '))) for fpath in ('/etc/os-release', 
'/proc/cmdline', '/proc/partitions'): shutil.copy(fpath, target_dir) os.chmod(os.path.join(target_dir, os.path.basename(fpath)), 0o644) _out, _ = util.subp(['uname', '-a'], capture=True) util.write_file(os.path.join(target_dir, 'uname'), _out) lshw_out, _ = util.subp(['sudo', 'lshw'], capture=True) util.write_file(os.path.join(target_dir, 'lshw'), lshw_out) network_cmds = [ ['ip', '--oneline', 'address', 'list'], ['ip', '--oneline', '-6', 'address', 'list'], ['ip', '--oneline', 'route', 'list'], ['ip', '--oneline', '-6', 'route', 'list'], ] content = [] for cmd in network_cmds: content.append('=== {cmd} ==='.format(cmd=' '.join(cmd))) out, err = util.subp(cmd, combine_capture=True) content.append(out) util.write_file(os.path.join(target_dir, 'network'), '\n'.join(content)) def _redact_sensitive_information(target_dir, redact_values): """Redact sensitive information from any files found in target_dir. Perform inline replacement of any matching redact_values with in all files found in target_dir. @param target_dir: The directory in which to redact file content. @param redact_values: List of strings which need redacting from all files in target_dir. """ for root, _, files in os.walk(target_dir): for fname in files: fpath = os.path.join(root, fname) with open(fpath) as stream: content = stream.read() for redact_value in redact_values: content = re.sub(redact_value, '', content) util.write_file(fpath, content, mode=0o666) CMD_ARGUMENTS = ( ((('-o', '--output'), {'help': 'The output tarfile created from logs.', 'action': 'store', 'default': "curtin-logs.tar"}),) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, collect_logs_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/curthooks.py000066400000000000000000002173031415350476600204640ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import copy import glob import os import platform import re import sys import shutil import textwrap from curtin import config from curtin import block from curtin import distro from curtin.block import iscsi from curtin.block import lvm from curtin import net from curtin import futil from curtin.log import LOG from curtin import paths from curtin import swap from curtin import util from curtin import version as curtin_version from curtin.block import deps as bdeps from curtin.distro import DISTROS from curtin.net import deps as ndeps from curtin.reporter import events from curtin.commands import apply_net, apt_config from curtin.commands.install_grub import install_grub from curtin.url_helper import get_maas_version from . import populate_one_subcmd write_files = futil._legacy_write_files # LP: #1731709 CMD_ARGUMENTS = ( ((('-t', '--target'), {'help': 'operate on target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': None}), (('-c', '--config'), {'help': 'operate on config. 
default is env[CONFIG]', 'action': 'store', 'metavar': 'CONFIG', 'default': None}), ) ) KERNEL_MAPPING = { 'precise': { '3.2.0': '', '3.5.0': '-lts-quantal', '3.8.0': '-lts-raring', '3.11.0': '-lts-saucy', '3.13.0': '-lts-trusty', }, 'trusty': { '3.13.0': '', '3.16.0': '-lts-utopic', '3.19.0': '-lts-vivid', '4.2.0': '-lts-wily', '4.4.0': '-lts-xenial', }, 'xenial': { '4.3.0': '', # development release has 4.3, release will have 4.4 '4.4.0': '', } } CLOUD_INIT_YUM_REPO_TEMPLATE = """ [group_cloud-init-el-stable] name=Copr repo for el-stable owned by @cloud-init baseurl=https://copr-be.cloud.fedoraproject.org/results/@cloud-init/el-stable/epel-%s-$basearch/ type=rpm-md skip_if_unavailable=True gpgcheck=1 gpgkey=https://copr-be.cloud.fedoraproject.org/results/@cloud-init/el-stable/pubkey.gpg repo_gpgcheck=0 enabled=1 enabled_metadata=1 """ KERNEL_IMG_CONF_TEMPLATE = """# Kernel image management overrides # See kernel-img.conf(5) for details do_symlinks = yes do_bootloader = {bootloader} do_initrd = yes link_in_boot = {inboot} """ UEFI_BOOT_ENTRY_IS_NETWORK = r'.*(Network|PXE|NIC|Ethernet|LAN|IP4|IP6)+.*' def do_apt_config(cfg, target): cfg = apt_config.translate_old_apt_features(cfg) apt_cfg = cfg.get("apt") if apt_cfg is not None: LOG.info("curthooks handling apt to target %s with config %s", target, apt_cfg) apt_config.handle_apt(apt_cfg, target) else: LOG.info("No apt config provided, skipping") def disable_overlayroot(cfg, target): # cloud images come with overlayroot, but installed systems need disabled disable = cfg.get('disable_overlayroot', True) local_conf = os.path.sep.join([target, 'etc/overlayroot.local.conf']) if disable and os.path.exists(local_conf): LOG.debug("renaming %s to %s", local_conf, local_conf + ".old") shutil.move(local_conf, local_conf + ".old") def _update_initramfs_tools(machine=None): """ Return a list of binary names used to update an initramfs. On some architectures there are helper binaries that are also used and will be included in the list. """ tools = ['update-initramfs'] if not machine: machine = platform.machine() if machine == 's390x': tools.append('zipl') elif machine == 'aarch64': tools.append('flash-kernel') return tools def disable_update_initramfs(cfg, target, machine=None): """ Find update-initramfs tools in target and change their name. """ with util.ChrootableTarget(target) as in_chroot: for tool in _update_initramfs_tools(machine=machine): found = util.which(tool, target=target) if found: LOG.debug('Diverting original %s in target.', tool) rename = found + '.curtin-disabled' divert = ['dpkg-divert', '--add', '--rename', '--divert', rename, found] in_chroot.subp(divert) # create a dummy update-initramfs which just returns true; # this handles postinstall scripts which make invoke $tool # directly util.write_file(target + found, content="#!/bin/true\n# diverted by curtin", mode=0o755) def update_initramfs_is_disabled(target): """ Return a bool indicating if initramfs tooling is disabled. """ disabled = [] with util.ChrootableTarget(target) as in_chroot: out, _err = in_chroot.subp(['dpkg-divert', '--list'], capture=True) disabled = [divert for divert in out.splitlines() if divert.endswith('.curtin-disabled')] return len(disabled) > 0 def enable_update_initramfs(cfg, target, machine=None): """ Enable initramfs update tools by restoring their original name. 
""" if update_initramfs_is_disabled(target): with util.ChrootableTarget(target) as in_chroot: for tool in _update_initramfs_tools(machine=machine): LOG.info('Restoring %s in target for initrd updates.', tool) found = util.which(tool, target=target) if not found: continue # remove the diverted util.del_file(target + found) # un-divert and restore original file in_chroot.subp( ['dpkg-divert', '--rename', '--remove', found]) def setup_zipl(cfg, target): if platform.machine() != 's390x': return # assuming that below gives the "/" rootfs target_dev = block.get_devices_for_mp(target)[0] # get preferred device path, according to https://wiki.ubuntu.com/FSTAB from curtin.commands.block_meta import get_volume_spec root_arg = get_volume_spec(target_dev) if not root_arg: msg = "Failed to identify root= for %s at %s." % (target, target_dev) LOG.warn(msg) raise ValueError(msg) zipl_conf = """ # This has been modified by the MAAS curtin installer [defaultboot] default=ubuntu [ubuntu] target = /boot image = /boot/vmlinuz ramdisk = /boot/initrd.img parameters = root=%s """ % root_arg futil.write_files( files={"zipl_conf": {"path": "/etc/zipl.conf", "content": zipl_conf}}, base_dir=target) def run_zipl(cfg, target): if platform.machine() != 's390x': return with util.ChrootableTarget(target) as in_chroot: in_chroot.subp(['zipl']) def chzdev_persist_active_online(cfg, target): """Use chzdev to export active|online zdevices into target.""" if platform.machine() != 's390x': return LOG.info('Persisting zdevice configuration in target') target_etc = paths.target_path(target, 'etc') (chzdev_conf, _) = chzdev_export(active=True, online=True) chzdev_persistent = chzdev_prepare_for_import(chzdev_conf) chzdev_import(data=chzdev_persistent, persistent=True, noroot=True, base={'/etc': target_etc}) def chzdev_export(active=True, online=True, persistent=False, export_file=None): """Use chzdev to export zdevice configuration.""" if not export_file: # write to stdout export_file = "-" cmd = ['chzdev', '--quiet'] if active: cmd.extend(['--active']) if online: cmd.extend(['--online']) if persistent: cmd.extend(['--persistent']) cmd.extend(['--export', export_file]) return util.subp(cmd, capture=True) def chzdev_import(data=None, persistent=True, noroot=True, base=None, import_file=None): """Use chzdev to import zdevice configuration.""" if not any([data, import_file]): raise ValueError('Must provide data or input_file value.') if all([data, import_file]): raise ValueError('Cannot provide both data and input_file value.') if not import_file: import_file = "-" cmd = ['chzdev', '--quiet'] if persistent: cmd.extend(['--persistent']) if noroot: cmd.extend(['--no-root-update']) if base: if type(base) == dict: cmd.extend( ['--base'] + ["%s=%s" % (k, v) for k, v in base.items()]) else: cmd.extend(['--base', base]) if data: data = data.encode() cmd.extend(['--import', import_file]) return util.subp(cmd, data=data, capture=True) def chzdev_prepare_for_import(chzdev_conf): """ Transform chzdev --export output into an importable form by replacing 'active' with 'persistent' and dropping any options set to 'n/a' which chzdev --import cannot handle. 
:param chzdev_conf: string output from calling chzdev --export :returns: string of transformed configuration """ if not chzdev_conf or not isinstance(chzdev_conf, util.string_types): raise ValueError("Input value invalid: '%s'" % chzdev_conf) # transform [active] -> [persistent] and drop .*=n/a\n transform = re.compile(r'^\[active|^.*=n/a\n', re.MULTILINE) def replacements(match): if '[active' in match: return '[persistent' if '=n/a' in match: return '' # Note, we add a trailing newline match the final .*=n/a\n and trim # any trailing newlines after transforming if '=n/a' in chzdev_conf: chzdev_conf += '\n' return transform.sub(lambda match: replacements(match.group(0)), chzdev_conf).strip() def get_flash_kernel_pkgs(arch=None, uefi=None): if arch is None: arch = distro.get_architecture() if uefi is None: uefi = util.is_uefi_bootable() if uefi: return None if not arch.startswith('arm'): return None try: fk_packages, _ = util.subp( ['list-flash-kernel-packages'], capture=True) return fk_packages except util.ProcessExecutionError: # Ignore errors return None def setup_kernel_img_conf(target): # kernel-img.conf only needed on release prior to 19.10 lsb_info = distro.lsb_release(target=target) if tuple(map(int, lsb_info['release'].split('.'))) >= (19, 10): return kernel_img_conf_vars = { 'bootloader': 'no', 'inboot': 'yes', } # see zipl-installer if platform.machine() == 's390x': kernel_img_conf_vars['bootloader'] = 'yes' # see base-installer/debian/templates-arch if util.get_platform_arch() in ['amd64', 'i386']: kernel_img_conf_vars['inboot'] = 'no' kernel_img_conf_path = os.path.sep.join([target, '/etc/kernel-img.conf']) content = KERNEL_IMG_CONF_TEMPLATE.format(**kernel_img_conf_vars) util.write_file(kernel_img_conf_path, content=content) def install_kernel(cfg, target): kernel_cfg = cfg.get('kernel', {'package': None, 'fallback-package': "linux-generic", 'mapping': {}}) if kernel_cfg is not None: kernel_package = kernel_cfg.get('package') kernel_fallback = kernel_cfg.get('fallback-package') else: kernel_package = None kernel_fallback = None mapping = copy.deepcopy(KERNEL_MAPPING) config.merge_config(mapping, kernel_cfg.get('mapping', {})) # Machines using flash-kernel may need additional dependencies installed # before running. Run those checks in the ephemeral environment so the # target only has required packages installed. See LP:1640519 fk_packages = get_flash_kernel_pkgs() if fk_packages: distro.install_packages(fk_packages.split(), target=target) if kernel_package: distro.install_packages([kernel_package], target=target) return # uname[2] is kernel name (ie: 3.16.0-7-generic) # version gets X.Y.Z, flavor gets anything after second '-'. kernel = os.uname()[2] codename, _ = util.subp(['lsb_release', '--codename', '--short'], capture=True, target=target) codename = codename.strip() version, abi, flavor = kernel.split('-', 2) try: map_suffix = mapping[codename][version] except KeyError: LOG.warn("Couldn't detect kernel package to install for %s." % kernel) if kernel_fallback is not None: distro.install_packages([kernel_fallback], target=target) return package = "linux-{flavor}{map_suffix}".format( flavor=flavor, map_suffix=map_suffix) if distro.has_pkg_available(package, target): if distro.has_pkg_installed(package, target): LOG.debug("Kernel package '%s' already installed", package) else: LOG.debug("installing kernel package '%s'", package) distro.install_packages([package], target=target) else: if kernel_fallback is not None: LOG.info("Kernel package '%s' not available. 
" "Installing fallback package '%s'.", package, kernel_fallback) distro.install_packages([kernel_fallback], target=target) else: LOG.warn("Kernel package '%s' not available and no fallback." " System may not boot.", package) def uefi_remove_old_loaders(grubcfg, target): """Removes the old UEFI loaders from efibootmgr.""" efi_output = util.get_efibootmgr(target) LOG.debug('UEFI remove old olders efi output:\n%s', efi_output) current_uefi_boot = efi_output.get('current', None) old_efi_entries = { entry: info for entry, info in efi_output['entries'].items() if re.match(r'^.*File\(\\EFI.*$', info['path']) } old_efi_entries.pop(current_uefi_boot, None) remove_old_loaders = grubcfg.get('remove_old_uefi_loaders', True) if old_efi_entries: if remove_old_loaders: with util.ChrootableTarget(target) as in_chroot: for entry, info in old_efi_entries.items(): LOG.debug("removing old UEFI entry: %s" % info['name']) in_chroot.subp( ['efibootmgr', '-B', '-b', entry], capture=True) else: LOG.debug( "Skipped removing %d old UEFI entrie%s.", len(old_efi_entries), '' if len(old_efi_entries) == 1 else 's') for info in old_efi_entries.values(): LOG.debug( "UEFI entry '%s' might no longer exist and " "should be removed.", info['name']) def uefi_boot_entry_is_network(boot_entry_name): """ Return boolean if boot entry name looks like a known network entry. """ return re.match(UEFI_BOOT_ENTRY_IS_NETWORK, boot_entry_name, re.IGNORECASE) is not None def _reorder_new_entry(boot_order, efi_output, efi_orig=None, variant=None): """ Reorder the EFI boot menu as follows 1. All PXE/Network boot entries 2. The newly installed entry variant (ubuntu/centos) 3. The other items in the boot order that are not in [1, 2] returns a list of bootnum strings """ if not boot_order: raise RuntimeError('boot_order is not a list') if efi_orig is None: raise RuntimeError('Missing efi_orig boot dictionary') if variant is None: variant = "" net_boot = [] other = [] target = [] LOG.debug("UEFI previous boot order: %s", efi_orig['order']) LOG.debug("UEFI current boot order: %s", boot_order) new_entries = list(set(boot_order).difference(set(efi_orig['order']))) if new_entries: LOG.debug("UEFI Found new boot entries: %s", new_entries) LOG.debug('UEFI Looking for installed entry variant=%s', variant.lower()) for bootnum in boot_order: entry = efi_output['entries'][bootnum] if uefi_boot_entry_is_network(entry['name']): net_boot.append(bootnum) else: if entry['name'].lower() == variant.lower(): target.append(bootnum) else: other.append(bootnum) if net_boot: LOG.debug("UEFI found netboot entries: %s", net_boot) if other: LOG.debug("UEFI found other entries: %s", other) if target: LOG.debug("UEFI found target entry: %s", target) else: LOG.debug("UEFI Did not find an entry with variant=%s", variant.lower()) new_order = net_boot + target + other if boot_order == new_order: LOG.debug("UEFI Current and Previous bootorders match") return new_order def uefi_reorder_loaders(grubcfg, target, efi_orig=None, variant=None): """Reorders the UEFI BootOrder to place BootCurrent first. The specifically doesn't try to do to much. The order in which grub places a new EFI loader is up to grub. This only moves the BootCurrent to the front of the BootOrder. In some systems, BootCurrent may not be set/present. In this case curtin will attempt to place the new boot entry created when grub is installed after the the previous first entry (before we installed grub). 
""" if grubcfg.get('reorder_uefi', True): efi_output = util.get_efibootmgr(target=target) LOG.debug('UEFI efibootmgr output after install:\n%s', efi_output) currently_booted = efi_output.get('current', None) boot_order = efi_output.get('order', []) new_boot_order = None force_fallback_reorder = config.value_as_boolean( grubcfg.get('reorder_uefi_force_fallback', False)) if currently_booted and force_fallback_reorder is False: if currently_booted in boot_order: boot_order.remove(currently_booted) boot_order = [currently_booted] + boot_order new_boot_order = ','.join(boot_order) LOG.debug( "Setting currently booted %s as the first " "UEFI loader.", currently_booted) else: reason = ( "config 'reorder_uefi_force_fallback' is True" if force_fallback_reorder else "missing 'BootCurrent' value") LOG.debug("Using fallback UEFI reordering: " + reason) if len(boot_order) < 2: LOG.debug( 'UEFI BootOrder has less than 2 entries, cannot reorder') return # look at efi entries before we added one to find the new addition new_order = _reorder_new_entry( copy.deepcopy(boot_order), efi_output, efi_orig, variant) if new_order != boot_order: new_boot_order = ','.join(new_order) else: LOG.debug("UEFI No changes to boot order.") if new_boot_order: LOG.debug( "New UEFI boot order: %s", new_boot_order) with util.ChrootableTarget(target) as in_chroot: in_chroot.subp(['efibootmgr', '-o', new_boot_order]) else: LOG.debug("Skipped reordering of UEFI boot methods.") LOG.debug("Currently booted UEFI loader might no longer boot.") def uefi_remove_duplicate_entries(grubcfg, target, to_remove=None): if not grubcfg.get('remove_duplicate_entries', True): LOG.debug("Skipped removing duplicate UEFI boot entries per config.") return if to_remove is None: to_remove = uefi_find_duplicate_entries(grubcfg, target) # check so we don't run ChrootableTarget code unless we have things to do if to_remove: with util.ChrootableTarget(target) as in_chroot: for bootnum, entry in to_remove: LOG.debug('Removing duplicate EFI entry (%s, %s)', bootnum, entry) in_chroot.subp(['efibootmgr', '--bootnum=%s' % bootnum, '--delete-bootnum']) def uefi_find_duplicate_entries(grubcfg, target, efi_output=None): seen = set() to_remove = [] if efi_output is None: efi_output = util.get_efibootmgr(target=target) entries = efi_output.get('entries', {}) current_bootnum = efi_output.get('current', None) # adding BootCurrent to seen first allows us to remove any other duplicate # entry of BootCurrent. if current_bootnum: seen.add(tuple(entries[current_bootnum].items())) for bootnum in sorted(entries): if bootnum == current_bootnum: continue entry = entries[bootnum] t = tuple(entry.items()) if t not in seen: seen.add(t) else: to_remove.append((bootnum, entry)) return to_remove def _debconf_multiselect(package, variable, choices): return "{package} {variable} multiselect {choices}".format( package=package, variable=variable, choices=", ".join(choices)) def configure_grub_debconf(boot_devices, target, uefi): """Configure grub debconf variables in target. 
Non-UEFI: grub-pc grub-pc/install_devices multiselect d1, d2, d3 UEFI: grub-pc grub-efi/install_devices multiselect d1 """ LOG.debug('Generating grub debconf_selections for devices=%s uefi=%s', boot_devices, uefi) byid_links = [] for dev in boot_devices: link = block.disk_to_byid_path(dev) byid_links.extend([link] if link else [dev]) selections = [] if uefi: selections.append(_debconf_multiselect( 'grub-pc', 'grub-efi/install_devices', byid_links)) else: selections.append(_debconf_multiselect( 'grub-pc', 'grub-pc/install_devices', byid_links)) cfg = {'debconf_selections': {'grub': "\n".join(selections)}} LOG.info('Applying grub debconf_selections config:\n%s', cfg) apt_config.apply_debconf_selections(cfg, target) return def uefi_find_grub_device_ids(sconfig): """ Scan the provided storage config for device_ids on which we will install grub. An order of precendence is required due to legacy configurations which set grub_device on the disk but not on the ESP config itself. We prefer the latter as this allows a disk to contain more than on ESP and choose to install grub to a subset. We always look for the 'primary' ESP which is signified by being mounted at /boot/efi (only one can be mounted). 1. ESPs with grub_device: true are the preferred way to find the specific set of devices on which to install grub 2. ESPs whose parent disk has grub_device: true The primary ESP is the first element of the result if any devices are found. returns a list of storage-config ids on which grub will be installed. """ # Only one EFI system partition can be mounted, but backup EFI # partitions may exist. Find all EFI partitions and determine # the primary. grub_device_ids = [] primary_esp = None grub_partitions = [] esp_partitions = [] for item_id, item in sconfig.items(): if item['type'] == 'partition': if item.get('grub_device'): grub_partitions.append(item_id) continue elif item.get('flag') == 'boot': esp_partitions.append(item_id) continue if item['type'] == 'mount' and item.get('path') == '/boot/efi': if primary_esp: LOG.debug('Ignoring duplicate mounted primary ESP: %s', item_id) continue primary_esp = sconfig[item['device']]['volume'] if sconfig[primary_esp]['type'] == 'partition': LOG.debug("Found primary UEFI ESP: %s", primary_esp) else: LOG.warn('Found primary ESP not on a partition: %s', item) if primary_esp is None: raise RuntimeError('Failed to find primary ESP mounted at /boot/efi') grub_device_ids = [primary_esp] # prefer grub_device: true partitions if len(grub_partitions): if primary_esp in grub_partitions: grub_partitions.remove(primary_esp) # insert the primary esp as first element grub_device_ids.extend(grub_partitions) # look at all esp entries, check if parent disk is grub_device: true elif len(esp_partitions): if primary_esp in esp_partitions: esp_partitions.remove(primary_esp) for esp_id in esp_partitions: esp_disk = sconfig[sconfig[esp_id]['device']] if esp_disk.get('grub_device'): grub_device_ids.append(esp_id) LOG.debug('Found UEFI ESP(s) for grub install: %s', grub_device_ids) return grub_device_ids def setup_grub(cfg, target, osfamily=DISTROS.debian, variant=None): # target is the path to the mounted filesystem # FIXME: these methods need moving to curtin.block # and using them from there rather than commands.block_meta from curtin.commands.block_meta import (extract_storage_ordered_dict, get_path_to_storage_volume) grubcfg = cfg.get('grub', {}) # copy legacy top level name if 'grub_install_devices' in cfg and 'install_devices' not in grubcfg: grubcfg['install_devices'] = 
cfg['grub_install_devices'] LOG.debug("setup grub on target %s", target) # if there is storage config, look for devices tagged with 'grub_device' storage_cfg_odict = None try: storage_cfg_odict = extract_storage_ordered_dict(cfg) except ValueError: pass uefi_bootable = util.is_uefi_bootable() if storage_cfg_odict: storage_grub_devices = [] if uefi_bootable: storage_grub_devices.extend([ get_path_to_storage_volume(dev_id, storage_cfg_odict) for dev_id in uefi_find_grub_device_ids(storage_cfg_odict)]) else: for item_id, item in storage_cfg_odict.items(): if not item.get('grub_device'): continue LOG.debug("checking: %s", item) storage_grub_devices.append( get_path_to_storage_volume(item_id, storage_cfg_odict)) if len(storage_grub_devices) > 0: if len(grubcfg.get('install_devices', [])): LOG.warn("Storage Config grub device config takes precedence " "over grub 'install_devices' value, ignoring: %s", grubcfg['install_devices']) grubcfg['install_devices'] = storage_grub_devices LOG.debug("install_devices: %s", grubcfg.get('install_devices')) if 'install_devices' in grubcfg: instdevs = grubcfg.get('install_devices') if isinstance(instdevs, str): instdevs = [instdevs] if instdevs is None: LOG.debug("grub installation disabled by config") else: # If there were no install_devices found then we try to do the right # thing. That right thing is basically installing on all block # devices that are mounted. On powerpc, though it means finding PrEP # partitions. devs = block.get_devices_for_mp(target) blockdevs = set() for maybepart in devs: try: (blockdev, part) = block.get_blockdev_for_partition(maybepart) blockdevs.add(blockdev) except ValueError: # if there is no syspath for this device such as a lvm # or raid device, then a ValueError is raised here. LOG.debug("failed to find block device for %s", maybepart) if platform.machine().startswith("ppc64"): # assume we want partitions that are 4100 (PReP). The snippet here # just prints the partition number partitions of that type. shnip = textwrap.dedent(""" export LANG=C; for d in "$@"; do sgdisk "$d" --print | awk '$6 == prep { print d $1 }' "d=$d" prep=4100 done """) try: out, err = util.subp( ['sh', '-c', shnip, '--'] + list(blockdevs), capture=True) instdevs = str(out).splitlines() if not instdevs: LOG.warn("No power grub target partitions found!") instdevs = None except util.ProcessExecutionError as e: LOG.warn("Failed to find power grub partitions: %s", e) instdevs = None else: instdevs = list(blockdevs) if instdevs: instdevs = [block.get_dev_name_entry(i)[1] for i in instdevs] if osfamily == DISTROS.debian: configure_grub_debconf(instdevs, target, uefi_bootable) else: instdevs = ["none"] update_nvram = grubcfg.get('update_nvram', True) if uefi_bootable and update_nvram: efi_orig_output = util.get_efibootmgr(target) uefi_remove_old_loaders(grubcfg, target) install_grub(instdevs, target, uefi=uefi_bootable, grubcfg=grubcfg) if uefi_bootable and update_nvram: uefi_reorder_loaders(grubcfg, target, efi_orig_output, variant) uefi_remove_duplicate_entries(grubcfg, target) def update_initramfs(target=None, all_kernels=False): """ Invoke update-initramfs in the target path. Look up the installed kernel versions in the target to ensure that an initrd get created or updated as needed. This allows curtin to invoke update-initramfs exactly once at the end of the install instead of multiple calls. 
""" if update_initramfs_is_disabled(target): return # We keep the all_kernels flag for callers, the implementation # now will operate correctly on all kernels present in the image # which is almost always exactly one. # # Ideally curtin should be able to use update-initramfs -k all # however, update-initramfs expects to be able to find out which # versions of kernels are installed by using values from the # kernel package invoking update-initramfs -c . # With update-initramfs diverted, nothing captures the kernel # version strings in the place where update-initramfs expects # to find this information. Instead, curtin will examine # /boot to see what kernels and initramfs are installed and # either create or update as needed. # # This loop below will examine the contents of target's # /boot and pattern match for kernel files. On Ubuntu this # is in the form of /boot/vmlinu[xz]-. # # For each kernel, we extract the version string and then # construct the name of of the initrd file that *would* # have been created when the kernel package was installed # if curtin had not diverted update-initramfs to prevent # duplicate initrd creation. # # if the initrd file exists, then we only need to invoke # update-initramfs's -u (update) method. If the file does # not exist, then we need to run the -c (create) method. boot = paths.target_path(target, 'boot') for kernel in sorted(glob.glob(boot + '/vmlinu*-*')): kfile = os.path.basename(kernel) # handle vmlinux or vmlinuz kprefix = kfile.split('-')[0] version = kfile.replace(kprefix + '-', '') initrd = kernel.replace(kprefix, 'initrd.img') # -u == update, -c == create mode = '-u' if os.path.exists(initrd) else '-c' cmd = ['update-initramfs', mode, '-k', version] with util.ChrootableTarget(target) as in_chroot: in_chroot.subp(cmd) if not os.path.exists(initrd): files = os.listdir(target + '/boot') LOG.debug('Failed to find initrd %s', initrd) LOG.debug('Files in target /boot: %s', files) def copy_fstab(fstab, target): if not fstab: LOG.warn("fstab variable not in state, not copying fstab") return content = util.load_file(fstab) header = distro.fstab_header() util.write_file(os.path.sep.join([target, 'etc/fstab']), content="%s\n%s" % (header, content)) def copy_crypttab(crypttab, target): if not crypttab: LOG.warn("crypttab config must be specified, not copying") return shutil.copy(crypttab, os.path.sep.join([target, 'etc/crypttab'])) def copy_iscsi_conf(nodes_dir, target, target_nodes_dir='etc/iscsi/nodes'): if not nodes_dir: LOG.warn("nodes directory must be specified, not copying") return LOG.info("copying iscsi nodes database into target") tdir = os.path.sep.join([target, target_nodes_dir]) if not os.path.exists(tdir): shutil.copytree(nodes_dir, tdir) else: # if /etc/iscsi/nodes exists, copy dirs underneath for ndir in os.listdir(nodes_dir): source_dir = os.path.join(nodes_dir, ndir) target_dir = os.path.join(tdir, ndir) shutil.copytree(source_dir, target_dir) def copy_mdadm_conf(mdadm_conf, target): if not mdadm_conf: LOG.warn("mdadm config must be specified, not copying") return LOG.info("copying mdadm.conf into target") shutil.copy(mdadm_conf, os.path.sep.join([target, 'etc/mdadm/mdadm.conf'])) def copy_zpool_cache(zpool_cache, target): if not zpool_cache: LOG.warn("zpool_cache path must be specified, not copying") return shutil.copy(zpool_cache, os.path.sep.join([target, 'etc/zfs'])) def copy_zkey_repository(zkey_repository, target, target_repo='etc/zkey/repository'): if not zkey_repository: LOG.warn("zkey repository path must be specified, not 
copying") return tdir = os.path.sep.join([target, target_repo]) if not os.path.exists(tdir): util.ensure_dir(tdir) files_copied = [] for src in os.listdir(zkey_repository): source_path = os.path.join(zkey_repository, src) target_path = os.path.join(tdir, src) if not os.path.exists(target_path): shutil.copy2(source_path, target_path) files_copied.append(target_path) LOG.debug('Imported zkey repo %s with files: %s', zkey_repository, files_copied) def apply_networking(target, state): netconf = state.get('network_config') def is_valid_src(infile): with open(infile, 'r') as fp: content = fp.read() if len(content.split('\n')) > 1: return True return False if is_valid_src(netconf): LOG.info("applying network_config") apply_net.apply_net(target, network_state=None, network_config=netconf) else: LOG.debug("copying interfaces") copy_interfaces(state.get('interfaces'), target) def copy_interfaces(interfaces, target): if not interfaces or not os.path.exists(interfaces): LOG.warn("no interfaces file to copy!") return eni = os.path.sep.join([target, 'etc/network/interfaces']) shutil.copy(interfaces, eni) def copy_dname_rules(rules_d, target): if not rules_d: LOG.warn("no udev rules directory to copy") return target_rules_dir = paths.target_path(target, "etc/udev/rules.d") for rule in os.listdir(rules_d): target_file = os.path.join(target_rules_dir, rule) shutil.copy(os.path.join(rules_d, rule), target_file) def restore_dist_interfaces(cfg, target): # cloud images have a link of /etc/network/interfaces into /run eni = os.path.sep.join([target, 'etc/network/interfaces']) if not cfg.get('restore_dist_interfaces', True): return rp = os.path.realpath(eni) if (os.path.exists(eni + ".dist") and (rp.startswith("/run") or rp.startswith(target + "/run"))): LOG.debug("restoring dist interfaces, existing link pointed to /run") shutil.move(eni, eni + ".old") shutil.move(eni + ".dist", eni) def add_swap(cfg, target, fstab): # add swap file per cfg to filesystem root at target. update fstab. # # swap: # filename: 'swap.img', # size: None # (or 1G) # maxsize: 2G if 'swap' in cfg and not cfg.get('swap'): LOG.debug("disabling 'add_swap' due to config") return swapcfg = cfg.get('swap', {}) fname = swapcfg.get('filename', None) size = swapcfg.get('size', None) maxsize = swapcfg.get('maxsize', None) force = swapcfg.get('force', False) if size: size = util.human2bytes(str(size)) if maxsize: maxsize = util.human2bytes(str(maxsize)) swap.setup_swapfile(target=target, fstab=fstab, swapfile=fname, size=size, maxsize=maxsize, force=force) def detect_and_handle_multipath(cfg, target, osfamily=DISTROS.debian): DEFAULT_MULTIPATH_PACKAGES = { DISTROS.debian: ['multipath-tools-boot'], DISTROS.redhat: ['device-mapper-multipath'], } if osfamily not in DEFAULT_MULTIPATH_PACKAGES: raise ValueError( 'No multipath package mapping for distro: %s' % osfamily) mpcfg = cfg.get('multipath', {}) mpmode = mpcfg.get('mode', 'auto') mppkgs = mpcfg.get('packages', DEFAULT_MULTIPATH_PACKAGES.get(osfamily)) mpbindings = mpcfg.get('overwrite_bindings', True) if isinstance(mppkgs, str): mppkgs = [mppkgs] if mpmode == 'disabled': return mp_device = block.detect_multipath(target) LOG.info('Multipath detection found: %s', mp_device) if mpmode == 'auto' and not mp_device: return LOG.info("Detected multipath device. 
Installing support via %s", mppkgs) needed = [pkg for pkg in mppkgs if pkg not in distro.get_installed_packages(target)] if needed: distro.install_packages(needed, target=target, osfamily=osfamily) replace_spaces = True if osfamily == DISTROS.debian: try: # check in-target version pkg_ver = distro.get_package_version('multipath-tools', target=target) LOG.debug("get_package_version:\n%s", pkg_ver) LOG.debug("multipath version is %s (major=%s minor=%s micro=%s)", pkg_ver['semantic_version'], pkg_ver['major'], pkg_ver['minor'], pkg_ver['micro']) # multipath-tools versions < 0.5.0 do _NOT_ # want whitespace replaced i.e. 0.4.X in Trusty. if pkg_ver['semantic_version'] < 500: replace_spaces = False except Exception as e: LOG.warn("failed reading multipath-tools version, " "assuming it wants no spaces in wwids: %s", e) multipath_cfg_path = os.path.sep.join([target, '/etc/multipath.conf']) multipath_bind_path = os.path.sep.join([target, '/etc/multipath/bindings']) # We don't want to overwrite multipath.conf file provided by the image. if not os.path.isfile(multipath_cfg_path): # Without user_friendly_names option enabled system fails to boot # if any of the disks has spaces in its name. Package multipath-tools # has bug opened for this issue LP: #1432062 but it was not fixed yet. multipath_cfg_content = '\n'.join( ['# This file was created by curtin while installing the system.', 'defaults {', ' user_friendly_names yes', '}', '']) util.write_file(multipath_cfg_path, content=multipath_cfg_content) if mpbindings or not os.path.isfile(multipath_bind_path): # we do assume that get_devices_for_mp()[0] is / target_dev = block.get_devices_for_mp(target)[0] wwid = block.get_scsi_wwid(target_dev, replace_whitespace=replace_spaces) blockdev, partno = block.get_blockdev_for_partition(target_dev) mpname = "mpath0" mp_supported = block.multipath.multipath_supported() if mp_supported: mpname = block.multipath.get_mpath_id_from_device(mp_device) if not mpname: LOG.warning('Failed to determine multipath device name, using' ' fallback name "mpatha".') mpname = 'mpatha' grub_dev = "/dev/mapper/" + mpname if partno is not None: if osfamily == DISTROS.debian: grub_dev += "-part%s" % partno elif osfamily == DISTROS.redhat: grub_dev += "p%s" % partno else: raise ValueError( 'Unknown grub_dev mapping for distro: %s' % osfamily) LOG.debug("configuring multipath for root=%s wwid=%s mpname=%s", grub_dev, wwid, mpname) # use host bindings in target if it exists if mp_supported and os.path.exists('/etc/multipath/bindings'): if os.path.exists(multipath_bind_path): util.del_file(multipath_bind_path) util.ensure_dir(os.path.dirname(multipath_bind_path)) shutil.copy('/etc/multipath/bindings', multipath_bind_path) else: # bindings map the wwid of the disk to an mpath name, if we have # a partition extract just the parent mpath_id, otherwise we'll # get /dev/mapper/mpatha-part1-part1 entries in dm. 
if '-part' in mpname: mpath_id, mpath_part_num = mpname.split("-part") else: mpath_id = mpname multipath_bind_content = '\n'.join([ ('# This file was created by curtin while ' 'installing the system.'), "%s %s" % (mpath_id, wwid), '# End of content generated by curtin.', '# Everything below is maintained by multipath subsystem.', '']) util.write_file(multipath_bind_path, content=multipath_bind_content) if osfamily == DISTROS.debian: grub_cfg = os.path.sep.join( [target, '/etc/default/grub.d/50-curtin-multipath.cfg']) omode = 'w' elif osfamily == DISTROS.redhat: grub_cfg = os.path.sep.join([target, '/etc/default/grub']) omode = 'a' else: raise ValueError( 'Unknown grub_cfg mapping for distro: %s' % osfamily) if mp_supported: # if root is on lvm, emit a multipath filter to lvm lvmfilter = lvm.generate_multipath_dm_uuid_filter() # lvm.conf device section indents config by 8 spaces indent = ' ' * 8 mpfilter = '\n'.join([ indent + ('# Modified by curtin for multipath ' 'device %s' % (mpname)), indent + lvmfilter]) lvmconf = paths.target_path(target, '/etc/lvm/lvm.conf') orig_content = util.load_file(lvmconf) devices_match = re.search(r'devices\ {', orig_content, re.MULTILINE) if devices_match: LOG.debug('Adding multipath filter (%s) to lvm.conf', mpfilter) shutil.move(lvmconf, lvmconf + '.orig-curtin') index = devices_match.end() new_content = ( orig_content[:index] + '\n' + mpfilter + '\n' + orig_content[index + 1:]) util.write_file(lvmconf, new_content) else: # TODO: fix up dnames without multipath available on host msg = '\n'.join([ '# Written by curtin for multipath device %s %s' % (mpname, wwid), 'GRUB_DEVICE=%s' % grub_dev, 'GRUB_DISABLE_LINUX_UUID=true', '']) util.write_file(grub_cfg, omode=omode, content=msg) else: LOG.warn("Not sure how this will boot") if osfamily == DISTROS.debian: # Initrams needs to be updated to include /etc/multipath.cfg # and /etc/multipath/bindings files. update_initramfs(target, all_kernels=True) elif osfamily == DISTROS.redhat: # Write out initramfs/dracut config for multipath dracut_conf_multipath = os.path.sep.join( [target, '/etc/dracut.conf.d/10-curtin-multipath.conf']) msg = '\n'.join([ '# Written by curtin for multipath device wwid "%s"' % wwid, 'force_drivers+=" dm-multipath "', 'add_dracutmodules+=" multipath"', 'install_items+="/etc/multipath.conf /etc/multipath/bindings"', '']) util.write_file(dracut_conf_multipath, content=msg) else: raise ValueError( 'Unknown initramfs mapping for distro: %s' % osfamily) def detect_required_packages(cfg, osfamily=DISTROS.debian): """ detect packages that will be required in-target by custom config items """ mapping = { 'storage': bdeps.detect_required_packages_mapping(osfamily=osfamily), 'network': ndeps.detect_required_packages_mapping(osfamily=osfamily), } needed_packages = [] for cfg_type, cfg_map in mapping.items(): # skip missing or invalid config items, configs may # only have network or storage, not always both cfg_type_value = cfg.get(cfg_type) if (not isinstance(cfg_type_value, dict) or cfg_type_value.get('config') == 'disabled'): continue cfg_version = cfg[cfg_type].get('version') if not isinstance(cfg_version, int) or cfg_version not in cfg_map: msg = ('Supplied configuration version "%s", for config type' '"%s" is not present in the known mapping.' 
% (cfg_version, cfg_type)) raise ValueError(msg) mapped_config = cfg_map[cfg_version] found_reqs = mapped_config['handler'](cfg, mapped_config['mapping']) needed_packages.extend(found_reqs) LOG.debug('Curtin config dependencies requires additional packages: %s', needed_packages) return needed_packages def install_missing_packages(cfg, target, osfamily=DISTROS.debian): ''' describe which operation types will require specific packages 'custom_config_key': { 'pkg1': ['op_name_1', 'op_name_2', ...] } ''' installed_packages = distro.get_installed_packages(target) needed_packages = set([pkg for pkg in detect_required_packages(cfg, osfamily=osfamily) if pkg not in installed_packages]) arch_packages = { 's390x': [('s390-tools', 'zipl')], } for pkg, cmd in arch_packages.get(platform.machine(), []): if not util.which(cmd, target=target): if pkg not in needed_packages: needed_packages.add(pkg) # UEFI requires grub-efi-{arch}. If a signed version of that package # exists then it will be installed. if util.is_uefi_bootable(): uefi_pkgs = ['efibootmgr'] if osfamily == DISTROS.redhat: arch = distro.get_architecture() if arch == 'amd64': # centos/redhat doesn't support 32-bit? if 'grub2-efi-x64-modules' not in installed_packages: # Previously Curtin only supported unsigned GRUB due to an # upstream bug. By default lp:maas-image-builder and # packer-maas have grub preinstalled. If # grub2-efi-x64-modules is already in the image use # unsigned grub so the install doesn't require Internet # access. If grub is missing use to signed version. uefi_pkgs.extend(['grub2-efi-x64', 'shim-x64']) if arch == 'arm64': if 'grub2-efi-aa64-modules' not in installed_packages: # Packages required for arm64 grub installer uefi_pkgs.extend(['grub2-efi-aa64-modules', 'grub2-efi-aa64', 'shim-aa64']) elif osfamily == DISTROS.debian: arch = distro.get_architecture() if arch == 'i386': arch = 'ia32' uefi_pkgs.append('grub-efi-%s' % arch) # Architecture might support a signed UEFI loader uefi_pkg_signed = 'grub-efi-%s-signed' % arch if distro.has_pkg_available(uefi_pkg_signed): uefi_pkgs.append(uefi_pkg_signed) # amd64 and arm64 (since bionic) has shim-signed for # SecureBoot support if distro.has_pkg_available("shim-signed"): uefi_pkgs.append("shim-signed") else: raise ValueError('Unknown grub2 package list for distro: %s' % osfamily) needed_packages.update([pkg for pkg in uefi_pkgs if pkg not in installed_packages]) # Filter out ifupdown network packages on netplan enabled systems. has_netplan = ('nplan' in installed_packages or 'netplan.io' in installed_packages) if 'ifupdown' not in installed_packages and has_netplan: drops = set(['bridge-utils', 'ifenslave', 'vlan']) if needed_packages.union(drops): LOG.debug("Skipping install of %s. Not needed on netplan system.", needed_packages.union(drops)) needed_packages = needed_packages.difference(drops) if needed_packages: to_add = list(sorted(needed_packages)) state = util.load_command_environment() with events.ReportEventStack( name=state.get('report_stack_prefix'), reporting_enabled=True, level="INFO", description="Installing packages on target system: " + str(to_add)): distro.install_packages(to_add, target=target, osfamily=osfamily) def system_upgrade(cfg, target, osfamily=DISTROS.debian): """run system-upgrade (apt-get dist-upgrade) or other in target. 
config: system_upgrade: enabled: False """ mycfg = {'system_upgrade': {'enabled': False}} config.merge_config(mycfg, cfg) mycfg = mycfg.get('system_upgrade') if not isinstance(mycfg, dict): LOG.debug("system_upgrade disabled by config. entry not a dict.") return if not config.value_as_boolean(mycfg.get('enabled', True)): LOG.debug("system_upgrade disabled by config.") return distro.system_upgrade(target=target, osfamily=osfamily) def inject_pollinate_user_agent_config(ua_cfg, target): """Write out user-agent config dictionary to pollinate's user-agent file (/etc/pollinate/add-user-agent) in target. """ if not isinstance(ua_cfg, dict): raise ValueError('ua_cfg is not a dictionary: %s', ua_cfg) pollinate_cfg = paths.target_path(target, '/etc/pollinate/add-user-agent') comment = "# written by curtin" content = "\n".join(["%s/%s %s" % (ua_key, ua_val, comment) for ua_key, ua_val in ua_cfg.items()]) + "\n" util.write_file(pollinate_cfg, content=content) def handle_pollinate_user_agent(cfg, target): """Configure the pollinate user-agent if provided configuration pollinate: user_agent: false # disable writing out a user-agent string # custom agent key/value pairs pollinate: user_agent: key1: value1 key2: value2 No config will result in curtin fetching: curtin version maas version (via endpoint URL, if present) """ if not util.which('pollinate', target=target): return pcfg = cfg.get('pollinate') if not isinstance(pcfg, dict): pcfg = {'user_agent': {}} uacfg = pcfg.get('user_agent', {}) if uacfg is False: return # set curtin version uacfg['curtin'] = curtin_version.version_string() # maas configures a curtin reporting webhook handler with # an endpoint URL. This url is used to query the MAAS REST # api to extract the exact maas version. maas_reporting = cfg.get('reporting', {}).get('maas', None) if maas_reporting: endpoint = maas_reporting.get('endpoint') maas_version = get_maas_version(endpoint) if maas_version: uacfg['maas'] = maas_version['version'] inject_pollinate_user_agent_config(uacfg, target) def configure_iscsi(cfg, state_etcd, target, osfamily=DISTROS.debian): # If a /etc/iscsi/nodes/... 
file was created by block_meta then it # needs to be copied onto the target system nodes = os.path.join(state_etcd, "nodes") if not os.path.exists(nodes): return LOG.info('Iscsi configuration found, enabling service') if osfamily == DISTROS.redhat: # copy iscsi node config to target image LOG.debug('Copying iscsi node config to target') copy_iscsi_conf(nodes, target, target_nodes_dir='var/lib/iscsi/nodes') # update in-target config with util.ChrootableTarget(target) as in_chroot: # enable iscsid service LOG.debug('Enabling iscsi daemon') in_chroot.subp(['chkconfig', 'iscsid', 'on']) # update selinux config for iscsi ports required for port in [str(port) for port in iscsi.get_iscsi_ports_from_config(cfg)]: LOG.debug('Adding iscsi port %s to selinux iscsi_port_t list', port) in_chroot.subp(['semanage', 'port', '-a', '-t', 'iscsi_port_t', '-p', 'tcp', port]) elif osfamily == DISTROS.debian: copy_iscsi_conf(nodes, target) else: raise ValueError( 'Unknown iscsi requirements for distro: %s' % osfamily) def configure_mdadm(cfg, state_etcd, target, osfamily=DISTROS.debian): # If a mdadm.conf file was created by block_meta than it needs # to be copied onto the target system mdadm_location = os.path.join(state_etcd, "mdadm.conf") if not os.path.exists(mdadm_location): return conf_map = { DISTROS.debian: 'etc/mdadm/mdadm.conf', DISTROS.redhat: 'etc/mdadm.conf', } if osfamily not in conf_map: raise ValueError( 'Unknown mdadm conf mapping for distro: %s' % osfamily) LOG.info('Mdadm configuration found, enabling service') shutil.copy(mdadm_location, paths.target_path(target, conf_map[osfamily])) if osfamily == DISTROS.debian: # as per LP: #964052 reconfigure mdadm with util.ChrootableTarget(target) as in_chroot: in_chroot.subp( ['dpkg-reconfigure', '--frontend=noninteractive', 'mdadm'], data=None, target=target) def handle_cloudconfig(cfg, base_dir=None): """write cloud-init configuration files into base_dir. cloudconfig format is a dictionary of keys and values of content cloudconfig: cfg-datasource: content: | #cloud-cfg datasource_list: [ MAAS ] cfg-maas: content: | #cloud-cfg reporting: maas: { consumer_key: 8cW9kadrWZcZvx8uWP, endpoint: 'http://XXX', token_key: jD57DB9VJYmDePCRkq, token_secret: mGFFMk6YFLA3h34QHCv22FjENV8hJkRX, type: webhook} """ # check that cfg is dict if not isinstance(cfg, dict): raise ValueError("cloudconfig configuration is not in dict format") # for each item in the dict # generate a path based on item key # if path is already in the item, LOG warning, and use generated path for cfgname, cfgvalue in cfg.items(): cfgpath = "50-cloudconfig-%s.cfg" % cfgname if 'path' in cfgvalue: LOG.warning("cloudconfig ignoring 'path' key in config") cfgvalue['path'] = cfgpath # re-use write_files format and adjust target to prepend LOG.debug('Calling write_files with cloudconfig @ %s', base_dir) LOG.debug('Injecting cloud-config:\n%s', cfg) futil.write_files(cfg, base_dir) def ubuntu_core_curthooks(cfg, target=None): """ Ubuntu-Core images cannot execute standard curthooks. Instead, for core16/18 we copy in any cloud-init configuration to the 'LABEL=writable' partition mounted at target. For core20, we write a cloud-config.d directory in the 'ubuntu-seed' location. 
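    For example, a cloudconfig entry named 'maas' (illustrative name) ends up
    as 50-cloudconfig-maas.cfg under system-data/etc/cloud/cloud.cfg.d for
    core16/18, or under data/etc/cloud/cloud.cfg.d for core20.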
""" ubuntu_core_target = os.path.join(target, "system-data") cc_target = os.path.join(ubuntu_core_target, 'etc/cloud/cloud.cfg.d') if not os.path.exists(ubuntu_core_target): # uc20 ubuntu_core_target = target cc_target = os.path.join(ubuntu_core_target, 'data', 'etc', 'cloud', 'cloud.cfg.d') cloudconfig = cfg.get('cloudconfig', None) if cloudconfig: # remove cloud-init.disabled, if found cloudinit_disable = os.path.join(ubuntu_core_target, 'etc/cloud/cloud-init.disabled') if os.path.exists(cloudinit_disable): util.del_file(cloudinit_disable) handle_cloudconfig(cloudconfig, base_dir=cc_target) netconfig = cfg.get('network', None) if netconfig: LOG.info('Writing network configuration') ubuntu_core_netconfig = os.path.join(cc_target, "50-curtin-networking.cfg") util.write_file(ubuntu_core_netconfig, content=config.dump_config({'network': netconfig})) def redhat_upgrade_cloud_init(netcfg, target=None, osfamily=DISTROS.redhat): """ CentOS images execute built-in curthooks which only supports simple networking configuration. This hook enables advanced network configuration via config passthrough to the target. """ def cloud_init_repo(version): if not version: raise ValueError('Missing required version parameter') return CLOUD_INIT_YUM_REPO_TEMPLATE % version if netcfg: LOG.info('Removing embedded network configuration (if present)') ifcfgs = glob.glob( paths.target_path(target, 'etc/sysconfig/network-scripts') + '/ifcfg-*') # remove ifcfg-* (except ifcfg-lo) for ifcfg in ifcfgs: if os.path.basename(ifcfg) != "ifcfg-lo": util.del_file(ifcfg) LOG.info('Checking cloud-init in target [%s] for network ' 'configuration passthrough support.', target) passthrough = net.netconfig_passthrough_available(target) LOG.debug('passthrough available via in-target: %s', passthrough) # if in-target cloud-init is not updated, upgrade via cloud-init repo if not passthrough: cloud_init_yum_repo = ( paths.target_path(target, 'etc/yum.repos.d/curtin-cloud-init.repo')) rhel_ver = distro.rpm_get_dist_id(target) # Inject cloud-init daily yum repo util.write_file(cloud_init_yum_repo, content=cloud_init_repo(rhel_ver)) # ensure up-to-date ca-certificates to handle https mirror # connections for epel and cloud-init-el. packages = ['ca-certificates'] if int(rhel_ver) < 8: # cloud-init in RHEL < 8 requires EPEL for dependencies. packages += ['epel-release'] # RHEL8+ no longer ships bridge-utils. This does not effect # bridge configuration. Only install on RHEL < 8 if not # available, do not upgrade. with util.ChrootableTarget(target) as in_chroot: try: in_chroot.subp(['rpm', '-q', 'bridge-utils'], capture=False, rcs=[0]) except util.ProcessExecutionError: LOG.debug( 'Image missing bridge-utils package, installing') packages += ['bridge-utils'] packages += ['cloud-init-el-release', 'cloud-init'] # We separate the installation of repository packages (epel, # cloud-init-el-release) as we need a new invocation of yum # to read the newly installed repo files. for package in packages: distro.install_packages( [package], target=target, osfamily=osfamily) # remove cloud-init el-stable bootstrap repo config as the # cloud-init-el-release package points to the correct repo util.del_file(cloud_init_yum_repo) LOG.info('Passing network configuration through to target') net.render_netconfig_passthrough(target, netconfig={'network': netcfg}) # Public API, maas may call this from internal curthooks centos_apply_network_config = redhat_upgrade_cloud_init def redhat_apply_selinux_autorelabel(target): """Creates file /.autorelabel. 
This is used by SELinux to relabel all of the files on the filesystem to have the correct security context. Without this SSH login will fail. """ LOG.debug('enabling selinux autorelabel') open(paths.target_path(target, '.autorelabel'), 'a').close() def redhat_update_dracut_config(target, cfg): initramfs_mapping = { 'lvm': {'conf': 'lvmconf', 'modules': 'lvm'}, 'raid': {'conf': 'mdadmconf', 'modules': 'mdraid'}, } # no need to update initramfs if no custom storage if 'storage' not in cfg: return False storage_config = cfg.get('storage', {}).get('config') if not storage_config: raise ValueError('Invalid storage config') add_conf = set() add_modules = set() for scfg in storage_config: if scfg['type'] == 'raid': add_conf.add(initramfs_mapping['raid']['conf']) add_modules.add(initramfs_mapping['raid']['modules']) elif scfg['type'] in ['lvm_volgroup', 'lvm_partition']: add_conf.add(initramfs_mapping['lvm']['conf']) add_modules.add(initramfs_mapping['lvm']['modules']) dconfig = ['# Written by curtin for custom storage config'] dconfig.append('add_dracutmodules+=" %s"' % (" ".join(add_modules))) for conf in add_conf: dconfig.append('%s="yes"' % conf) # Write out initramfs/dracut config for storage config dracut_conf_storage = os.path.sep.join( [target, '/etc/dracut.conf.d/50-curtin-storage.conf']) msg = '\n'.join(dconfig + ['']) LOG.debug('Updating redhat dracut config') util.write_file(dracut_conf_storage, content=msg) return True def redhat_update_initramfs(target, cfg): if not redhat_update_dracut_config(target, cfg): LOG.debug('Skipping redhat initramfs update, no custom storage config') return kver_cmd = ['rpm', '-q', '--queryformat', '%{VERSION}-%{RELEASE}.%{ARCH}', 'kernel'] with util.ChrootableTarget(target) as in_chroot: LOG.debug('Finding redhat kernel version: %s', kver_cmd) kver, _err = in_chroot.subp(kver_cmd, capture=True) LOG.debug('Found kver=%s' % kver) initramfs = '/boot/initramfs-%s.img' % kver dracut_cmd = ['dracut', '-f', initramfs, kver] LOG.debug('Rebuilding initramfs with: %s', dracut_cmd) in_chroot.subp(dracut_cmd, capture=True) def builtin_curthooks(cfg, target, state): LOG.info('Running curtin builtin curthooks') stack_prefix = state.get('report_stack_prefix', '') state_etcd = os.path.split(state['fstab'])[0] machine = platform.machine() distro_info = distro.get_distroinfo(target=target) if not distro_info: raise RuntimeError('Failed to determine target distro') osfamily = distro_info.family LOG.info('Configuring target system for distro: %s osfamily: %s', distro_info.variant, osfamily) if osfamily == DISTROS.debian: with events.ReportEventStack( name=stack_prefix + '/writing-apt-config', reporting_enabled=True, level="INFO", description="configuring apt configuring apt"): do_apt_config(cfg, target) disable_overlayroot(cfg, target) disable_update_initramfs(cfg, target, machine) # LP: #1742560 prevent zfs-dkms from being installed (Xenial) if distro.lsb_release(target=target)['codename'] == 'xenial': distro.apt_update(target=target) with util.ChrootableTarget(target) as in_chroot: in_chroot.subp(['apt-mark', 'hold', 'zfs-dkms']) # packages may be needed prior to installing kernel with events.ReportEventStack( name=stack_prefix + '/installing-missing-packages', reporting_enabled=True, level="INFO", description="installing missing packages"): install_missing_packages(cfg, target, osfamily=osfamily) with events.ReportEventStack( name=stack_prefix + '/configuring-iscsi-service', reporting_enabled=True, level="INFO", description="configuring iscsi service"): 
configure_iscsi(cfg, state_etcd, target, osfamily=osfamily) with events.ReportEventStack( name=stack_prefix + '/configuring-mdadm-service', reporting_enabled=True, level="INFO", description="configuring raid (mdadm) service"): configure_mdadm(cfg, state_etcd, target, osfamily=osfamily) if osfamily == DISTROS.debian: with events.ReportEventStack( name=stack_prefix + '/installing-kernel', reporting_enabled=True, level="INFO", description="installing kernel"): setup_zipl(cfg, target) setup_kernel_img_conf(target) install_kernel(cfg, target) run_zipl(cfg, target) restore_dist_interfaces(cfg, target) chzdev_persist_active_online(cfg, target) with events.ReportEventStack( name=stack_prefix + '/setting-up-swap', reporting_enabled=True, level="INFO", description="setting up swap"): add_swap(cfg, target, state.get('fstab')) if osfamily == DISTROS.redhat: # set cloud-init maas datasource for centos images if cfg.get('cloudconfig'): handle_cloudconfig( cfg['cloudconfig'], base_dir=paths.target_path(target, 'etc/cloud/cloud.cfg.d')) # For vmtests to force execute redhat_upgrade_cloud_init, uncomment # the value in examples/tests/centos_defaults.yaml if cfg.get('_ammend_centos_curthooks'): with events.ReportEventStack( name=stack_prefix + '/upgrading cloud-init', reporting_enabled=True, level="INFO", description="Upgrading cloud-init in target"): redhat_upgrade_cloud_init(cfg.get('network', {}), target) with events.ReportEventStack( name=stack_prefix + '/apply-networking-config', reporting_enabled=True, level="INFO", description="apply networking config"): apply_networking(target, state) with events.ReportEventStack( name=stack_prefix + '/writing-etc-fstab', reporting_enabled=True, level="INFO", description="writing etc/fstab"): copy_fstab(state.get('fstab'), target) with events.ReportEventStack( name=stack_prefix + '/configuring-multipath', reporting_enabled=True, level="INFO", description="configuring multipath"): detect_and_handle_multipath(cfg, target, osfamily=osfamily) with events.ReportEventStack( name=stack_prefix + '/system-upgrade', reporting_enabled=True, level="INFO", description="updating packages on target system"): system_upgrade(cfg, target, osfamily=osfamily) if osfamily == DISTROS.redhat: with events.ReportEventStack( name=stack_prefix + '/enabling-selinux-autorelabel', reporting_enabled=True, level="INFO", description="enabling selinux autorelabel mode"): redhat_apply_selinux_autorelabel(target) with events.ReportEventStack( name=stack_prefix + '/pollinate-user-agent', reporting_enabled=True, level="INFO", description="configuring pollinate user-agent on target"): handle_pollinate_user_agent(cfg, target) if osfamily == DISTROS.debian: # check for the zpool cache file and copy to target if present zpool_cache = '/etc/zfs/zpool.cache' if os.path.exists(zpool_cache): copy_zpool_cache(zpool_cache, target) zkey_repository = '/etc/zkey/repository' zkey_used = os.path.join(os.path.split(state['fstab'])[0], "zkey_used") if all(map(os.path.exists, [zkey_repository, zkey_used])): distro.install_packages(['s390-tools-zkey'], target=target, osfamily=osfamily) copy_zkey_repository(zkey_repository, target) # If a crypttab file was created by block_meta than it needs to be # copied onto the target system, and update_initramfs() needs to be # run, so that the cryptsetup hooks are properly configured on the # installed system and it will be able to open encrypted volumes # at boot. 
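    # For illustration (hypothetical names/UUID): a crypttab entry such as
    #   dm_crypt-0 UUID=<uuid> none luks
    # must already be in the target before update_initramfs runs so the
    # cryptsetup initramfs hook picks up the mapping.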
crypttab_location = os.path.join(os.path.split(state['fstab'])[0], "crypttab") if os.path.exists(crypttab_location): copy_crypttab(crypttab_location, target) update_initramfs(target) # If udev dname rules were created, copy them to target udev_rules_d = os.path.join(state['scratch'], "rules.d") if os.path.isdir(udev_rules_d): copy_dname_rules(udev_rules_d, target) with events.ReportEventStack( name=stack_prefix + '/updating-initramfs-configuration', reporting_enabled=True, level="INFO", description="updating initramfs configuration"): if osfamily == DISTROS.debian: # re-enable update_initramfs enable_update_initramfs(cfg, target, machine) update_initramfs(target, all_kernels=True) elif osfamily == DISTROS.redhat: redhat_update_initramfs(target, cfg) with events.ReportEventStack( name=stack_prefix + '/configuring-bootloader', reporting_enabled=True, level="INFO", description="configuring target system bootloader"): # As a rule, ARMv7 systems don't use grub. This may change some # day, but for now, assume no. They do require the initramfs # to be updated, and this also triggers boot loader setup via # flash-kernel. if (machine.startswith('armv7') or machine.startswith('s390x') or machine.startswith('aarch64') and not util.is_uefi_bootable()): return with events.ReportEventStack( name=stack_prefix + '/install-grub', reporting_enabled=True, level="INFO", description="installing grub to target devices"): setup_grub(cfg, target, osfamily=osfamily, variant=distro_info.variant) def curthooks(args): state = util.load_command_environment() if args.target is not None: target = args.target else: target = state['target'] if target is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) cfg = config.load_command_config(args, state) stack_prefix = state.get('report_stack_prefix', '') curthooks_mode = cfg.get('curthooks', {}).get('mode', 'auto') # UC is special, handle it first. if distro.is_ubuntu_core(target): LOG.info('Detected Ubuntu-Core image, running hooks') with events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level="INFO", description="Configuring Ubuntu-Core for first boot"): ubuntu_core_curthooks(cfg, target) sys.exit(0) # user asked for target, or auto mode if curthooks_mode in ['auto', 'target']: if util.run_hook_if_exists(target, 'curtin-hooks'): sys.exit(0) builtin_curthooks(cfg, target, state) sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, curthooks) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/extract.py000066400000000000000000000177421415350476600201220ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. try: from abc import ABC except ImportError: ABC = object import abc import os import shutil import sys import tempfile import curtin.config from curtin.log import LOG from curtin import util from curtin.futil import write_files from curtin.reporter import events from curtin import url_helper from . import populate_one_subcmd CMD_ARGUMENTS = ( ((('-t', '--target'), {'help': ('target directory to extract to (root) ' '[default TARGET_MOUNT_POINT]'), 'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT')}), (('sources',), {'help': 'the sources to install [default read from CONFIG]', 'nargs': '*'}), ) ) def tar_xattr_opts(cmd=None): # if tar cmd supports xattrs, return the required flags to extract them. 
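    # A tar whose --help output mentions xattrs (e.g. GNU tar) gets
    # ['--xattrs', '--xattrs-include=*'] so extended attributes such as file
    # capabilities and SELinux labels survive extraction; other tar
    # implementations get no extra flags.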
if cmd is None: cmd = ['tar'] if isinstance(cmd, str): cmd = [cmd] (out, _err) = util.subp(cmd + ['--help'], capture=True) if "xattr" in out: return ['--xattrs', '--xattrs-include=*'] return [] def extract_root_tgz_url(url, target): # extract a -root.tar.gz url in the 'target' directory path = _path_from_file_url(url) if path != url or os.path.isfile(path): util.subp(args=['smtar', '-C', target] + tar_xattr_opts() + ['-Sxpf', path, '--numeric-owner']) return # Uses smtar to avoid specifying the compression type util.subp(args=['sh', '-cf', ('wget "$1" --progress=dot:mega -O - |' 'smtar -C "$2" ' + ' '.join(tar_xattr_opts()) + ' ' + '-Sxpf - --numeric-owner'), '--', url, target]) def mount(device, mountpoint, options=None, type=None): opts = [] if options is not None: opts.extend(['-o', options]) if type is not None: opts.extend(['-t', type]) util.subp(['mount'] + opts + [device, mountpoint], capture=True) def unmount(mountpoint): util.subp(['umount', mountpoint], capture=True) class AbstractSourceHandler(ABC): """Encapsulate setting up an installation source for copy_to_target. A source hander sets up a curtin installation source (see https://curtin.readthedocs.io/en/latest/topics/config.html#sources) for copying to the target with copy_to_target. """ @abc.abstractmethod def setup(self): """Set up the source for copying and return the path to it.""" pass @abc.abstractmethod def cleanup(self): """Perform any necessary clean up of actions performed by setup.""" pass class LayeredSourceHandler(AbstractSourceHandler): def __init__(self, image_stack): self.image_stack = image_stack self._tmpdir = None self._mounts = [] def _download(self): new_image_stack = [] for path in self.image_stack: if url_helper.urlparse(path).scheme not in ["", "file"]: new_path = os.path.join(self._tmpdir, os.path.basename(path)) url_helper.download(path, new_path, retries=3) else: new_path = _path_from_file_url(path) new_image_stack.append(new_path) self.image_stack = new_image_stack def setup(self): self._tmpdir = tempfile.mkdtemp() try: self._download() # Check that all images exists on disk and are not empty for img in self.image_stack: if not os.path.isfile(img) or os.path.getsize(img) <= 0: raise ValueError( ("Failed to use fsimage: '%s' doesn't exist " + "or is invalid") % (img,)) for img in self.image_stack: mp = os.path.join( self._tmpdir, os.path.basename(img) + ".dir") os.mkdir(mp) mount(img, mp, options='loop,ro') self._mounts.append(mp) if len(self._mounts) == 1: root_dir = self._mounts[0] else: # Multiple image files, merge them with an overlay. 
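                # overlayfs treats the leftmost 'lowerdir' entry as the
                # topmost layer, so the mount list (built base-image first)
                # is reversed here; e.g. (hypothetical names)
                # lowerdir=root.install.squashfs.dir:root.squashfs.dir lets
                # the 'install' layer shadow files from the base layer.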
root_dir = os.path.join(self._tmpdir, "root.dir") os.mkdir(root_dir) mount( 'overlay', root_dir, type='overlay', options='lowerdir=' + ':'.join(reversed(self._mounts))) self._mounts.append(root_dir) return root_dir except Exception: self.cleanup() raise def cleanup(self): for mount in reversed(self._mounts): unmount(mount) self._mounts = [] if self._tmpdir is not None: shutil.rmtree(self._tmpdir) self._tmpdir = None class TrivialSourceHandler(AbstractSourceHandler): def __init__(self, path): self.path = path def setup(self): return self.path def cleanup(self): pass def _get_image_stack(uri): '''Find a list of dependent images for given layered fsimage path uri: URI of the layer file return: tuple of path to dependent images ''' image_stack = [] img_name = os.path.basename(uri) root_dir = uri[:-len(img_name)] img_base, img_ext = os.path.splitext(img_name) if not img_base: return [] img_parts = img_base.split('.') for i in range(len(img_parts)): image_stack.append( root_dir + '.'.join(img_parts[0:i+1]) + img_ext) return image_stack def get_handler_for_source(source): """Return an AbstractSourceHandler for setting up `source`.""" if source['uri'].startswith("cp://"): return TrivialSourceHandler(source['uri'][5:]) elif source['type'] == "fsimage": return LayeredSourceHandler([source['uri']]) elif source['type'] == "fsimage-layered": return LayeredSourceHandler(_get_image_stack(source['uri'])) else: return None def extract_source(source, target): handler = get_handler_for_source(source) if handler is not None: root_dir = handler.setup() try: copy_to_target(root_dir, target) finally: handler.cleanup() else: extract_root_tgz_url(source['uri'], target=target) def copy_to_target(source, target): if source.startswith("cp://"): source = source[5:] source = os.path.abspath(source) util.subp(args=['sh', '-c', ('mkdir -p "$2" && cd "$2" && ' 'rsync -aXHAS --one-file-system "$1/" .'), '--', source, target]) def _path_from_file_url(url): return url[7:] if url.startswith("file://") else url def extract(args): if not args.target: raise ValueError("Target must be defined or set in environment") state = util.load_command_environment() cfg = curtin.config.load_command_config(args, state) sources = args.sources target = args.target if not sources: if not cfg.get('sources'): raise ValueError("'sources' must be on cmdline or in config") sources = cfg.get('sources') if isinstance(sources, dict): sources = [sources[k] for k in sorted(sources.keys())] sources = [util.sanitize_source(s) for s in sources] LOG.debug("Installing sources: %s to target at %s" % (sources, target)) stack_prefix = state.get('report_stack_prefix', '') for source in sources: with events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level="INFO", description="acquiring and extracting image from %s" % source['uri']): if source['type'].startswith('dd-'): continue extract_source(source, target) if cfg.get('write_files'): LOG.info("Applying write_files from config.") write_files(cfg['write_files'], target) else: LOG.info("No write_files in config.") sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, extract) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/features.py000066400000000000000000000007531415350476600202600ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """List the supported feature names to stdout.""" import sys from .. import FEATURES from . 
import populate_one_subcmd CMD_ARGUMENTS = ((tuple())) def features_main(args): sys.stdout.write("\n".join(sorted(FEATURES)) + "\n") sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, features_main) parser.description = __doc__ # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/hook.py000066400000000000000000000014241415350476600173760ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys import curtin.config from curtin.log import LOG import curtin.util from . import populate_one_subcmd CMD_ARGUMENTS = ( ((('target',), {'help': 'finalize the provided directory [default TARGET_MOUNT_POINT]', 'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT'), 'nargs': '?'}), ) ) def hook(args): if not args.target: raise ValueError("Target must be provided or set in environment") LOG.debug("Finalizing %s" % args.target) curtin.util.run_hook_if_exists(args.target, "finalize") sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, hook) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/in_target.py000066400000000000000000000044321415350476600204140ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import pty import sys from curtin import paths, util from . import populate_one_subcmd CMD_ARGUMENTS = ( ((('-a', '--allow-daemons'), {'help': 'do not disable daemons via invoke-rc.d', 'action': 'store_true', 'default': False, }), (('-i', '--interactive'), {'help': 'use command invoked interactively', 'action': 'store_true', 'default': False}), (('--capture',), {'help': 'capture/swallow output of command', 'action': 'store_true', 'default': False}), (('-t', '--target'), {'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ('command_args', {'help': 'run a command chrooted in the target', 'nargs': '*'}), ) ) def in_target_main(args): if args.target is not None: target = args.target else: state = util.load_command_environment() target = state['target'] if args.target is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) daemons = args.allow_daemons if paths.target_path(args.target) == "/": sys.stderr.write("WARN: Target is /, daemons are allowed.\n") daemons = True cmd = args.command_args with util.ChrootableTarget(target, allow_daemons=daemons) as chroot: exit = 0 if not args.interactive: try: chroot.subp(cmd, capture=args.capture) except util.ProcessExecutionError as e: exit = e.exit_code else: if chroot.target != "/": cmd = ["chroot", chroot.target] + args.command_args # in python 3.4 pty.spawn started returning a value. # There, it is the status from os.waitpid. From testing (py3.6) # that seemse to be exit_code * 256. ret = pty.spawn(cmd) # pylint: disable=E1111 if ret is not None: exit = int(ret / 256) sys.exit(exit) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, in_target_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/install.py000066400000000000000000000442411415350476600201100ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
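# The 'install' subcommand drives the full staged installation: the stages
# listed in CONFIG_BUILTIN below (early, partitioning, network, extract,
# curthooks, hook, late) run in order, each wrapped in its own reported Stage.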
import argparse from copy import deepcopy import json import os import re import shlex import shutil import subprocess import sys import tempfile from curtin.block import iscsi, zfs from curtin import config from curtin import distro from curtin import util from curtin import paths from curtin import version from curtin.log import LOG, logged_time from curtin.reporter.legacy import load_reporter from curtin.reporter import events from . import populate_one_subcmd INSTALL_LOG = "/var/log/curtin/install.log" # Upon error, curtin creates a tar of all related logs at ERROR_TARFILE ERROR_TARFILE = '/var/log/curtin/curtin-error-logs.tar' SAVE_INSTALL_LOG = '/root/curtin-install.log' SAVE_INSTALL_CONFIG = '/root/curtin-install-cfg.yaml' INSTALL_START_MSG = ("curtin: Installation started. (%s)" % version.version_string()) INSTALL_PASS_MSG = "curtin: Installation finished." INSTALL_FAIL_MSG = "curtin: Installation failed with exception: {exception}" STAGE_DESCRIPTIONS = { 'early': 'preparing for installation', 'partitioning': 'configuring storage', 'network': 'configuring network', 'extract': 'writing install sources to disk', 'curthooks': 'configuring installed system', 'hook': 'finalizing installation', 'late': 'executing late commands', } CONFIG_BUILTIN = { 'sources': {}, 'stages': ['early', 'partitioning', 'network', 'extract', 'curthooks', 'hook', 'late'], 'extract_commands': {'builtin': ['curtin', 'extract']}, 'hook_commands': {'builtin': ['curtin', 'hook']}, 'partitioning_commands': { 'builtin': ['curtin', 'block-meta', 'simple']}, 'curthooks_commands': {'builtin': ['curtin', 'curthooks']}, 'late_commands': {'builtin': []}, 'network_commands': {'builtin': ['curtin', 'net-meta', 'auto']}, 'apply_net_commands': {'builtin': []}, 'install': {'log_file': INSTALL_LOG, 'error_tarfile': ERROR_TARFILE} } def clear_install_log(logfile): """Clear the installation log, so no previous installation is present.""" util.ensure_dir(os.path.dirname(logfile)) try: open(logfile, 'w').close() except Exception: pass def copy_install_log(logfile, target, log_target_path): """Copy curtin install log file to target system""" basemsg = 'Cannot copy curtin install log "%s" to target.' % logfile if not logfile: LOG.warn(basemsg) return if not os.path.isfile(logfile): LOG.warn(basemsg + " file does not exist.") return LOG.debug('Copying curtin install log from %s to target/%s', logfile, log_target_path) util.write_file( filename=paths.target_path(target, log_target_path), content=util.load_file(logfile, decode=False), mode=0o400, omode="wb") def writeline_and_stdout(logfile, message): writeline(logfile, message) out = sys.stdout msg = message + "\n" if hasattr(out, 'buffer'): out = out.buffer # pylint: disable=no-member msg = msg.encode() out.write(msg) out.flush() def writeline(fname, output): """Write a line to a file.""" if not output.endswith('\n'): output += '\n' try: with open(fname, 'a') as fp: fp.write(output) except IOError: pass class WorkingDir(object): def __init__(self, config): top_d = tempfile.mkdtemp() state_d = os.path.join(top_d, 'state') scratch_d = os.path.join(top_d, 'scratch') for p in (state_d, scratch_d): os.mkdir(p) target_d = config.get('install', {}).get('target') if not target_d: target_d = os.path.join(top_d, 'target') try: util.ensure_dir(target_d) except OSError as e: raise ValueError( "Unable to create target directory '%s': %s" % (target_d, e)) if os.listdir(target_d) != []: raise ValueError( "Provided target dir '%s' was not empty." 
% target_d) netconf_f = os.path.join(state_d, 'network_config') netstate_f = os.path.join(state_d, 'network_state') interfaces_f = os.path.join(state_d, 'interfaces') config_f = os.path.join(state_d, 'config') fstab_f = os.path.join(state_d, 'fstab') with open(config_f, "w") as fp: json.dump(config, fp) # just touch these files to make sure they exist for f in (config_f, fstab_f, netconf_f, netstate_f): with open(f, "ab") as fp: pass self.scratch = scratch_d self.target = target_d self.top = top_d self.interfaces = interfaces_f self.netconf = netconf_f self.netstate = netstate_f self.fstab = fstab_f self.config = config self.config_file = config_f def env(self): return ({'WORKING_DIR': self.scratch, 'OUTPUT_FSTAB': self.fstab, 'OUTPUT_INTERFACES': self.interfaces, 'OUTPUT_NETWORK_CONFIG': self.netconf, 'OUTPUT_NETWORK_STATE': self.netstate, 'TARGET_MOUNT_POINT': self.target, 'CONFIG': self.config_file}) class Stage(object): def __init__(self, name, commands, env, reportstack=None, logfile=None): self.name = name self.commands = commands self.env = env if logfile is None: logfile = INSTALL_LOG self.install_log = self._open_install_log(logfile) if hasattr(sys.stdout, 'buffer'): self.write_stdout = self._write_stdout3 else: self.write_stdout = self._write_stdout2 if reportstack is None: reportstack = events.ReportEventStack( name="stage-%s" % name, description="basic stage %s" % name, reporting_enabled=False) self.reportstack = reportstack def _open_install_log(self, logfile): """Open the install log.""" if not logfile: return None try: return open(logfile, 'ab') except IOError: return None def _write_stdout3(self, data): sys.stdout.buffer.write(data) # pylint: disable=no-member sys.stdout.flush() def _write_stdout2(self, data): sys.stdout.write(data) sys.stdout.flush() def write(self, data): """Write data to stdout and to the install_log.""" self.write_stdout(data) if self.install_log is not None: self.install_log.write(data) self.install_log.flush() def run(self): for cmdname in sorted(self.commands.keys()): cmd = self.commands[cmdname] if not cmd: continue cur_res = events.ReportEventStack( name=cmdname, description="running '%s'" % ' '.join(cmd), parent=self.reportstack, level="DEBUG") env = self.env.copy() env['CURTIN_REPORTSTACK'] = cur_res.fullname shell = not isinstance(cmd, list) with util.LogTimer(LOG.debug, cmdname): with cur_res: try: sp = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=env, shell=shell) except OSError as e: LOG.warn("%s command failed", cmdname) raise util.ProcessExecutionError(cmd=cmd, reason=e) output = b"" while True: data = sp.stdout.read(1) if not data and sp.poll() is not None: break self.write(data) output += data rc = sp.returncode if rc != 0: LOG.warn("%s command failed", cmdname) raise util.ProcessExecutionError( stdout=output, stderr="", exit_code=rc, cmd=cmd) def apply_power_state(pstate): """ power_state: delay: 5 mode: poweroff message: Bye Bye """ cmd = load_power_state(pstate) if not cmd: return LOG.info("powering off with %s", cmd) fid = os.fork() if fid == 0: try: util.subp(cmd) os._exit(0) except Exception as e: LOG.warn("%s returned non-zero: %s" % (cmd, e)) os._exit(1) return def load_power_state(pstate): """Returns a command to reboot the system if power_state should.""" if pstate is None: return None if not isinstance(pstate, dict): raise TypeError("power_state is not a dict.") opt_map = {'halt': '-H', 'poweroff': '-P', 'reboot': '-r'} mode = pstate.get("mode") if mode not in opt_map: raise 
TypeError("power_state[mode] required, must be one of: %s." % ','.join(opt_map.keys())) delay = pstate.get("delay", "5") if delay == "now": delay = "0" elif re.match(r"\+[0-9]+", str(delay)): delay = "%sm" % delay[1:] else: delay = str(delay) args = ["shutdown", opt_map[mode], "now"] if pstate.get("message"): args.append(pstate.get("message")) shcmd = ('sleep "$1" && shift; ' '[ -f /run/block-curtin-poweroff ] && exit 0; ' 'exec "$@"') return (['sh', '-c', shcmd, 'curtin-poweroff', delay] + args) def apply_kexec(kexec, target): """ load kexec kernel from target dir, similar to /etc/init.d/kexec-load kexec: mode: on """ grubcfg = "boot/grub/grub.cfg" target_grubcfg = os.path.join(target, grubcfg) if kexec is None or kexec.get("mode") != "on": return False if not isinstance(kexec, dict): raise TypeError("kexec is not a dict.") if not util.which('kexec'): distro.install_packages('kexec-tools') if not os.path.isfile(target_grubcfg): raise ValueError("%s does not exist in target" % grubcfg) with open(target_grubcfg, "r") as fp: default = 0 menu_lines = [] # get the default grub boot entry number and menu entry line numbers for line_num, line in enumerate(fp, 1): if re.search(r"\bset default=\"[0-9]+\"\b", " %s " % line): default = int(re.sub(r"[^0-9]", '', line)) if re.search(r"\bmenuentry\b", " %s " % line): menu_lines.append(line_num) if not menu_lines: LOG.error("grub config file does not have a menuentry\n") return False # get the begin and end line numbers for default menuentry section, # using end of file if it's the last menuentry section begin = menu_lines[default] if begin != menu_lines[-1]: end = menu_lines[default + 1] - 1 else: end = line_num fp.seek(0) lines = fp.readlines() kernel = append = initrd = "" for i in range(begin, end): if 'linux' in lines[i].split(): split_line = shlex.split(lines[i]) kernel = os.path.join(target, split_line[1]) append = "--append=" + ' '.join(split_line[2:]) if 'initrd' in lines[i].split(): split_line = shlex.split(lines[i]) initrd = "--initrd=" + os.path.join(target, split_line[1]) if not kernel: LOG.error("grub config file does not have a kernel\n") return False LOG.debug("kexec -l %s %s %s" % (kernel, append, initrd)) util.subp(args=['kexec', '-l', kernel, append, initrd]) return True def migrate_proxy_settings(cfg): """Move the legacy proxy setting 'http_proxy' into cfg['proxy'].""" proxy = cfg.get('proxy', {}) if not isinstance(proxy, dict): raise ValueError("'proxy' in config is not a dictionary: %s" % proxy) if 'http_proxy' in cfg: hp = cfg['http_proxy'] if hp: if proxy.get('http_proxy', hp) != hp: LOG.warn("legacy http_proxy setting (%s) differs from " "proxy/http_proxy (%s), using %s", hp, proxy['http_proxy'], proxy['http_proxy']) else: LOG.debug("legacy 'http_proxy' migrated to proxy/http_proxy") proxy['http_proxy'] = hp del cfg['http_proxy'] cfg['proxy'] = proxy @logged_time("INSTALL_COMMAND") def cmd_install(args): from .collect_logs import create_log_tarfile cfg = deepcopy(CONFIG_BUILTIN) config.merge_config(cfg, args.config) for source in args.source: src = util.sanitize_source(source) cfg['sources']["%02d_cmdline" % len(cfg['sources'])] = src LOG.info(INSTALL_START_MSG) LOG.debug('LANG=%s', os.environ.get('LANG')) LOG.debug("merged config: %s" % cfg) if not len(cfg.get('sources', [])): raise util.BadUsage("no sources provided to install") migrate_proxy_settings(cfg) for k in ('http_proxy', 'https_proxy', 'no_proxy'): if k in cfg['proxy']: os.environ[k] = cfg['proxy'][k] instcfg = cfg.get('install', {}) logfile = instcfg.get('log_file') 
error_tarfile = instcfg.get('error_tarfile') post_files = instcfg.get('post_files', [logfile]) # Generate curtin configuration dump and add to write_files unless # installation config disables dump yaml_dump_file = instcfg.get('save_install_config', SAVE_INSTALL_CONFIG) if yaml_dump_file: write_files = cfg.get('write_files', {}) write_files['curtin_install_cfg'] = { 'path': yaml_dump_file, 'permissions': '0400', 'owner': 'root:root', 'content': config.dump_config(cfg) } cfg['write_files'] = write_files # Load reporter clear_install_log(logfile) legacy_reporter = load_reporter(cfg) legacy_reporter.files = post_files writeline_and_stdout(logfile, INSTALL_START_MSG) args.reportstack.post_files = post_files workingd = None try: workingd = WorkingDir(cfg) dd_images = util.get_dd_images(cfg.get('sources', {})) if len(dd_images) > 1: raise ValueError("You may not use more than one disk image") LOG.debug(workingd.env()) env = os.environ.copy() env.update(workingd.env()) for name in cfg.get('stages'): desc = STAGE_DESCRIPTIONS.get(name, "stage %s" % name) reportstack = events.ReportEventStack( "stage-%s" % name, description=desc, parent=args.reportstack) env['CURTIN_REPORTSTACK'] = reportstack.fullname with reportstack: commands_name = '%s_commands' % name with util.LogTimer(LOG.debug, 'stage_%s' % name): stage = Stage(name, cfg.get(commands_name, {}), env, reportstack=reportstack, logfile=logfile) stage.run() if apply_kexec(cfg.get('kexec'), workingd.target): cfg['power_state'] = {'mode': 'reboot', 'delay': 'now', 'message': "'rebooting with kexec'"} writeline_and_stdout(logfile, INSTALL_PASS_MSG) legacy_reporter.report_success() except Exception as e: exp_msg = INSTALL_FAIL_MSG.format(exception=e) writeline(logfile, exp_msg) LOG.error(exp_msg) legacy_reporter.report_failure(exp_msg) if error_tarfile: create_log_tarfile(error_tarfile, cfg) raise e finally: log_target_path = instcfg.get('save_install_log', SAVE_INSTALL_LOG) if log_target_path and workingd: copy_install_log(logfile, workingd.target, log_target_path) if instcfg.get('unmount', "") == "disabled": LOG.info('Skipping unmount: config disabled target unmounting') elif workingd: # unmount everything (including iscsi disks) util.do_umount(workingd.target, recursive=True) # The open-iscsi service in the ephemeral environment handles # disconnecting active sessions. On Artful release the systemd # unit file has conditionals that are not met at boot time and # results in open-iscsi service not being started; This breaks # shutdown on Artful releases. # Additionally, in release < Artful, if the storage configuration # is layered, like RAID over iscsi volumes, then disconnecting # iscsi sessions before stopping the raid device hangs. # As it turns out, letting the open-iscsi service take down the # session last is the cleanest way to handle all releases # regardless of what may be layered on top of the iscsi disks. 
# # Check if storage configuration has iscsi volumes and if so ensure # iscsi service is active before exiting install if iscsi.get_iscsi_disks_from_config(cfg): iscsi.restart_iscsi_service() for pool in zfs.get_zpool_from_config(cfg): LOG.debug('Exporting ZFS zpool %s', pool) zfs.zpool_export(pool) shutil.rmtree(workingd.top) apply_power_state(cfg.get('power_state')) sys.exit(0) # we explicitly accept config on install for backwards compatibility CMD_ARGUMENTS = ( ((('-c', '--config'), {'help': 'read configuration from cfg', 'action': util.MergedCmdAppend, 'metavar': 'FILE', 'type': argparse.FileType("rb"), 'dest': 'cfgopts', 'default': []}), ('--set', {'action': util.MergedCmdAppend, 'help': ('define a config variable. key can be a "/" ' 'delimited path ("early_commands/cmd1=a"). if ' 'key starts with "json:" then val is loaded as ' 'json (json:stages="[\'early\']")'), 'metavar': 'key=val', 'dest': 'cfgopts'}), ('source', {'help': 'what to install', 'nargs': '*'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, cmd_install) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/install_grub.py000066400000000000000000000364251415350476600211340ustar00rootroot00000000000000import os import re import platform import shutil import sys from curtin import block from curtin import config from curtin import distro from curtin import util from curtin.log import LOG from curtin.paths import target_path from curtin.reporter import events from . import populate_one_subcmd CMD_ARGUMENTS = ( ((('-t', '--target'), {'help': 'operate on target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': None}), (('-c', '--config'), {'help': 'operate on config. default is env[CONFIG]', 'action': 'store', 'metavar': 'CONFIG', 'default': None}), ) ) GRUB_MULTI_INSTALL = '/usr/lib/grub/grub-multi-install' def get_grub_package_name(target_arch, uefi, rhel_ver=None): """Determine the correct grub distro package name. :param: target_arch: string specifying the target system architecture :param: uefi: boolean indicating if system is booted via UEFI or not :param: rhel_ver: string specifying the major Redhat version in use. 
:returns: tuple of strings, grub package name and grub target name """ if target_arch is None: raise ValueError('Missing target_arch parameter') if uefi is None: raise ValueError('Missing uefi parameter') if 'ppc64' in target_arch: return ('grub-ieee1275', 'powerpc-ieee1275') if uefi: if target_arch == 'amd64': grub_name = 'grub-efi-%s' % target_arch grub_target = "x86_64-efi" elif target_arch == 'x86_64': # centos 7+, no centos6 support # grub2-efi-x64 installs a signed grub bootloader grub_name = "grub2-efi-x64" grub_target = "x86_64-efi" elif target_arch == 'aarch64': # centos 7+, no centos6 support # grub2-efi-aa64 installs a signed grub bootloader grub_name = "grub2-efi-aa64" grub_target = "arm64-efi" elif target_arch == 'arm64': grub_name = 'grub-efi-%s' % target_arch grub_target = "arm64-efi" elif target_arch == 'i386': grub_name = 'grub-efi-ia32' grub_target = 'i386-efi' else: raise ValueError('Unsupported UEFI arch: %s' % target_arch) else: grub_target = 'i386-pc' if target_arch in ['i386', 'amd64']: grub_name = 'grub-pc' elif target_arch == 'x86_64': if rhel_ver == '6': grub_name = 'grub' elif rhel_ver in ['7', '8']: grub_name = 'grub2-pc' else: raise ValueError('Unsupported RHEL version: %s', rhel_ver) else: raise ValueError('Unsupported arch: %s' % target_arch) return (grub_name, grub_target) def get_grub_config_file(target=None, osfamily=None): """Return the filename used to configure grub. :param: osfamily: string specifying the target os family being configured :returns: string, path to the osfamily grub config file """ if not osfamily: osfamily = distro.get_osfamily(target=target) if osfamily == distro.DISTROS.debian: # to avoid tripping prompts on upgrade LP: #564853 return '/etc/default/grub.d/50-curtin-settings.cfg' return '/etc/default/grub' def prepare_grub_dir(target, grub_cfg): util.ensure_dir(os.path.dirname(target_path(target, grub_cfg))) # LP: #1179940 . The 50-cloudig-settings.cfg file is written by the cloud # images build and defines/override some settings. Disable it. ci_cfg = target_path(target, os.path.join( os.path.dirname(grub_cfg), "50-cloudimg-settings.cfg")) if os.path.exists(ci_cfg): LOG.debug('grub: moved %s out of the way', ci_cfg) shutil.move(ci_cfg, ci_cfg + '.disabled') def get_carryover_params(distroinfo): # return a string to append to installed systems boot parameters # it may include a '--' after a '---' # see LP: 1402042 for some history here. # this is similar to 'user-params' from d-i cmdline = util.load_file('/proc/cmdline') preferred_sep = '---' # KERNEL_CMDLINE_COPY_TO_INSTALL_SEP legacy_sep = '--' def wrap(sep): return ' ' + sep + ' ' sections = [] if wrap(preferred_sep) in cmdline: sections = cmdline.split(wrap(preferred_sep)) elif wrap(legacy_sep) in cmdline: sections = cmdline.split(wrap(legacy_sep)) else: extra = "" lead = cmdline if sections: lead = sections[0] extra = " ".join(sections[1:]) carry_extra = [] if extra: for tok in extra.split(): if re.match(r'(BOOTIF=.*|initrd=.*|BOOT_IMAGE=.*)', tok): continue carry_extra.append(tok) carry_lead = [] for tok in lead.split(): if tok in carry_extra: continue if tok.startswith('console='): carry_lead.append(tok) # always append rd.auto=1 for redhat family if distroinfo.family == distro.DISTROS.redhat: carry_extra.append('rd.auto=1') return carry_lead + carry_extra def replace_grub_cmdline_linux_default(target, new_args): # we always update /etc/default/grub to avoid "hiding" the override in # a grub.d directory. 
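    # Illustration with hypothetical values: new_args of
    # ['console=ttyS0', 'rd.auto=1'] is rendered below as
    # GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0 rd.auto=1"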
newcontent = 'GRUB_CMDLINE_LINUX_DEFAULT="%s"' % " ".join(new_args) target_grubconf = target_path(target, '/etc/default/grub') content = "" if os.path.exists(target_grubconf): content = util.load_file(target_grubconf) existing = re.search( r'GRUB_CMDLINE_LINUX_DEFAULT=.*', content, re.MULTILINE) if existing: omode = 'w+' updated_content = content[:existing.start()] updated_content += newcontent updated_content += content[existing.end():] else: omode = 'a+' updated_content = newcontent + '\n' util.write_file(target_grubconf, updated_content, omode=omode) LOG.debug('updated %s to set: %s', target_grubconf, newcontent) def write_grub_config(target, grubcfg, grub_conf, new_params): replace_default = config.value_as_boolean( grubcfg.get('replace_linux_default', True)) if replace_default: replace_grub_cmdline_linux_default(target, new_params) probe_os = config.value_as_boolean( grubcfg.get('probe_additional_os', False)) if not probe_os: probe_content = [ ('# Curtin disable grub os prober that might find other ' 'OS installs.'), 'GRUB_DISABLE_OS_PROBER="true"', ''] util.write_file(target_path(target, grub_conf), "\n".join(probe_content), omode='a+') # if terminal is present in config, but unset, then don't grub_terminal = grubcfg.get('terminal', 'console') if not isinstance(grub_terminal, str): raise ValueError("Unexpected value %s for 'terminal'. " "Value must be a string" % grub_terminal) if not grub_terminal.lower() == "unmodified": terminal_content = [ '# Curtin configured GRUB_TERMINAL value', 'GRUB_TERMINAL="%s"' % grub_terminal] util.write_file(target_path(target, grub_conf), "\n".join(terminal_content), omode='a+') def find_efi_loader(target, bootid): efi_path = '/boot/efi/EFI' possible_loaders = [ os.path.join(efi_path, bootid, 'shimx64.efi'), os.path.join(efi_path, 'BOOT', 'BOOTX64.EFI'), os.path.join(efi_path, bootid, 'grubx64.efi'), ] for loader in possible_loaders: tloader = target_path(target, path=loader) if os.path.exists(tloader): LOG.debug('find_efi_loader: found %s', loader) return loader return None def efi_loader_esp_path(loader): if loader.startswith('/boot/efi'): return loader[9:] # len('/boot/efi') == 9 return loader def get_efi_disk_part(devices): for disk in devices: (parent, partnum) = block.get_blockdev_for_partition(disk) if partnum: return (parent, partnum) return (None, None) def get_grub_install_command(uefi, distroinfo, target): grub_install_cmd = 'grub-install' if distroinfo.family == distro.DISTROS.debian: # prefer grub-multi-install if present if uefi and os.path.exists(target_path(target, GRUB_MULTI_INSTALL)): grub_install_cmd = GRUB_MULTI_INSTALL elif distroinfo.family == distro.DISTROS.redhat: grub_install_cmd = 'grub2-install' LOG.debug('Using grub install command: %s', grub_install_cmd) return grub_install_cmd def gen_uefi_install_commands(grub_name, grub_target, grub_cmd, update_nvram, distroinfo, devices, target): install_cmds = [['efibootmgr', '-v']] post_cmds = [] bootid = distroinfo.variant efidir = '/boot/efi' if distroinfo.family == distro.DISTROS.debian: install_cmds.append(['dpkg-reconfigure', grub_name]) install_cmds.append(['update-grub']) elif distroinfo.family == distro.DISTROS.redhat: # RHEL distros uses 'redhat' for bootid if bootid == 'rhel': bootid = 'redhat' loader = find_efi_loader(target, bootid) if loader: # Disable running grub's install command. CentOS/RHEL ships # a pre-built signed grub which installs into /boot. grub2-install # will generated a new unsigned grub which breaks UEFI secure boot. 
grub_cmd = None if update_nvram: efi_disk, efi_part_num = get_efi_disk_part(devices) # Add entry to the EFI boot menu install_cmds.append(['efibootmgr', '--create', '--write-signature', '--label', bootid, '--disk', efi_disk, '--part', efi_part_num, '--loader', efi_loader_esp_path(loader)]) post_cmds.append(['grub2-mkconfig', '-o', '/boot/efi/EFI/%s/grub.cfg' % bootid]) else: post_cmds.append(['grub2-mkconfig', '-o', '/boot/grub2/grub.cfg']) else: raise ValueError("Unsupported os family for grub " "install: %s" % distroinfo.family) if grub_cmd == GRUB_MULTI_INSTALL: # grub-multi-install is called with no arguments install_cmds.append([grub_cmd]) elif grub_cmd: install_cmds.append( [grub_cmd, '--target=%s' % grub_target, '--efi-directory=%s' % efidir, '--bootloader-id=%s' % bootid, '--recheck'] + ([] if update_nvram else ['--no-nvram'])) # check efi boot menu before and after post_cmds.append(['efibootmgr', '-v']) return (install_cmds, post_cmds) def gen_install_commands(grub_name, grub_cmd, distroinfo, devices, rhel_ver=None): install_cmds = [] post_cmds = [] if distroinfo.family == distro.DISTROS.debian: install_cmds.append(['dpkg-reconfigure', grub_name]) install_cmds.append(['update-grub']) elif distroinfo.family == distro.DISTROS.redhat: if rhel_ver in ["7", "8"]: post_cmds.append( ['grub2-mkconfig', '-o', '/boot/grub2/grub.cfg']) else: raise ValueError('Unsupported "rhel_ver" value: %s' % rhel_ver) else: raise ValueError("Unsupported os family for grub " "install: %s" % distroinfo.family) for dev in devices: install_cmds.append([grub_cmd, dev]) return (install_cmds, post_cmds) def check_target_arch_machine(target, arch=None, machine=None, uefi=None): """ Check target arch and machine type are grub supported. """ if not arch: arch = distro.get_architecture(target=target) if not machine: machine = platform.machine() errmsg = "Grub is not supported on arch=%s machine=%s" % (arch, machine) # s390x uses zipl if arch == "s390x": raise RuntimeError(errmsg) # As a rule, ARMv7 systems don't use grub. This may change some # day, but for now, assume no. They do require the initramfs # to be updated, and this also triggers boot loader setup via # flash-kernel. if (machine.startswith('armv7') or machine.startswith('s390x') or machine.startswith('aarch64') and not uefi): raise RuntimeError(errmsg) def install_grub(devices, target, uefi=None, grubcfg=None): """Install grub to devices inside target chroot. :param: devices: List of block device paths to install grub upon. :param: target: A string specifying the path to the chroot mountpoint. :param: uefi: A boolean set to True if system is UEFI bootable otherwise False. :param: grubcfg: An config dict with grub config options. 
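    An illustrative grubcfg (keys as consumed by write_grub_config and
    install_grub; the values shown are hypothetical):
    {'replace_linux_default': True, 'probe_additional_os': False,
     'terminal': 'console', 'update_nvram': True}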
""" if not devices: raise ValueError("Invalid parameter 'devices': %s" % devices) if not target: raise ValueError("Invalid parameter 'target': %s" % target) LOG.debug("installing grub to target=%s devices=%s [replace_defaults=%s]", target, devices, grubcfg.get('replace_default')) update_nvram = config.value_as_boolean(grubcfg.get('update_nvram', True)) distroinfo = distro.get_distroinfo(target=target) target_arch = distro.get_architecture(target=target) rhel_ver = (distro.rpm_get_dist_id(target) if distroinfo.family == distro.DISTROS.redhat else None) check_target_arch_machine(target, arch=target_arch, uefi=uefi) grub_name, grub_target = get_grub_package_name(target_arch, uefi, rhel_ver) grub_conf = get_grub_config_file(target, distroinfo.family) new_params = get_carryover_params(distroinfo) prepare_grub_dir(target, grub_conf) write_grub_config(target, grubcfg, grub_conf, new_params) grub_cmd = get_grub_install_command(uefi, distroinfo, target) if uefi: install_cmds, post_cmds = gen_uefi_install_commands( grub_name, grub_target, grub_cmd, update_nvram, distroinfo, devices, target) else: install_cmds, post_cmds = gen_install_commands( grub_name, grub_cmd, distroinfo, devices, rhel_ver) env = os.environ.copy() env['DEBIAN_FRONTEND'] = 'noninteractive' LOG.debug('Grub install cmds:\n%s', str(install_cmds + post_cmds)) with util.ChrootableTarget(target) as in_chroot: for cmd in install_cmds + post_cmds: in_chroot.subp(cmd, env=env, capture=True) def install_grub_main(args): state = util.load_command_environment() if args.target is not None: target = args.target else: target = state['target'] if target is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) cfg = config.load_command_config(args, state) stack_prefix = state.get('report_stack_prefix', '') uefi = util.is_uefi_bootable() grubcfg = cfg.get('grub') with events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level="INFO", description="Installing grub to target devices"): install_grub(args.devices, target, uefi=uefi, grubcfg=grubcfg) sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, install_grub_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/main.py000066400000000000000000000157561415350476600173770ustar00rootroot00000000000000#!/usr/bin/python # This file is part of curtin. See LICENSE file for copyright and license info. import argparse import os import sys import traceback from .. import log from .. import util from ..deps import install_deps from .. 
import version VERSIONSTR = version.version_string() SUB_COMMAND_MODULES = [ 'apply_net', 'apt-config', 'block-attach-iscsi', 'block-detach-iscsi', 'block-discover', 'block-info', 'block-meta', 'block-wipe', 'clear-holders', 'curthooks', 'collect-logs', 'extract', 'features', 'hook', 'install', 'mkfs', 'in-target', 'net-meta', 'pack', 'schema-validate', 'swap', 'system-install', 'system-upgrade', 'unmount', 'version', ] def add_subcmd(subparser, subcmd): modname = subcmd.replace("-", "_") subcmd_full = "curtin.commands.%s" % modname __import__(subcmd_full) try: popfunc = getattr(sys.modules[subcmd_full], 'POPULATE_SUBCMD') except AttributeError: raise AttributeError("No 'POPULATE_SUBCMD' in %s" % subcmd_full) popfunc(subparser.add_parser(subcmd)) class NoHelpParser(argparse.ArgumentParser): # ArgumentParser with forced 'add_help=False' def __init__(self, *args, **kwargs): kwargs.update({'add_help': False}) super(NoHelpParser, self).__init__(*args, **kwargs) def error(self, message): # without overriding this, argparse exits with bad usage raise ValueError("failed parsing arguments: %s" % message) def get_main_parser(stacktrace=False, verbosity=0, parser_class=argparse.ArgumentParser): parser = parser_class(prog='curtin', epilog='Version %s' % VERSIONSTR) parser.add_argument('--showtrace', action='store_true', default=stacktrace) parser.add_argument('-v', '--verbose', action='count', default=verbosity, dest='verbosity') parser.add_argument('--log-file', default=sys.stderr, type=argparse.FileType('w')) parser.add_argument('-c', '--config', action=util.MergedCmdAppend, help='read configuration from cfg', metavar='FILE', type=argparse.FileType("rb"), dest='main_cfgopts', default=[]) parser.add_argument('--install-deps', action='store_true', help='install dependencies as necessary', default=False) parser.add_argument('--set', action=util.MergedCmdAppend, help=('define a config variable. key can be a "/" ' 'delimited path ("early_commands/cmd1=a"). if ' 'key starts with "json:" then val is loaded as ' 'json (json:stages="[\'early\']")'), metavar='key=val', dest='main_cfgopts') parser.set_defaults(config={}) parser.set_defaults(reportstack=None) return parser def maybe_install_deps(args, stacktrace=True, verbosity=0): parser = get_main_parser(stacktrace=stacktrace, verbosity=verbosity, parser_class=NoHelpParser) subps = parser.add_subparsers(dest="subcmd", parser_class=NoHelpParser) for subcmd in SUB_COMMAND_MODULES: subps.add_parser(subcmd) install_only_args = [ ['-v', '--install-deps'], ['-vv', '--install-deps'], ['--install-deps', '-v'], ['--install-deps', '-vv'], ['--install-deps'], ] install_only = args in install_only_args if install_only: verbosity = 1 else: try: ns, unknown = parser.parse_known_args(args) verbosity = ns.verbosity if not ns.install_deps: return except ValueError: # bad usage will be reported by the real reporter return ret = install_deps(verbosity=verbosity) if ret != 0 or install_only: sys.exit(ret) return def main(argv=None): if argv is None: argv = sys.argv[1:] stacktrace = (os.environ.get('CURTIN_STACKTRACE', "0").lower() not in ("0", "false", "")) try: verbosity = int(os.environ.get('CURTIN_VERBOSITY', "0")) except ValueError: verbosity = 1 maybe_install_deps(argv, stacktrace=stacktrace, verbosity=verbosity) # Above here, only standard library modules can be assumed. from .. 
import config from ..reporter import (events, update_configuration) parser = get_main_parser(stacktrace=stacktrace, verbosity=verbosity) subps = parser.add_subparsers(dest="subcmd") for subcmd in SUB_COMMAND_MODULES: add_subcmd(subps, subcmd) args = parser.parse_args(argv) # merge config flags into a single config dictionary cfg_opts = args.main_cfgopts if hasattr(args, 'cfgopts'): cfg_opts += getattr(args, 'cfgopts') cfg = {} if cfg_opts: for (flag, val) in cfg_opts: if flag in ('-c', '--config'): config.merge_config_fp(cfg, val) val.close() elif flag in ('--set'): config.merge_cmdarg(cfg, val) else: cfg = config.load_command_config(args, util.load_command_environment()) args.config = cfg # if user gave cmdline arguments, then set environ so subsequent # curtin calls get those as default showtrace = args.showtrace if 'showtrace' in cfg: showtrace = str(cfg['showtrace']).lower() not in ("0", "false") os.environ['CURTIN_STACKTRACE'] = str(int(showtrace)) verbosity = args.verbosity if 'verbosity' in cfg: verbosity = int(cfg['verbosity']) os.environ['CURTIN_VERBOSITY'] = str(verbosity) if not getattr(args, 'func', None): # http://bugs.python.org/issue16308 parser.print_help() sys.exit(1) log.basicConfig(stream=args.log_file, verbosity=verbosity) paths = util.get_paths() if paths['helpers'] is None or paths['curtin_exe'] is None: raise OSError("Unable to find helpers or 'curtin' exe to add to path") path = os.environ['PATH'].split(':') for cand in (paths['helpers'], os.path.dirname(paths['curtin_exe'])): if cand not in [os.path.abspath(d) for d in path]: path.insert(0, cand) os.environ['PATH'] = ':'.join(path) # set up the reportstack update_configuration(cfg.get('reporting', {})) stack_prefix = (os.environ.get("CURTIN_REPORTSTACK", "") + "/cmd-%s" % args.subcmd) if stack_prefix.startswith("/"): stack_prefix = stack_prefix[1:] os.environ["CURTIN_REPORTSTACK"] = stack_prefix args.reportstack = events.ReportEventStack( name=stack_prefix, reporting_enabled=True, level="DEBUG", description="curtin command %s" % args.subcmd) try: with args.reportstack: ret = args.func(args) sys.exit(ret) except Exception as e: if showtrace: traceback.print_exc() sys.stderr.write("%s\n" % e) sys.exit(3) if __name__ == '__main__': sys.exit(main()) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/mkfs.py000066400000000000000000000030261415350476600173760ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from . import populate_one_subcmd from curtin.block.mkfs import mkfs as run_mkfs from curtin.block.mkfs import valid_fstypes import sys CMD_ARGUMENTS = ( (('devices', {'help': 'create filesystem on the target volume(s) or storage config \ item(s)', 'metavar': 'DEVICE', 'action': 'store', 'nargs': '+'}), (('-f', '--fstype'), {'help': 'filesystem type to use. 
default is ext4', 'choices': sorted(valid_fstypes()), 'default': 'ext4', 'action': 'store'}), (('-l', '--label'), {'help': 'label to use for filesystem', 'action': 'store'}), (('-u', '--uuid'), {'help': 'uuid to use for filesystem', 'action': 'store'}), (('-s', '--strict'), {'help': 'exit if mkfs cannot do exactly what is specified', 'action': 'store_true', 'default': False}), (('-F', '--force'), {'help': 'continue if some data already exists on device', 'action': 'store_true', 'default': False}) ) ) def mkfs(args): for device in args.devices: uuid = run_mkfs(device, args.fstype, strict=args.strict, uuid=args.uuid, label=args.label, force=args.force) print("Created '%s' filesystem in '%s' with uuid '%s' and label '%s'" % (args.fstype, device, uuid, args.label)) sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, mkfs) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/net_meta.py000066400000000000000000000125321415350476600202340ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import argparse import os import sys from curtin import net from curtin.log import LOG import curtin.util as util import curtin.config as config from . import populate_one_subcmd DEVNAME_ALIASES = ['connected', 'configured', 'netboot'] def network_device(value): if value in DEVNAME_ALIASES: return value if (value.startswith('eth') or (value.startswith('en') and len(value) == 3)): return value raise argparse.ArgumentTypeError("%s does not look like a netdev name") def resolve_alias(alias): if alias == "connected": alldevs = net.get_devicelist() return [d for d in alldevs if net.is_physical(d) and net.is_up(d)] elif alias == "configured": alldevs = net.get_devicelist() return [d for d in alldevs if net.is_physical(d) and net.is_up(d) and net.is_connected(d)] elif alias == "netboot": # should read /proc/cmdline here for BOOTIF raise NotImplementedError("netboot alias not implemented") else: raise ValueError("'%s' is not an alias: %s", alias, DEVNAME_ALIASES) def interfaces_basic_dhcp(devices, macs=None): # return network configuration that says to dhcp on provided devices if macs is None: macs = {} for dev in devices: macs[dev] = net.get_interface_mac(dev) config = [] for dev in devices: config.append({ 'type': 'physical', 'name': dev, 'mac_address': macs.get(dev), 'subnets': [{'type': 'dhcp4'}]}) return {'network': {'version': 1, 'config': config}} def interfaces_custom(args): state = util.load_command_environment() cfg = config.load_command_config(args, state) network_config = cfg.get('network', []) if not network_config: raise Exception("network configuration is required by mode '%s' " "but not provided in the config file" % 'custom') return {'network': network_config} def net_meta(args): # curtin net-meta --devices connected dhcp # curtin net-meta --devices configured dhcp # curtin net-meta --devices netboot dhcp # curtin net-meta --devices connected custom # if network-config hook exists in target, # we do not run the builtin if util.run_hook_if_exists(args.target, 'network-config'): sys.exit(0) if args.mode == "disabled": sys.exit(0) state = util.load_command_environment() cfg = config.load_command_config(args, state) if cfg.get("network") is not None: args.mode = "custom" eni = "etc/network/interfaces" if args.mode == "auto": if not args.devices: args.devices = ["connected"] t_eni = None if args.target: t_eni = os.path.sep.join((args.target, eni,)) if not os.path.isfile(t_eni): t_eni = None if t_eni: args.mode 
= "copy" else: args.mode = "dhcp" devices = [] if args.devices: for dev in args.devices: if dev in DEVNAME_ALIASES: devices += resolve_alias(dev) else: devices.append(dev) LOG.debug("net-meta mode is '%s'. devices=%s", args.mode, devices) output_network_config = os.environ.get("OUTPUT_NETWORK_CONFIG", "") if args.mode == "copy": if not args.target: raise argparse.ArgumentTypeError("mode 'copy' requires --target") t_eni = os.path.sep.join((args.target, "etc/network/interfaces",)) with open(t_eni, "r") as fp: content = fp.read() LOG.warn("net-meta mode is 'copy', static network interfaces files" "can be brittle. Copied interfaces: %s", content) target = args.output elif args.mode == "dhcp": target = output_network_config content = config.dump_config(interfaces_basic_dhcp(devices)) elif args.mode == 'custom': target = output_network_config content = config.dump_config(interfaces_custom(args)) else: raise Exception("Unexpected network config mode '%s'." % args.mode) if not target: raise Exception( "No target given for mode = '%s'. Nowhere to write content: %s" % (args.mode, content)) LOG.debug("writing to file %s with network config: %s", target, content) if target == "-": sys.stdout.write(content) else: with open(target, "w") as fp: fp.write(content) sys.exit(0) CMD_ARGUMENTS = ( ((('-D', '--devices'), {'help': 'which devices to operate on', 'action': 'append', 'metavar': 'DEVICE', 'type': network_device}), (('-o', '--output'), {'help': 'file to write to. defaults to env["OUTPUT_INTERFACES"] or "-"', 'metavar': 'IFILE', 'action': 'store', 'default': os.environ.get('OUTPUT_INTERFACES', "-")}), (('-t', '--target'), {'help': 'operate on target. default is env[TARGET_MOUNT_POINT]', 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ('mode', {'help': 'meta-mode to use', 'choices': ['dhcp', 'copy', 'auto', 'custom', 'disabled']}) ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, net_meta) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/pack.py000066400000000000000000000023531415350476600173560ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import sys from curtin import pack from . import populate_one_subcmd CMD_ARGUMENTS = ( ((('-o', '--output'), {'help': 'where to write the archive to', 'action': 'store', 'metavar': 'FILE', 'default': "-", }), (('-a', '--add'), {'help': 'include FILE_PATH in archive at ARCHIVE_PATH', 'action': 'append', 'metavar': 'ARCHIVE_PATH:FILE_PATH', 'default': []}), ('command_args', {'help': 'command to run after extracting', 'nargs': '*'}), ) ) def pack_main(args): if args.output == "-": fdout = sys.stdout else: fdout = open(args.output, "w") delim = ":" addl = [] for tok in args.add: if delim not in tok: raise ValueError("'--add' argument '%s' did not have a '%s'", (tok, delim)) (archpath, filepath) = tok.split(":", 1) addl.append((archpath, filepath),) pack.pack(fdout, command=args.command_args, copy_files=addl) if args.output != "-": fdout.close() sys.exit(0) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, pack_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/schema_validate.py000066400000000000000000000023571415350476600215550ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """List the supported feature names to stdout.""" import sys from .. import storage_config from . 
import populate_one_subcmd CMD_ARGUMENTS = ( (('-s', '--storage'), {'help': 'apply storage config validator to config file', 'action': 'store_true', 'required': True}), (('-c', '--config'), {'help': 'path to configuration file to validate.', 'required': True, 'metavar': 'FILE', 'action': 'store', 'dest': 'schema_cfg'}), ) def schema_validate_storage(confpath): try: storage_config.load_and_validate(confpath) except Exception as e: sys.stderr.write(' ' + str(e) + '\n') return 1 sys.stdout.write(' Valid storage config: %s\n' % confpath) return 0 def schema_validate_main(args): errors = [] if args.storage: sys.stdout.write( 'Validating storage config in %s:\n' % args.schema_cfg) if schema_validate_storage(args.schema_cfg) != 0: errors.append('storage') return len(errors) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, schema_validate_main) parser.description = __doc__ # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/swap.py000066400000000000000000000045231415350476600174130ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys import curtin.swap as swap import curtin.util as util from . import populate_one_subcmd def swap_main(args): # curtin swap [--size=4G] [--target=/] [--fstab=/etc/fstab] [swap] state = util.load_command_environment() if args.target is not None: state['target'] = args.target if args.fstab is not None: state['fstab'] = args.fstab if state['target'] is None: sys.stderr.write("Unable to find target. " "Use --target or set TARGET_MOUNT_POINT\n") sys.exit(2) size = args.size if size is not None and size.lower() == "auto": size = None if size is not None: try: size = util.human2bytes(size) except ValueError as e: sys.stderr.write("%s\n" % e) sys.exit(2) if args.maxsize is not None: args.maxsize = util.human2bytes(args.maxsize) swap.setup_swapfile(target=state['target'], fstab=state['fstab'], swapfile=args.swapfile, size=size, maxsize=args.maxsize, force=args.force) sys.exit(2) CMD_ARGUMENTS = ( ((('-f', '--fstab'), {'help': 'file to write to. defaults to env["OUTPUT_FSTAB"]', 'metavar': 'FSTAB', 'action': 'store', 'default': os.environ.get('OUTPUT_FSTAB')}), (('-t', '--target'), {'help': ('target filesystem root to add swap file to. ' 'default is env[TARGET_MOUNT_POINT]'), 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), (('-F', '--force'), {'help': 'force creating of swapfile even if it may fail (btrfs,xfs)', 'default': False, 'action': 'store_true'}), (('-s', '--size'), {'help': 'size of swap file (eg: 1G, 1500M, 1024K, 100000. def: "auto")', 'default': None, 'action': 'store'}), (('-M', '--maxsize'), {'help': 'maximum size of swap file (assuming "auto")', 'default': None, 'action': 'store'}), ('swapfile', {'help': 'path to swap file under target', 'default': 'swap.img', 'nargs': '?'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, swap_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/system_install.py000066400000000000000000000025511415350476600215120ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys import curtin.util as util from . 
import populate_one_subcmd from curtin.log import LOG from curtin import distro def system_install_pkgs_main(args): # curtin system-install [--target=/] [pkg, [pkg...]] if args.target is None: args.target = "/" exit_code = 0 try: distro.install_packages( pkglist=args.packages, target=args.target, allow_daemons=args.allow_daemons) except util.ProcessExecutionError as e: LOG.warn("system install failed for %s: %s" % (args.packages, e)) exit_code = e.exit_code sys.exit(exit_code) CMD_ARGUMENTS = ( ((('--allow-daemons',), {'help': ('do not disable running of daemons during upgrade.'), 'action': 'store_true', 'default': False}), (('-t', '--target'), {'help': ('target root to upgrade. ' 'default is env[TARGET_MOUNT_POINT]'), 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ('packages', {'help': 'the list of packages to install', 'metavar': 'PACKAGES', 'action': 'store', 'nargs': '+'}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, system_install_pkgs_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/system_upgrade.py000066400000000000000000000022361415350476600214730ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys import curtin.util as util from . import populate_one_subcmd from curtin.log import LOG from curtin import distro def system_upgrade_main(args): # curtin system-upgrade [--target=/] if args.target is None: args.target = "/" exit_code = 0 try: distro.system_upgrade(target=args.target, allow_daemons=args.allow_daemons) except util.ProcessExecutionError as e: LOG.warn("system upgrade failed: %s" % e) exit_code = e.exit_code sys.exit(exit_code) CMD_ARGUMENTS = ( ((('--allow-daemons',), {'help': ('do not disable running of daemons during upgrade.'), 'action': 'store_true', 'default': False}), (('-t', '--target'), {'help': ('target root to upgrade. ' 'default is env[TARGET_MOUNT_POINT]'), 'action': 'store', 'metavar': 'TARGET', 'default': os.environ.get('TARGET_MOUNT_POINT')}), ) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, system_upgrade_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/unmount.py000066400000000000000000000024201415350476600201400ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from curtin.log import LOG from curtin import util from . import populate_one_subcmd import os def unmount_main(args): """ run util.umount(target, recursive=True) """ if args.target is None: msg = "Missing target. Please provide target path parameter" raise ValueError(msg) if not os.path.exists(args.target): msg = "Cannot unmount target path %s: it does not exist" % args.target raise util.FileMissingError(msg) LOG.info("Unmounting devices from target path: %s", args.target) recursive_mode = not args.disable_recursive_mounts util.do_umount(args.target, recursive=recursive_mode) CMD_ARGUMENTS = ( (('-t', '--target'), {'help': ('Path to mountpoint to be unmounted.' 
'The default is env variable "TARGET_MOUNT_POINT"'), 'metavar': 'TARGET', 'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT')}), (('-d', '--disable-recursive-mounts'), {'help': 'Disable unmounting recursively under target', 'default': False, 'action': 'store_true'}), ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, unmount_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/commands/version.py000066400000000000000000000006311415350476600201220ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import sys from .. import version from . import populate_one_subcmd def version_main(args): sys.stdout.write(version.version_string() + "\n") sys.exit(0) CMD_ARGUMENTS = ( (tuple()) ) def POPULATE_SUBCMD(parser): populate_one_subcmd(parser, CMD_ARGUMENTS, version_main) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/config.py000066400000000000000000000065701415350476600161110ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import yaml import json ARCHIVE_HEADER = "#curtin-config-archive" ARCHIVE_TYPE = "text/curtin-config-archive" CONFIG_HEADER = "#curtin-config" CONFIG_TYPE = "text/curtin-config" try: # python2 _STRING_TYPES = (str, basestring, unicode) except NameError: # python3 _STRING_TYPES = (str,) def merge_config_fp(cfgin, fp): merge_config_str(cfgin, fp.read()) def merge_config_str(cfgin, cfgstr): cfg2 = yaml.safe_load(cfgstr) if not isinstance(cfg2, dict): raise TypeError("Failed reading config. not a dictionary: %s" % cfgstr) merge_config(cfgin, cfg2) def merge_config(cfg, cfg2): # update cfg by merging cfg2 over the top for k, v in cfg2.items(): if isinstance(v, dict) and isinstance(cfg.get(k, None), dict): merge_config(cfg[k], v) else: cfg[k] = v def merge_cmdarg(cfg, cmdarg, delim="/"): merge_config(cfg, cmdarg2cfg(cmdarg, delim)) def cmdarg2cfg(cmdarg, delim="/"): if '=' not in cmdarg: raise ValueError('no "=" in "%s"' % cmdarg) key, val = cmdarg.split("=", 1) cfg = {} cur = cfg is_json = False if key.startswith("json:"): is_json = True key = key[5:] items = key.split(delim) for item in items[:-1]: cur[item] = {} cur = cur[item] if is_json: try: val = json.loads(val) except (ValueError, TypeError): raise ValueError("setting of key '%s' had invalid json: %s" % (key, val)) # this would occur if 'json:={"topkey": "topval"}' if items[-1] == "": cfg = val else: cur[items[-1]] = val return cfg def load_config_archive(content): archive = yaml.safe_load(content) config = {} for part in archive: if isinstance(part, (str,)): if part.startswith(ARCHIVE_HEADER): merge_config(config, load_config_archive(part)) elif part.startswith(CONFIG_HEADER): merge_config_str(config, part) elif isinstance(part, dict) and isinstance(part.get('content'), str): payload = part.get('content') if (part.get('type') == ARCHIVE_TYPE or payload.startswith(ARCHIVE_HEADER)): merge_config(config, load_config_archive(payload)) elif (part.get('type') == CONFIG_TYPE or payload.startswith(CONFIG_HEADER)): merge_config_str(config, payload) return config def load_config(cfg_file): with open(cfg_file, "r") as fp: content = fp.read() if not content.startswith(ARCHIVE_HEADER): return yaml.safe_load(content) else: return load_config_archive(content) def load_command_config(args, state): if hasattr(args, 'config') and args.config: return args.config else: # state 'config' points to a file with fully rendered config cfg_file = state.get('config') if not cfg_file: cfg 
= {} else: cfg = load_config(cfg_file) return cfg def dump_config(config): return yaml.dump(config, default_flow_style=False, indent=2) def value_as_boolean(value): false_values = (False, None, 0, '0', 'False', 'false', 'None', 'none', '') return value not in false_values # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/deps/000077500000000000000000000000001415350476600152155ustar00rootroot00000000000000curtin-21.3/curtin/deps/__init__.py000066400000000000000000000120071415350476600173260ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import sys from curtin.util import ( ProcessExecutionError, is_uefi_bootable, subp, which, ) from curtin.distro import ( get_architecture, install_packages, lsb_release, ) REQUIRED_IMPORTS = [ # import string to execute, python2 package, python3 package ('import yaml', 'python-yaml', 'python3-yaml'), ('import pyudev', 'python-pyudev', 'python3-pyudev'), ] REQUIRED_EXECUTABLES = [ # executable in PATH, package ('file', 'file'), ('lvcreate', 'lvm2'), ('mdadm', 'mdadm'), ('mkfs.vfat', 'dosfstools'), ('mkfs.btrfs', '^btrfs-(progs|tools)$'), ('mkfs.ext4', 'e2fsprogs'), ('mkfs.xfs', 'xfsprogs'), ('partprobe', 'parted'), ('sgdisk', 'gdisk'), ('udevadm', 'udev'), ('make-bcache', 'bcache-tools'), ('iscsiadm', 'open-iscsi'), ] REQUIRED_KERNEL_MODULES = [ # kmod name ] if lsb_release()['codename'] == "precise": REQUIRED_IMPORTS.append( ('import oauth.oauth', 'python-oauth', None),) else: REQUIRED_IMPORTS.append( ('import oauthlib.oauth1', 'python-oauthlib', 'python3-oauthlib'),) # zfs is > trusty only if not lsb_release()['codename'] in ["precise", "trusty"]: REQUIRED_EXECUTABLES.append(('zfs', 'zfsutils-linux')) REQUIRED_KERNEL_MODULES.append('zfs') if not is_uefi_bootable() and 'arm' in get_architecture(): REQUIRED_EXECUTABLES.append(('flash-kernel', 'flash-kernel')) class MissingDeps(Exception): def __init__(self, message, deps): self.message = message if isinstance(deps, str) or deps is None: deps = [deps] self.deps = [d for d in deps if d is not None] self.fatal = None in deps def __str__(self): if self.fatal: if not len(self.deps): return self.message + " Unresolvable." return (self.message + " Unresolvable. Partially resolvable with packages: %s" % ' '.join(self.deps)) else: return self.message + " Install packages: %s" % ' '.join(self.deps) def check_import(imports, py2pkgs, py3pkgs, message=None): import_group = imports if isinstance(import_group, str): import_group = [import_group] for istr in import_group: try: exec(istr) return except ImportError: pass if not message: if isinstance(imports, str): message = "Failed '%s'." % imports else: message = "Unable to do any of %s." % import_group if sys.version_info[0] == 2: pkgs = py2pkgs else: pkgs = py3pkgs raise MissingDeps(message, pkgs) def check_executable(cmdname, pkg): if not which(cmdname): raise MissingDeps("Missing program '%s'." 
% cmdname, pkg) def check_executables(executables=None): if executables is None: executables = REQUIRED_EXECUTABLES mdeps = [] for exe, pkg in executables: try: check_executable(exe, pkg) except MissingDeps as e: mdeps.append(e) return mdeps def check_imports(imports=None): if imports is None: imports = REQUIRED_IMPORTS mdeps = [] for import_str, py2pkg, py3pkg in imports: try: check_import(import_str, py2pkg, py3pkg) except MissingDeps as e: mdeps.append(e) return mdeps def check_kernel_modules(modules=None): if modules is None: modules = REQUIRED_KERNEL_MODULES # if we're missing any modules, install the full # linux-image package for this environment for kmod in modules: try: subp(['modinfo', '--filename', kmod], capture=True) except ProcessExecutionError: kernel_pkg = 'linux-image-%s' % os.uname()[2] return [MissingDeps('missing kernel module %s' % kmod, kernel_pkg)] return [] def find_missing_deps(): return check_executables() + check_imports() + check_kernel_modules() def install_deps(verbosity=False, dry_run=False, allow_daemons=True): errors = find_missing_deps() if len(errors) == 0: if verbosity: sys.stderr.write("No missing dependencies\n") return 0 missing_pkgs = [] for e in errors: missing_pkgs += e.deps deps_string = ' '.join(sorted(missing_pkgs)) if dry_run: sys.stderr.write("Missing dependencies: %s\n" % deps_string) return 0 if os.geteuid() != 0: sys.stderr.write("Missing dependencies: %s\n" % deps_string) sys.stderr.write("Package installation is not possible as non-root.\n") return 2 if verbosity: sys.stderr.write("Installing %s\n" % deps_string) ret = 0 try: install_packages(missing_pkgs, allow_daemons=allow_daemons, opts=["--no-install-recommends"]) except ProcessExecutionError as e: sys.stderr.write("%s\n" % e) ret = e.exit_code return ret # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/deps/check.py000066400000000000000000000030341415350476600166440ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ The intent point of this module is that it can be called and exit success or fail, indicating that deps should be there. python -m curtin.deps.check [-v] """ import argparse import sys from . import find_missing_deps def debug(level, msg_level, msg): if level >= msg_level: if msg[-1] != "\n": msg += "\n" sys.stderr.write(msg) def main(): parser = argparse.ArgumentParser( prog='curtin-check-deps', description='check dependencies for curtin.') parser.add_argument('-v', '--verbose', action='count', default=0, dest='verbosity') args, extra = parser.parse_known_args(sys.argv[1:]) errors = find_missing_deps() if len(errors) == 0: # exit 0 means all dependencies are available. debug(args.verbosity, 1, "No missing dependencies") sys.exit(0) missing_pkgs = [] fatal = [] for e in errors: if e.fatal: fatal.append(e) debug(args.verbosity, 2, str(e)) missing_pkgs += e.deps if len(fatal): for e in fatal: debug(args.verbosity, 1, str(e)) sys.exit(1) debug(args.verbosity, 1, "Fix with:\n apt-get -qy install %s\n" % ' '.join(sorted(missing_pkgs))) # we exit higher with less deps needed. # exiting 99 means just 1 dep needed. sys.exit(100-len(missing_pkgs)) if __name__ == '__main__': main() # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/deps/install.py000066400000000000000000000016501415350476600172370ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
""" The intent of this module is that it can be called to install deps python -m curtin.deps.install [-v] """ import argparse import sys from . import install_deps def main(): parser = argparse.ArgumentParser( prog='curtin-install-deps', description='install dependencies for curtin.') parser.add_argument('-v', '--verbose', action='count', default=0, dest='verbosity') parser.add_argument('--dry-run', action='store_true', default=False) parser.add_argument('--no-allow-daemons', action='store_false', default=True) args = parser.parse_args(sys.argv[1:]) ret = install_deps(verbosity=args.verbosity, dry_run=args.dry_run, allow_daemons=True) sys.exit(ret) if __name__ == '__main__': main() # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/distro.py000066400000000000000000000460751415350476600161540ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import glob from collections import namedtuple import os import re import shutil import tempfile import textwrap from .paths import target_path from .util import ( ChrootableTarget, find_newer, load_file, load_shell_content, ProcessExecutionError, set_unexecutable, string_types, subp, which ) from .log import LOG DistroInfo = namedtuple('DistroInfo', ('variant', 'family')) DISTRO_NAMES = ['arch', 'centos', 'debian', 'fedora', 'freebsd', 'gentoo', 'opensuse', 'redhat', 'rhel', 'sles', 'suse', 'ubuntu'] # python2.7 lacks PEP 435, so we must make use an alternative for py2.7/3.x # https://stackoverflow.com/questions/36932/how-can-i-represent-an-enum-in-python def distro_enum(*distros): return namedtuple('Distros', distros)(*distros) DISTROS = distro_enum(*DISTRO_NAMES) OS_FAMILIES = { DISTROS.debian: [DISTROS.debian, DISTROS.ubuntu], DISTROS.redhat: [DISTROS.centos, DISTROS.fedora, DISTROS.redhat, DISTROS.rhel], DISTROS.gentoo: [DISTROS.gentoo], DISTROS.freebsd: [DISTROS.freebsd], DISTROS.suse: [DISTROS.opensuse, DISTROS.sles, DISTROS.suse], DISTROS.arch: [DISTROS.arch], } # invert the mapping for faster lookup of variants DISTRO_TO_OSFAMILY = ( {variant: family for family, variants in OS_FAMILIES.items() for variant in variants}) _LSB_RELEASE = {} def name_to_distro(distname): try: return DISTROS[DISTROS.index(distname)] except (IndexError, AttributeError): LOG.error('Unknown distro name: %s', distname) def lsb_release(target=None): if target_path(target) != "/": # do not use or update cache if target is provided return _lsb_release(target) global _LSB_RELEASE if not _LSB_RELEASE: data = _lsb_release() _LSB_RELEASE.update(data) return _LSB_RELEASE def os_release(target=None): data = {} os_release = target_path(target, 'etc/os-release') if os.path.exists(os_release): data = load_shell_content(load_file(os_release), add_empty=False, empty_val=None) if not data: for relfile in [target_path(target, rel) for rel in ['etc/centos-release', 'etc/redhat-release']]: data = _parse_redhat_release(release_file=relfile, target=target) if data: break return data def _parse_redhat_release(release_file=None, target=None): """Return a dictionary of distro info fields from /etc/redhat-release. 
Dict keys will align with /etc/os-release keys: ID, VERSION_ID, VERSION_CODENAME """ if not release_file: release_file = target_path('etc/redhat-release') if not os.path.exists(release_file): return {} redhat_release = load_file(release_file) redhat_regex = ( r'(?P.+) release (?P[\d\.]+) ' r'\((?P[^)]+)\)') match = re.match(redhat_regex, redhat_release) if match: group = match.groupdict() group['name'] = group['name'].lower().partition(' linux')[0] if group['name'] == 'red hat enterprise': group['name'] = 'redhat' return {'ID': group['name'], 'VERSION_ID': group['version'], 'VERSION_CODENAME': group['codename']} return {} def get_distroinfo(target=None): variant_os_release = os_release(target=target) variant_name = variant_os_release['ID'] try: variant = name_to_distro(variant_name) except ValueError: for variant_name in variant_os_release["ID_LIKE"].split(): try: variant = name_to_distro(variant_name) break except ValueError: pass else: raise ValueError("Unknown distro: %s" % variant_os_release['ID']) family = DISTRO_TO_OSFAMILY.get(variant) return DistroInfo(variant, family) def get_distro(target=None): distinfo = get_distroinfo(target=target) return distinfo.variant def get_osfamily(target=None): distinfo = get_distroinfo(target=target) return distinfo.family def is_ubuntu_core(target=None): """Check if any Ubuntu-Core specific directory is present at target""" return any([is_ubuntu_core_16(target), is_ubuntu_core_18(target), is_ubuntu_core_20(target)]) def is_ubuntu_core_16(target=None): """Check if Ubuntu-Core 16 specific directory is present at target""" return os.path.exists(target_path(target, 'system-data/var/lib/snapd')) def is_ubuntu_core_18(target=None): """Check if Ubuntu-Core 18 specific directory is present at target""" return is_ubuntu_core_16(target) def is_ubuntu_core_20(target=None): """Check if Ubuntu-Core 20 specific directory is present at target""" return os.path.exists(target_path(target, 'snaps')) def is_centos(target=None): """Check if CentOS specific file is present at target""" return os.path.exists(target_path(target, 'etc/centos-release')) def is_rhel(target=None): """Check if RHEL specific file is present at target""" return os.path.exists(target_path(target, 'etc/redhat-release')) def _lsb_release(target=None): fmap = {'Codename': 'codename', 'Description': 'description', 'Distributor ID': 'id', 'Release': 'release'} data = {} try: out, _ = subp(['lsb_release', '--all'], capture=True, target=target) for line in out.splitlines(): fname, _, val = line.partition(":") if fname in fmap: data[fmap[fname]] = val.strip() missing = [k for k in fmap.values() if k not in data] if len(missing): LOG.warn("Missing fields in lsb_release --all output: %s", ','.join(missing)) except ProcessExecutionError as err: LOG.warn("Unable to get lsb_release --all: %s", err) data = {v: "UNAVAILABLE" for v in fmap.values()} return data def apt_update(target=None, env=None, force=False, comment=None, retries=None): marker = "tmp/curtin.aptupdate" if env is None: env = os.environ.copy() if retries is None: # by default run apt-update up to 3 times to allow # for transient failures retries = (1, 2, 3) if comment is None: comment = "no comment provided" if comment.endswith("\n"): comment = comment[:-1] marker = target_path(target, marker) # if marker exists, check if there are files that would make it obsolete listfiles = [target_path(target, "/etc/apt/sources.list")] listfiles += glob.glob( target_path(target, "etc/apt/sources.list.d/*.list")) if os.path.exists(marker) and not force: if 
len(find_newer(marker, listfiles)) == 0: return restore_perms = [] abs_tmpdir = tempfile.mkdtemp(dir=target_path(target, "/tmp")) try: abs_slist = abs_tmpdir + "/sources.list" abs_slistd = abs_tmpdir + "/sources.list.d" ch_tmpdir = "/tmp/" + os.path.basename(abs_tmpdir) ch_slist = ch_tmpdir + "/sources.list" ch_slistd = ch_tmpdir + "/sources.list.d" # this file gets executed on apt-get update sometimes. (LP: #1527710) motd_update = target_path( target, "/usr/lib/update-notifier/update-motd-updates-available") pmode = set_unexecutable(motd_update) if pmode is not None: restore_perms.append((motd_update, pmode),) # create tmpdir/sources.list with all lines other than deb-src # avoid apt complaining by using existing and empty dir for sourceparts os.mkdir(abs_slistd) with open(abs_slist, "w") as sfp: for sfile in listfiles: with open(sfile, "r") as fp: contents = fp.read() for line in contents.splitlines(): line = line.lstrip() if not line.startswith("deb-src"): sfp.write(line + "\n") update_cmd = [ 'apt-get', '--quiet', '--option=Acquire::Languages=none', '--option=Dir::Etc::sourcelist=%s' % ch_slist, '--option=Dir::Etc::sourceparts=%s' % ch_slistd, 'update'] # do not using 'run_apt_command' so we can use 'retries' to subp with ChrootableTarget(target, allow_daemons=True) as inchroot: inchroot.subp(update_cmd, env=env, retries=retries) finally: for fname, perms in restore_perms: os.chmod(fname, perms) if abs_tmpdir: shutil.rmtree(abs_tmpdir) with open(marker, "w") as fp: fp.write(comment + "\n") def run_apt_command(mode, args=None, opts=None, env=None, target=None, execute=True, allow_daemons=False, clean=True): defopts = ['--quiet', '--assume-yes', '--option=Dpkg::options::=--force-unsafe-io', '--option=Dpkg::Options::=--force-confold'] if args is None: args = [] if opts is None: opts = [] if env is None: env = os.environ.copy() env['DEBIAN_FRONTEND'] = 'noninteractive' if which('eatmydata', target=target): emd = ['eatmydata'] else: emd = [] cmd = emd + ['apt-get'] + defopts + opts + [mode] + args if not execute: return env, cmd apt_update(target, env=env, comment=' '.join(cmd)) with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot: cmd_rv = inchroot.subp(cmd, env=env) if clean and mode in ['dist-upgrade', 'install', 'upgrade']: inchroot.subp(['apt-get', 'clean']) return cmd_rv def run_yum_command(mode, args=None, opts=None, env=None, target=None, execute=True, allow_daemons=False): defopts = ['--assumeyes', '--quiet'] if args is None: args = [] if opts is None: opts = [] # dnf is a drop in replacement for yum. On newer RH based systems yum # is just a sym link to dnf. if which('dnf', target=target): cmd = ['dnf'] else: cmd = ['yum'] cmd += defopts + opts + [mode] + args if not execute: return env, cmd if mode in ["install", "update", "upgrade"]: return yum_install(mode, args, opts=opts, env=env, target=target, allow_daemons=allow_daemons) with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot: return inchroot.subp(cmd, env=env) def yum_install(mode, packages=None, opts=None, env=None, target=None, allow_daemons=False): defopts = ['--assumeyes', '--quiet'] if packages is None: packages = [] if opts is None: opts = [] if mode not in ['install', 'update', 'upgrade']: raise ValueError( 'Unsupported mode "%s" for yum package install/upgrade' % mode) # dnf is a drop in replacement for yum. On newer RH based systems yum # is just a sym link to dnf. 
if which('dnf', target=target): cmd = ['dnf'] else: cmd = ['yum'] # download first, then install/upgrade from cache cmd += defopts + opts + [mode] dl_opts = ['--downloadonly', '--setopt=keepcache=1'] inst_opts = ['--cacheonly'] # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget with ChrootableTarget(target, allow_daemons=allow_daemons) as inchroot: inchroot.subp(cmd + dl_opts + packages, env=env, retries=[1] * 10) return inchroot.subp(cmd + inst_opts + packages, env=env) def rpm_get_dist_id(target=None): """Use rpm command to extract the '%rhel' distro macro which returns the major os version id (6, 7, 8). This works for centos or rhel """ # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget with ChrootableTarget(target) as in_chroot: dist, _ = in_chroot.subp(['rpm', '-E', '%rhel'], capture=True) return dist.rstrip() def system_upgrade(opts=None, target=None, env=None, allow_daemons=False, osfamily=None): LOG.debug("Upgrading system in %s", target) distro_cfg = { DISTROS.debian: {'function': run_apt_command, 'subcommands': ('dist-upgrade', 'autoremove')}, DISTROS.redhat: {'function': run_yum_command, 'subcommands': ('upgrade',)}, } if osfamily not in distro_cfg: raise ValueError('Distro "%s" does not have system_upgrade support', osfamily) for mode in distro_cfg[osfamily]['subcommands']: ret = distro_cfg[osfamily]['function']( mode, opts=opts, target=target, env=env, allow_daemons=allow_daemons) return ret def install_packages(pkglist, osfamily=None, opts=None, target=None, env=None, allow_daemons=False): if isinstance(pkglist, str): pkglist = [pkglist] if not osfamily: osfamily = get_osfamily(target=target) installer_map = { DISTROS.debian: run_apt_command, DISTROS.redhat: run_yum_command, } install_cmd = installer_map.get(osfamily) if not install_cmd: raise ValueError('No packge install command for distro: %s' % osfamily) return install_cmd('install', args=pkglist, opts=opts, target=target, env=env, allow_daemons=allow_daemons) def has_pkg_available(pkg, target=None, osfamily=None): if not osfamily: osfamily = get_osfamily(target=target) if osfamily not in [DISTROS.debian, DISTROS.redhat]: raise ValueError('has_pkg_available: unsupported distro family: %s', osfamily) if osfamily == DISTROS.debian: out, _ = subp(['apt-cache', 'pkgnames'], capture=True, target=target) for item in out.splitlines(): if pkg == item.strip(): return True return False if osfamily == DISTROS.redhat: out, _ = run_yum_command('list', opts=['--cacheonly']) for item in out.splitlines(): if item.lower().startswith(pkg.lower()): return True return False def get_installed_packages(target=None): out = None if which('dpkg-query', target=target): (out, _) = subp(['dpkg-query', '--list'], target=target, capture=True) elif which('rpm', target=target): # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget with ChrootableTarget(target) as in_chroot: (out, _) = in_chroot.subp(['rpm', '-qa', '--queryformat', 'ii %{NAME} %{VERSION}-%{RELEASE}\n'], target=target, capture=True) if not out: raise ValueError('No package query tool') pkgs_inst = set() for line in out.splitlines(): try: (state, pkg, other) = line.split(None, 2) except ValueError: continue if state.startswith("hi") or state.startswith("ii"): pkgs_inst.add(re.sub(":.*", "", pkg)) return pkgs_inst def has_pkg_installed(pkg, target=None): try: out, _ = subp(['dpkg-query', '--show', '--showformat', '${db:Status-Abbrev}', pkg], capture=True, target=target) return out.rstrip() == "ii" except ProcessExecutionError: return False def 
parse_dpkg_version(raw, name=None, semx=None): """Parse a dpkg version string into various parts and calcualate a numerical value of the version for use in comparing package versions Native packages (without a '-'), will have the package version treated as the upstream version. returns a dictionary with fields: 'epoch' 'major' (int), 'minor' (int), 'micro' (int), 'semantic_version' (int), 'extra' (string), 'raw' (string), 'upstream' (string), 'name' (present only if name is not None) """ if not isinstance(raw, string_types): raise TypeError( "Invalid type %s for parse_dpkg_version" % raw.__class__) if semx is None: semx = (10000, 100, 1) raw_offset = 0 if ':' in raw: epoch, _, upstream = raw.partition(':') raw_offset = len(epoch) + 1 else: epoch = 0 upstream = raw if "-" in raw[raw_offset:]: upstream = raw[raw_offset:].rsplit('-', 1)[0] else: # this is a native package, package version treated as upstream. upstream = raw[raw_offset:] match = re.search(r'[^0-9.]', upstream) if match: extra = upstream[match.start():] upstream_base = upstream[:match.start()] else: upstream_base = upstream extra = None toks = upstream_base.split(".", 3) if len(toks) == 4: major, minor, micro, extra = toks elif len(toks) == 3: major, minor, micro = toks elif len(toks) == 2: major, minor, micro = (toks[0], toks[1], 0) elif len(toks) == 1: major, minor, micro = (toks[0], 0, 0) version = { 'epoch': int(epoch), 'major': int(major), 'minor': int(minor), 'micro': int(micro), 'extra': extra, 'raw': raw, 'upstream': upstream, } if name: version['name'] = name if semx: try: version['semantic_version'] = int( int(major) * semx[0] + int(minor) * semx[1] + int(micro) * semx[2]) except (ValueError, IndexError): version['semantic_version'] = None return version def get_package_version(pkg, target=None, semx=None): """Use dpkg-query to extract package pkg's version string and parse the version string into a dictionary """ try: out, _ = subp(['dpkg-query', '--show', '--showformat', '${Version}', pkg], capture=True, target=target) raw = out.rstrip() return parse_dpkg_version(raw, name=pkg, semx=semx) except ProcessExecutionError: return None def fstab_header(): return textwrap.dedent("""\ # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # """) def dpkg_get_architecture(target=None): out, _ = subp(['dpkg', '--print-architecture'], capture=True, target=target) return out.strip() def rpm_get_architecture(target=None): # rpm requires /dev /sys and /proc be mounted, use ChrootableTarget with ChrootableTarget(target) as in_chroot: out, _ = in_chroot.subp(['rpm', '-E', '%_arch'], capture=True) return out.strip() def get_architecture(target=None, osfamily=None): if not osfamily: osfamily = get_osfamily(target=target) if osfamily == DISTROS.debian: return dpkg_get_architecture(target=target) if osfamily == DISTROS.redhat: return rpm_get_architecture(target=target) raise ValueError("Unhandled osfamily=%s" % osfamily) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/futil.py000066400000000000000000000057751415350476600157750ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
import grp import pwd import os import warnings from .util import write_file from .paths import target_path from .log import LOG def chownbyid(fname, uid=None, gid=None): if uid in [None, -1] and gid in [None, -1]: return os.chown(fname, uid, gid) def decode_perms(perm, default=0o644): try: if perm is None: return default if isinstance(perm, (int, float)): # Just 'downcast' it (if a float) return int(perm) else: # Force to string and try octal conversion return int(str(perm), 8) except (TypeError, ValueError): return default def chownbyname(fname, user=None, group=None): uid = -1 gid = -1 try: if user: uid = pwd.getpwnam(user).pw_uid if group: gid = grp.getgrnam(group).gr_gid except KeyError as e: raise OSError("Unknown user or group: %s" % (e)) chownbyid(fname, uid, gid) def extract_usergroup(ug_pair): if not ug_pair: return (None, None) ug_parted = ug_pair.split(':', 1) u = ug_parted[0].strip() if len(ug_parted) == 2: g = ug_parted[1].strip() else: g = None if not u or u == "-1" or u.lower() == "none": u = None if not g or g == "-1" or g.lower() == "none": g = None return (u, g) def write_finfo(path, content, owner="-1:-1", perms="0644"): (u, g) = extract_usergroup(owner) omode = "w" if isinstance(content, bytes): omode = "wb" write_file(path, content, mode=decode_perms(perms), omode=omode) chownbyname(path, u, g) def write_files(files, base_dir=None): """Write files described in the dictionary 'files' paths are assumed under 'base_dir', which will default to '/'. A trailing '/' will be applied if not present. files is a dictionary where each entry has: path: /file1 content: (bytes or string) permissions: (optional, default=0644) owner: (optional, default -1:-1): string of 'uid:gid'.""" for (key, info) in files.items(): if not info.get('path'): LOG.warn("Warning, write_files[%s] had no 'path' entry", key) continue write_finfo(path=target_path(base_dir, info['path']), content=info.get('content', ''), owner=info.get('owner', "-1:-1"), perms=info.get('permissions', info.get('perms', "0644"))) def _legacy_write_files(cfg, base_dir=None): """Backwards compatibility for curthooks.write_files (LP: #1731709) It needs to work like: curthooks.write_files(cfg, target) cfg is a 'cfg' dictionary with a 'write_files' entry in it. """ warnings.warn( "write_files use from curtin.util is deprecated. " "Please use curtin.futil.write_files.", DeprecationWarning) return write_files(cfg.get('write_files', {}), base_dir=base_dir) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/gpg.py000066400000000000000000000037071415350476600154200ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
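# Illustrative sketch of the write_files() helper defined above in
# curtin/futil.py; the paths, content and target directory are made-up
# examples:
#
#     from curtin.futil import write_files
#     write_files({
#         'motd': {'path': '/etc/motd',
#                  'content': 'installed by curtin\n',
#                  'permissions': '0644',
#                  'owner': 'root:root'},
#     }, base_dir='/target')
#
# Each entry needs a 'path'; 'content', 'permissions' and 'owner' fall back
# to '', 0644 and -1:-1 respectively, as described in the docstring above.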
""" gpg.py gpg related utilities to get raw keys data by their id """ from curtin import util from .log import LOG def export_armour(key): """Export gpg key, armoured key gets returned""" try: (armour, _) = util.subp(["gpg", "--export", "--armour", key], capture=True) except util.ProcessExecutionError as error: # debug, since it happens for any key not on the system initially LOG.debug('Failed to export armoured key "%s": %s', key, error) armour = None return armour def recv_key(key, keyserver, retries=None): """Receive gpg key from the specified keyserver""" LOG.debug('Receive gpg key "%s"', key) try: util.subp(["gpg", "--keyserver", keyserver, "--recv", key], capture=True, retries=retries) except util.ProcessExecutionError as error: raise ValueError(('Failed to import key "%s" ' 'from server "%s" - error %s') % (key, keyserver, error)) def delete_key(key): """Delete the specified key from the local gpg ring""" try: util.subp(["gpg", "--batch", "--yes", "--delete-keys", key], capture=True) except util.ProcessExecutionError as error: LOG.warn('Failed delete key "%s": %s', key, error) def getkeybyid(keyid, keyserver='keyserver.ubuntu.com', retries=None): """get gpg keyid from keyserver""" armour = export_armour(keyid) if not armour: try: recv_key(keyid, keyserver=keyserver, retries=retries) armour = export_armour(keyid) except ValueError: LOG.exception('Failed to obtain gpg key %s', keyid) raise finally: # delete just imported key to leave environment as it was before delete_key(keyid) return armour # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/log.py000066400000000000000000000050741415350476600154230ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import logging import time from functools import wraps # Logging items for easy access getLogger = logging.getLogger CRITICAL = logging.CRITICAL FATAL = logging.FATAL ERROR = logging.ERROR WARNING = logging.WARNING WARN = logging.WARN INFO = logging.INFO DEBUG = logging.DEBUG NOTSET = logging.NOTSET class NullHandler(logging.Handler): def emit(self, record): pass def basicConfig(**kwargs): # basically like logging.basicConfig but only output for our logger if kwargs.get('filename'): handler = logging.FileHandler(filename=kwargs['filename'], mode=kwargs.get('filemode', 'a')) elif kwargs.get('stream'): handler = logging.StreamHandler(stream=kwargs['stream']) else: handler = NullHandler() if 'verbosity' in kwargs: level = ((logging.ERROR, logging.INFO, logging.DEBUG) [min(kwargs['verbosity'], 2)]) else: level = kwargs.get('level', logging.NOTSET) handler.setFormatter(logging.Formatter(fmt=kwargs.get('format'), datefmt=kwargs.get('datefmt'))) handler.setLevel(level) logging.getLogger().setLevel(level) logger = _getLogger() for h in list(logger.handlers): logger.removeHandler(h) logger.setLevel(level) logger.addHandler(handler) def _getLogger(name='curtin'): return logging.getLogger(name) if not logging.getLogger().handlers: logging.getLogger().addHandler(NullHandler()) def _repr_call(name, *args, **kwargs): return "%s(%s)" % ( name, ', '.join([str(repr(a)) for a in args] + ["%s=%s" % (k, repr(v)) for k, v in kwargs.items()])) def log_call(func, *args, **kwargs): return log_time( "TIMED %s: " % _repr_call(func.__name__, *args, **kwargs), func, *args, **kwargs) def log_time(msg, func, *args, **kwargs): start = time.time() try: return func(*args, **kwargs) finally: LOG.debug(msg + "%.3f", (time.time() - start)) def logged_call(): def decorator(func): @wraps(func) def wrapper(*args, 
**kwargs): return log_call(func, *args, **kwargs) return wrapper return decorator def logged_time(msg): def decorator(func): @wraps(func) def wrapper(*args, **kwargs): return log_time("TIMED %s: " % msg, func, *args, **kwargs) return wrapper return decorator LOG = _getLogger() # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/net/000077500000000000000000000000001415350476600150505ustar00rootroot00000000000000curtin-21.3/curtin/net/__init__.py000066400000000000000000000476701415350476600171770ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import errno import glob import os import re from curtin.log import LOG from curtin.udev import generate_udev_rule import curtin.util as util import curtin.config as config from . import network_state SYS_CLASS_NET = "/sys/class/net/" NET_CONFIG_OPTIONS = [ "address", "netmask", "broadcast", "network", "metric", "gateway", "pointtopoint", "media", "mtu", "hostname", "leasehours", "leasetime", "vendor", "client", "bootfile", "server", "hwaddr", "provider", "frame", "netnum", "endpoint", "local", "ttl", ] NET_CONFIG_COMMANDS = [ "pre-up", "up", "post-up", "down", "pre-down", "post-down", ] NET_CONFIG_BRIDGE_OPTIONS = [ "bridge_ageing", "bridge_bridgeprio", "bridge_fd", "bridge_gcinit", "bridge_hello", "bridge_maxage", "bridge_maxwait", "bridge_stp", ] def sys_dev_path(devname, path=""): return SYS_CLASS_NET + devname + "/" + path def read_sys_net(devname, path, translate=None, enoent=None, keyerror=None): try: contents = "" with open(sys_dev_path(devname, path), "r") as fp: contents = fp.read().strip() if translate is None: return contents try: return translate.get(contents) except KeyError: LOG.debug("found unexpected value '%s' in '%s/%s'", contents, devname, path) if keyerror is not None: return keyerror raise except OSError as e: if e.errno == errno.ENOENT and enoent is not None: return enoent raise def is_up(devname): # The linux kernel says to consider devices in 'unknown' # operstate as up for the purposes of network configuration. See # Documentation/networking/operstates.txt in the kernel source. translate = {'up': True, 'unknown': True, 'down': False} return read_sys_net(devname, "operstate", enoent=False, keyerror=False, translate=translate) def is_wireless(devname): return os.path.exists(sys_dev_path(devname, "wireless")) def is_connected(devname): # is_connected isn't really as simple as that. 2 is # 'physically connected'. 3 is 'not connected'. but a wlan interface will # always show 3. try: iflink = read_sys_net(devname, "iflink", enoent=False) if iflink == "2": return True if not is_wireless(devname): return False LOG.debug("'%s' is wireless, basing 'connected' on carrier", devname) return read_sys_net(devname, "carrier", enoent=False, keyerror=False, translate={'0': False, '1': True}) except IOError as e: if e.errno == errno.EINVAL: return False raise def is_physical(devname): return os.path.exists(sys_dev_path(devname, "device")) def is_present(devname): return os.path.exists(sys_dev_path(devname)) def get_devicelist(): return os.listdir(SYS_CLASS_NET) class ParserError(Exception): """Raised when parser has issue parsing the interfaces file.""" def parse_deb_config_data(ifaces, contents, src_dir, src_path): """Parses the file contents, placing result into ifaces. '_source_path' is added to every dictionary entry to define which file the configration information came from. 
:param ifaces: interface dictionary :param contents: contents of interfaces file :param src_dir: directory interfaces file was located :param src_path: file path the `contents` was read """ currif = None for line in contents.splitlines(): line = line.strip() if line.startswith('#'): continue split = line.split(' ') option = split[0] if option == "source-directory": parsed_src_dir = split[1] if not parsed_src_dir.startswith("/"): parsed_src_dir = os.path.join(src_dir, parsed_src_dir) for expanded_path in glob.glob(parsed_src_dir): dir_contents = os.listdir(expanded_path) dir_contents = [ os.path.join(expanded_path, path) for path in dir_contents if (os.path.isfile(os.path.join(expanded_path, path)) and re.match("^[a-zA-Z0-9_-]+$", path) is not None) ] for entry in dir_contents: with open(entry, "r") as fp: src_data = fp.read().strip() abs_entry = os.path.abspath(entry) parse_deb_config_data( ifaces, src_data, os.path.dirname(abs_entry), abs_entry) elif option == "source": new_src_path = split[1] if not new_src_path.startswith("/"): new_src_path = os.path.join(src_dir, new_src_path) for expanded_path in glob.glob(new_src_path): with open(expanded_path, "r") as fp: src_data = fp.read().strip() abs_path = os.path.abspath(expanded_path) parse_deb_config_data( ifaces, src_data, os.path.dirname(abs_path), abs_path) elif option == "auto": for iface in split[1:]: if iface not in ifaces: ifaces[iface] = { # Include the source path this interface was found in. "_source_path": src_path } ifaces[iface]['auto'] = True ifaces[iface]['control'] = 'auto' elif option.startswith('allow-'): for iface in split[1:]: if iface not in ifaces: ifaces[iface] = { # Include the source path this interface was found in. "_source_path": src_path } ifaces[iface]['auto'] = False ifaces[iface]['control'] = option.split('allow-')[-1] elif option == "iface": iface, family, method = split[1:4] if iface not in ifaces: ifaces[iface] = { # Include the source path this interface was found in. 
"_source_path": src_path } # man (5) interfaces says we can have multiple iface stanzas # all options are combined ifaces[iface]['family'] = family ifaces[iface]['method'] = method currif = iface elif option == "hwaddress": ifaces[currif]['hwaddress'] = split[1] elif option in NET_CONFIG_OPTIONS: ifaces[currif][option] = split[1] elif option in NET_CONFIG_COMMANDS: if option not in ifaces[currif]: ifaces[currif][option] = [] ifaces[currif][option].append(' '.join(split[1:])) elif option.startswith('dns-'): if 'dns' not in ifaces[currif]: ifaces[currif]['dns'] = {} if option == 'dns-search': ifaces[currif]['dns']['search'] = [] for domain in split[1:]: ifaces[currif]['dns']['search'].append(domain) elif option == 'dns-nameservers': ifaces[currif]['dns']['nameservers'] = [] for server in split[1:]: ifaces[currif]['dns']['nameservers'].append(server) elif option.startswith('bridge_'): if 'bridge' not in ifaces[currif]: ifaces[currif]['bridge'] = {} if option in NET_CONFIG_BRIDGE_OPTIONS: bridge_option = option.replace('bridge_', '', 1) ifaces[currif]['bridge'][bridge_option] = split[1] elif option == "bridge_ports": ifaces[currif]['bridge']['ports'] = [] for iface in split[1:]: ifaces[currif]['bridge']['ports'].append(iface) elif option == "bridge_hw" and split[1].lower() == "mac": ifaces[currif]['bridge']['mac'] = split[2] elif option == "bridge_pathcost": if 'pathcost' not in ifaces[currif]['bridge']: ifaces[currif]['bridge']['pathcost'] = {} ifaces[currif]['bridge']['pathcost'][split[1]] = split[2] elif option == "bridge_portprio": if 'portprio' not in ifaces[currif]['bridge']: ifaces[currif]['bridge']['portprio'] = {} ifaces[currif]['bridge']['portprio'][split[1]] = split[2] elif option.startswith('bond-'): if 'bond' not in ifaces[currif]: ifaces[currif]['bond'] = {} bond_option = option.replace('bond-', '', 1) ifaces[currif]['bond'][bond_option] = split[1] for iface in ifaces.keys(): if 'auto' not in ifaces[iface]: ifaces[iface]['auto'] = False def parse_deb_config(path): """Parses a debian network configuration file.""" ifaces = {} with open(path, "r") as fp: contents = fp.read().strip() abs_path = os.path.abspath(path) parse_deb_config_data( ifaces, contents, os.path.dirname(abs_path), abs_path) return ifaces def parse_net_config_data(net_config): """Parses the config, returns NetworkState dictionary :param net_config: curtin network config dict """ state = None if 'version' in net_config and 'config' in net_config: # For disabled config, we will not return any network state if net_config["config"] != "disabled": ns = network_state.NetworkState(version=net_config.get('version'), config=net_config.get('config')) ns.parse_config() state = ns.network_state return state def parse_net_config(path): """Parses a curtin network configuration file and return network state""" ns = None net_config = config.load_config(path) if 'network' in net_config: ns = parse_net_config_data(net_config.get('network')) return ns def render_persistent_net(network_state): ''' Given state, emit udev rules to map mac to ifname ''' content = "# Autogenerated by curtin\n" interfaces = network_state.get('interfaces') for iface in interfaces.values(): if iface['type'] == 'physical': ifname = iface.get('name', None) mac = iface.get('mac_address', '') # len(macaddr) == 2 * 6 + 5 == 17 if ifname and mac and len(mac) == 17: content += generate_udev_rule(ifname, mac.lower()) return content # TODO: switch valid_map based on mode inet/inet6 def iface_add_subnet(iface, subnet): content = "" valid_map = [ 'address', 'netmask', 
'broadcast', 'metric', 'gateway', 'pointopoint', 'mtu', 'scope', 'dns_search', 'dns_nameservers', ] for key, value in subnet.items(): if value and key in valid_map: if type(value) == list: value = " ".join(value) if '_' in key: key = key.replace('_', '-') content += " {} {}\n".format(key, value) return content # TODO: switch to valid_map for attrs def iface_add_attrs(iface, index): # If the index is non-zero, this is an alias interface. Alias interfaces # represent additional interface addresses, and should not have additional # attributes. (extra attributes here are almost always either incorrect, # or are applied to the parent interface.) So if this is an alias, stop # right here. if index != 0: return "" content = "" ignore_map = [ 'control', 'index', 'inet', 'mode', 'name', 'subnets', 'type', ] # These values require repetitive printing # of the key for each value multiline_keys = [ 'bridge_pathcost', 'bridge_portprio', 'bridge_waitport', ] def add_entry(key, value): if type(value) == list: value = " ".join([str(v) for v in value]) return " {} {}\n".format(key, value) if iface['type'] not in ['bond', 'bridge', 'vlan']: ignore_map.append('mac_address') for key, value in iface.items(): if value and key not in ignore_map: if key in multiline_keys: for v in value: content += add_entry(key, v) else: content += add_entry(key, value) return content def render_route(route, indent=""): """When rendering routes for an iface, in some cases applying a route may result in the route command returning non-zero which produces some confusing output for users manually using ifup/ifdown[1]. To that end, we will optionally include an '|| true' postfix to each route line allowing users to work with ifup/ifdown without using --force option. We may at somepoint not want to emit this additional postfix, and add a 'strict' flag to this function. When called with strict=True, then we will not append the postfix. 1. http://askubuntu.com/questions/168033/ how-to-set-static-routes-in-ubuntu-server """ content = [] up = indent + "post-up route add" down = indent + "pre-down route del" or_true = " || true" mapping = { 'network': '-net', 'netmask': 'netmask', 'gateway': 'gw', 'metric': 'metric', } if route['network'] == '0.0.0.0' and route['netmask'] == '0.0.0.0': default_gw = " default gw %s" % route['gateway'] content.append(up + default_gw + or_true) content.append(down + default_gw + or_true) elif route['network'] == '::' and route['netmask'] == 0: # ipv6! default_gw = " -A inet6 default gw %s" % route['gateway'] content.append(up + default_gw + or_true) content.append(down + default_gw + or_true) else: route_line = "" for k in ['network', 'netmask', 'gateway', 'metric']: if k in route: route_line += " %s %s" % (mapping[k], route[k]) content.append(up + route_line + or_true) content.append(down + route_line + or_true) return "\n".join(content) + "\n" def iface_start_entry(iface): fullname = iface['name'] control = iface['control'] if control == "auto": cverb = "auto" elif control in ("hotplug",): cverb = "allow-" + control else: cverb = "# control-" + control subst = iface.copy() subst.update({'fullname': fullname, 'cverb': cverb}) return ("{cverb} {fullname}\n" "iface {fullname} {inet} {mode}\n").format(**subst) def subnet_is_ipv6(subnet): # 'static6' or 'dhcp6' if subnet['type'].endswith('6'): # This is a request for DHCPv6. 
return True elif subnet['type'] == 'static' and ":" in subnet['address']: return True return False def render_interfaces(network_state): ''' Given state, emit etc/network/interfaces content ''' content = "" interfaces = network_state.get('interfaces') ''' Apply a sort order to ensure that we write out the physical interfaces first; this is critical for bonding ''' order = { 'physical': 0, 'bond': 1, 'bridge': 2, 'vlan': 3, } content += "auto lo\niface lo inet loopback\n" for dnskey, value in network_state.get('dns', {}).items(): if len(value): content += " dns-{} {}\n".format(dnskey, " ".join(value)) for iface in sorted(interfaces.values(), key=lambda k: (order[k['type']], k['name'])): if content[-2:] != "\n\n": content += "\n" subnets = iface.get('subnets', {}) if subnets: for index, subnet in enumerate(subnets): if content[-2:] != "\n\n": content += "\n" iface['index'] = index iface['mode'] = subnet['type'] iface['control'] = subnet.get('control', 'auto') subnet_inet = 'inet' if subnet_is_ipv6(subnet): subnet_inet += '6' iface['inet'] = subnet_inet if subnet['type'].startswith('dhcp'): iface['mode'] = 'dhcp' # do not emit multiple 'auto $IFACE' lines as older (precise) # ifupdown complains if "auto %s\n" % (iface['name']) in content: iface['control'] = 'alias' content += iface_start_entry(iface) content += iface_add_subnet(iface, subnet) content += iface_add_attrs(iface, index) for route in subnet.get('routes', []): content += render_route(route, indent=" ") + '\n' else: # ifenslave docs say to auto the slave devices if 'bond-master' in iface or 'bond-slaves' in iface: content += "auto {name}\n".format(**iface) content += "iface {name} {inet} {mode}\n".format(**iface) content += iface_add_attrs(iface, 0) for route in network_state.get('routes'): content += render_route(route) # global replacements until v2 format content = content.replace('mac_address', 'hwaddress ether') # Play nice with others and source eni config files content += "\nsource /etc/network/interfaces.d/*.cfg\n" return content def netconfig_passthrough_available(target, feature='NETWORK_CONFIG_V2'): """ Determine if curtin can pass v2 network config to in target cloud-init """ LOG.debug('Checking in-target cloud-init for feature: %s', feature) with util.ChrootableTarget(target) as in_chroot: cloudinit = util.which('cloud-init', target=target) if not cloudinit: LOG.warning('Target does not have cloud-init installed') return False available = False try: out, _ = in_chroot.subp([cloudinit, 'features'], capture=True) available = feature in out.splitlines() except util.ProcessExecutionError: # we explicitly don't dump the exception as this triggers # vmtest failures when parsing the installation log file LOG.warning("Failed to probe cloudinit features") return False LOG.debug('cloud-init feature %s available? 
%s', feature, available) return available def render_netconfig_passthrough(target, netconfig=None): """ Extract original network config and pass it through to cloud-init in target """ cc = 'etc/cloud/cloud.cfg.d/50-curtin-networking.cfg' if not isinstance(netconfig, dict): raise ValueError('Network config must be a dictionary') if 'network' not in netconfig: raise ValueError("Network config must contain the key 'network'") content = config.dump_config(netconfig) cc_passthrough = os.path.sep.join((target, cc,)) LOG.info('Writing network config to %s: %s', cc, cc_passthrough) util.write_file(cc_passthrough, content=content) def render_network_state(target, network_state): LOG.debug("rendering eni from netconfig") eni = 'etc/network/interfaces' netrules = 'etc/udev/rules.d/70-persistent-net.rules' cc = 'etc/cloud/cloud.cfg.d/curtin-disable-cloudinit-networking.cfg' eni = os.path.sep.join((target, eni,)) LOG.info('Writing ' + eni) util.write_file(eni, content=render_interfaces(network_state)) netrules = os.path.sep.join((target, netrules,)) LOG.info('Writing ' + netrules) util.write_file(netrules, content=render_persistent_net(network_state)) cc_disable = os.path.sep.join((target, cc,)) LOG.info('Writing ' + cc_disable) util.write_file(cc_disable, content='network: {config: disabled}\n') def get_interface_mac(ifname): """Returns the string value of an interface's MAC Address""" return read_sys_net(ifname, "address", enoent=False) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/net/deps.py000066400000000000000000000056261415350476600163660ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from curtin.distro import DISTROS def network_config_required_packages(network_config, mapping=None): if network_config is None: network_config = {} if not isinstance(network_config, dict): raise ValueError('Invalid network configuration. Must be a dict') if mapping is None: mapping = {} if not isinstance(mapping, dict): raise ValueError('Invalid network mapping. Must be a dict') # allow top-level 'network' key if 'network' in network_config: network_config = network_config.get('network') # v1 has 'config' key and uses type: devtype elements if 'config' in network_config: netconf = network_config['config'] dev_configs = set() if netconf == 'disabled' else set( device['type'] for device in netconf) else: # v2 has no config key dev_configs = set() for cfgtype, cfg in network_config.items(): if cfgtype == 'version': continue dev_configs.add(cfgtype) # subkeys under the type may trigger package adds for entry, entry_cfg in cfg.items(): if entry_cfg.get('renderer'): dev_configs.add(entry_cfg.get('renderer')) else: for sub_entry, sub_cfg in entry_cfg.items(): dev_configs.add(sub_entry) needed_packages = [] for dev_type in dev_configs: if dev_type in mapping: needed_packages.extend(mapping[dev_type]) return needed_packages def detect_required_packages_mapping(osfamily=DISTROS.debian): """Return a dictionary providing a versioned configuration which maps network configuration elements to the packages which are required for functionality. 
""" # keys ending with 's' are v2 values distro_mapping = { DISTROS.debian: { 'bond': ['ifenslave'], 'bonds': ['ifenslave'], 'bridge': ['bridge-utils'], 'bridges': ['bridge-utils'], 'openvswitch': ['openvswitch-switch'], 'networkd': ['systemd'], 'NetworkManager': ['network-manager'], 'vlan': ['vlan'], 'vlans': ['vlan']}, DISTROS.redhat: { 'bond': [], 'bonds': [], 'bridge': [], 'bridges': [], 'openvswitch': ['openvswitch-switch'], 'vlan': [], 'vlans': []}, } if osfamily not in distro_mapping: raise ValueError('No net package mapping for distro: %s' % osfamily) return {1: {'handler': network_config_required_packages, 'mapping': distro_mapping.get(osfamily)}, 2: {'handler': network_config_required_packages, 'mapping': distro_mapping.get(osfamily)}} # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/net/network_state.py000066400000000000000000000340251415350476600203170ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from curtin.log import LOG import curtin.config as curtin_config NETWORK_STATE_VERSION = 1 NETWORK_STATE_REQUIRED_KEYS = { 1: ['version', 'config', 'network_state'], } def from_state_file(state_file): network_state = None state = curtin_config.load_config(state_file) network_state = NetworkState() network_state.load(state) return network_state class NetworkState: def __init__(self, version=NETWORK_STATE_VERSION, config=None): self.version = version self.config = [] if config in [None, 'disabled'] else config self.network_state = { 'interfaces': {}, 'routes': [], 'dns': { 'nameservers': [], 'search': [], } } self.command_handlers = self.get_command_handlers() def get_command_handlers(self): METHOD_PREFIX = 'handle_' methods = filter(lambda x: callable(getattr(self, x)) and x.startswith(METHOD_PREFIX), dir(self)) handlers = {} for m in methods: key = m.replace(METHOD_PREFIX, '') handlers[key] = getattr(self, m) return handlers def dump(self): state = { 'version': self.version, 'config': self.config, 'network_state': self.network_state, } return curtin_config.dump_config(state) def load(self, state): if 'version' not in state: LOG.error('Invalid state, missing version field') raise Exception('Invalid state, missing version field') required_keys = NETWORK_STATE_REQUIRED_KEYS[state['version']] if not self.valid_command(state, required_keys): msg = 'Invalid state, missing keys: %s' % (required_keys) LOG.error(msg) raise Exception(msg) # v1 - direct attr mapping, except version for key in [k for k in required_keys if k not in ['version']]: setattr(self, key, state[key]) self.command_handlers = self.get_command_handlers() def dump_network_state(self): return curtin_config.dump_config(self.network_state) def parse_config(self): # rebuild network state for command in self.config: handler = self.command_handlers.get(command['type']) handler(command) def valid_command(self, command, required_keys): if not required_keys: return False found_keys = [key for key in command.keys() if key in required_keys] return len(found_keys) == len(required_keys) def handle_physical(self, command): ''' command = { 'type': 'physical', 'mac_address': 'c0:d6:9f:2c:e8:80', 'name': 'eth0', 'subnets': [ {'type': 'dhcp4'} ] } ''' required_keys = [ 'name', ] if not self.valid_command(command, required_keys): LOG.warn('Skipping Invalid command: %s', command) LOG.debug(self.dump_network_state()) return interfaces = self.network_state.get('interfaces') iface = interfaces.get(command['name'], {}) for param, val in command.get('params', {}).items(): 
iface.update({param: val}) # convert subnet ipv6 netmask to cidr as needed subnets = command.get('subnets') if subnets: for subnet in subnets: if subnet['type'] == 'static': if 'netmask' in subnet and ':' in subnet['address']: subnet['netmask'] = mask2cidr(subnet['netmask']) for route in subnet.get('routes', []): if 'netmask' in route: route['netmask'] = mask2cidr(route['netmask']) iface.update({ 'name': command.get('name'), 'type': command.get('type'), 'mac_address': command.get('mac_address'), 'inet': 'inet', 'mode': 'manual', 'mtu': command.get('mtu'), 'address': None, 'gateway': None, 'subnets': subnets, }) self.network_state['interfaces'].update({command.get('name'): iface}) self.dump_network_state() def handle_vlan(self, command): ''' auto eth0.222 iface eth0.222 inet static address 10.10.10.1 netmask 255.255.255.0 hwaddress ether BC:76:4E:06:96:B3 vlan-raw-device eth0 ''' required_keys = [ 'name', 'vlan_link', 'vlan_id', ] if not self.valid_command(command, required_keys): print('Skipping Invalid command: {}'.format(command)) print(self.dump_network_state()) return interfaces = self.network_state.get('interfaces') self.handle_physical(command) iface = interfaces.get(command.get('name'), {}) iface['vlan-raw-device'] = command.get('vlan_link') iface['vlan_id'] = command.get('vlan_id') interfaces.update({iface['name']: iface}) def handle_bond(self, command): ''' #/etc/network/interfaces auto eth0 iface eth0 inet manual bond-master bond0 bond-mode 802.3ad auto eth1 iface eth1 inet manual bond-master bond0 bond-mode 802.3ad auto bond0 iface bond0 inet static address 192.168.0.10 gateway 192.168.0.1 netmask 255.255.255.0 bond-slaves none bond-mode 802.3ad bond-miimon 100 bond-downdelay 200 bond-updelay 200 bond-lacp-rate 4 ''' required_keys = [ 'name', 'bond_interfaces', 'params', ] if not self.valid_command(command, required_keys): print('Skipping Invalid command: {}'.format(command)) print(self.dump_network_state()) return self.handle_physical(command) interfaces = self.network_state.get('interfaces') iface = interfaces.get(command.get('name'), {}) for param, val in command.get('params').items(): iface.update({param: val}) iface.update({'bond-slaves': 'none'}) self.network_state['interfaces'].update({iface['name']: iface}) # handle bond slaves for ifname in command.get('bond_interfaces'): if ifname not in interfaces: cmd = { 'name': ifname, 'type': 'bond', } # inject placeholder self.handle_physical(cmd) interfaces = self.network_state.get('interfaces') bond_if = interfaces.get(ifname) bond_if['bond-master'] = command.get('name') # copy in bond config into slave for param, val in command.get('params').items(): bond_if.update({param: val}) self.network_state['interfaces'].update({ifname: bond_if}) def handle_bridge(self, command): ''' auto br0 iface br0 inet static address 10.10.10.1 netmask 255.255.255.0 bridge_ports eth1 eth2 bridge_ageing: 250 bridge_bridgeprio: 22 bridge_fd: 1 bridge_gcint: 2 bridge_hello: 1 bridge_hw: 00:11:22:33:44:55 bridge_maxage: 10 bridge_maxwait: 0 bridge_pathcost: eth1 50 bridge_pathcost: eth2 75 bridge_portprio: eth1 64 bridge_portprio: eth2 192 bridge_stp: 'off' bridge_waitport: 1 eth1 bridge_waitport: 2 eth2 bridge_params = [ "bridge_ports", "bridge_ageing", "bridge_bridgeprio", "bridge_fd", "bridge_gcint", "bridge_hello", "bridge_hw", "bridge_maxage", "bridge_maxwait", "bridge_pathcost", "bridge_portprio", "bridge_stp", "bridge_waitport", ] ''' required_keys = [ 'name', 'bridge_interfaces', 'params', ] if not self.valid_command(command, required_keys): 
LOG.warn('Skipping Invalid command: %s', command) LOG.warn(self.dump_network_state()) return # find one of the bridge port ifaces to get mac_addr # handle bridge_slaves interfaces = self.network_state.get('interfaces') for ifname in command.get('bridge_interfaces'): if ifname in interfaces: continue cmd = { 'name': ifname, } # inject placeholder self.handle_physical(cmd) interfaces = self.network_state.get('interfaces') self.handle_physical(command) iface = interfaces.get(command.get('name'), {}) iface['bridge_ports'] = command['bridge_interfaces'] for param, val in command.get('params', {}).items(): iface.update({param: val}) interfaces.update({iface['name']: iface}) def handle_nameserver(self, command): required_keys = [ 'address', ] if not self.valid_command(command, required_keys): LOG.warn('Skipping Invalid command: %s', command) LOG.warn(self.dump_network_state()) return dns = self.network_state.get('dns') if 'address' in command: addrs = command['address'] if not type(addrs) == list: addrs = [addrs] for addr in addrs: dns['nameservers'].append(addr) if 'search' in command: paths = command['search'] if not isinstance(paths, list): paths = [paths] for path in paths: dns['search'].append(path) def handle_route(self, command): required_keys = [ 'destination', ] if not self.valid_command(command, required_keys): LOG.warn('Skipping Invalid command: %s', command) LOG.warn(self.dump_network_state()) return routes = self.network_state.get('routes') network, cidr = command['destination'].split("/") netmask = cidr2mask(int(cidr)) route = { 'network': network, 'netmask': netmask, 'gateway': command.get('gateway'), 'metric': command.get('metric'), } routes.append(route) def cidr2mask(cidr): mask = [0, 0, 0, 0] for i in list(range(0, cidr)): idx = int(i / 8) mask[idx] = mask[idx] + (1 << (7 - i % 8)) return ".".join([str(x) for x in mask]) def ipv4mask2cidr(mask): if '.' not in mask: return mask return sum([bin(int(x)).count('1') for x in mask.split('.')]) def ipv6mask2cidr(mask): if ':' not in mask: return mask bitCount = [0, 0x8000, 0xc000, 0xe000, 0xf000, 0xf800, 0xfc00, 0xfe00, 0xff00, 0xff80, 0xffc0, 0xffe0, 0xfff0, 0xfff8, 0xfffc, 0xfffe, 0xffff] cidr = 0 for word in mask.split(':'): if not word or int(word, 16) == 0: break cidr += bitCount.index(int(word, 16)) return cidr def mask2cidr(mask): if ':' in mask: return ipv6mask2cidr(mask) elif '.' 
in mask: return ipv4mask2cidr(mask) else: return mask if __name__ == '__main__': import sys import random from curtin import net def load_config(nc): version = nc.get('version') config = nc.get('config') return (version, config) def test_parse(network_config): (version, config) = load_config(network_config) ns1 = NetworkState(version=version, config=config) ns1.parse_config() random.shuffle(config) ns2 = NetworkState(version=version, config=config) ns2.parse_config() print("----NS1-----") print(ns1.dump_network_state()) print() print("----NS2-----") print(ns2.dump_network_state()) print("NS1 == NS2 ?=> {}".format( ns1.network_state == ns2.network_state)) eni = net.render_interfaces(ns2.network_state) print(eni) udev_rules = net.render_persistent_net(ns2.network_state) print(udev_rules) def test_dump_and_load(network_config): print("Loading network_config into NetworkState") (version, config) = load_config(network_config) ns1 = NetworkState(version=version, config=config) ns1.parse_config() print("Dumping state to file") ns1_dump = ns1.dump() ns1_state = "/tmp/ns1.state" with open(ns1_state, "w+") as f: f.write(ns1_dump) print("Loading state from file") ns2 = from_state_file(ns1_state) print("NS1 == NS2 ?=> {}".format( ns1.network_state == ns2.network_state)) def test_output(network_config): (version, config) = load_config(network_config) ns1 = NetworkState(version=version, config=config) ns1.parse_config() random.shuffle(config) ns2 = NetworkState(version=version, config=config) ns2.parse_config() print("NS1 == NS2 ?=> {}".format( ns1.network_state == ns2.network_state)) eni_1 = net.render_interfaces(ns1.network_state) eni_2 = net.render_interfaces(ns2.network_state) print(eni_1) print(eni_2) print("eni_1 == eni_2 ?=> {}".format( eni_1 == eni_2)) y = curtin_config.load_config(sys.argv[1]) network_config = y.get('network') test_parse(network_config) test_dump_and_load(network_config) test_output(network_config) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/pack.py000066400000000000000000000164731415350476600155650ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import errno import os import shutil import tempfile from . import util from . import version CALL_ENTRY_POINT_SH_HEADER = """ #!/bin/sh PY3OR2_MAIN="%(ep_main)s" PY3OR2_MCHECK="%(ep_mcheck)s" PY3OR2_PYTHONS=${PY3OR2_PYTHONS:-"%(python_exe_list)s"} PYTHON=${PY3OR2_PYTHON} PY3OR2_DEBUG=${PY3OR2_DEBUG:-0} """.strip() CALL_ENTRY_POINT_SH_BODY = """ debug() { [ "${PY3OR2_DEBUG}" != "0" ] || return 0 echo "$@" 1>&2 } fail() { echo "$@" 1>&2; exit 1; } # if $0 is is bin/ and dirname($0)/../module exists, then prepend PYTHONPATH mydir=${0%/*} updir=${mydir%/*} if [ "${mydir#${updir}/}" = "bin" -a -d "$updir/${PY3OR2_MCHECK%%.*}" ]; then updir=$(cd "$mydir/.." && pwd) case "$PYTHONPATH" in *:$updir:*|$updir:*|*:$updir) :;; *) export PYTHONPATH="$updir${PYTHONPATH:+:$PYTHONPATH}" debug "adding '$updir' to PYTHONPATH" ;; esac fi if [ ! -n "$PYTHON" ]; then first_exe="" oifs="$IFS"; IFS=":" best=0 best_exe="" [ "${PY3OR2_DEBUG}" = "0" ] && _v="" || _v="-v" for p in $PY3OR2_PYTHONS; do command -v "$p" >/dev/null 2>&1 || { debug "$p: not in path"; continue; } [ -z "$PY3OR2_MCHECK" ] && PYTHON=$p && break out=$($p -m "$PY3OR2_MCHECK" $_v -- "$@" 2>&1) && PYTHON="$p" && { debug "$p is good [$p -m $PY3OR2_MCHECK $_v -- $*]"; break; } ret=$? 
debug "$p [$ret]: $out" # exit code of 1 is unuseable [ $ret -eq 1 ] && continue [ -n "$first_exe" ] || first_exe="$p" # higher non-zero exit values indicate more plausible usability [ $best -lt $ret ] && best_exe="$p" && best=$ret && debug "current best: $best_exe" done IFS="$oifs" [ -z "$best_exe" -a -n "$first_exe" ] && best_exe="$first_exe" [ -n "$PYTHON" ] || PYTHON="$best_exe" [ -n "$PYTHON" ] || fail "no availble python? [PY3OR2_DEBUG=1 for more info]" fi debug "executing: $PYTHON -m \\"$PY3OR2_MAIN\\" $*" exec $PYTHON -m "$PY3OR2_MAIN" "$@" """ def write_exe_wrapper(entrypoint, path=None, interpreter=None, deps_check_entry=None, mode=0o755): if not interpreter: interpreter = "python3:python" subs = { 'ep_main': entrypoint, 'ep_mcheck': deps_check_entry if deps_check_entry else "", 'python_exe_list': interpreter, } content = '\n'.join( (CALL_ENTRY_POINT_SH_HEADER % subs, CALL_ENTRY_POINT_SH_BODY)) if path is not None: with open(path, "w") as fp: fp.write(content) if mode is not None: os.chmod(path, mode) else: return content def pack(fdout=None, command=None, paths=None, copy_files=None, add_files=None): # write to 'fdout' a self extracting file to execute 'command' # if fdout is None, return content that would be written to fdout. # add_files is a list of (archive_path, file_content) tuples. # copy_files is a list of (archive_path, file_path) tuples. if paths is None: paths = util.get_paths() if add_files is None: add_files = [] if copy_files is None: copy_files = [] try: from probert import prober psource = os.path.dirname(prober.__file__) copy_files.append(('probert', psource),) except Exception: pass tmpd = None try: tmpd = tempfile.mkdtemp() exdir = os.path.join(tmpd, 'curtin') os.mkdir(exdir) bindir = os.path.join(exdir, 'bin') os.mkdir(bindir) def not_dot_py(input_d, flist): # include .py files and directories other than __pycache__ return [f for f in flist if not (f.endswith(".py") or (f != "__pycache__" and os.path.isdir(os.path.join(input_d, f))))] shutil.copytree(paths['helpers'], os.path.join(exdir, "helpers")) shutil.copytree(paths['lib'], os.path.join(exdir, "curtin"), ignore=not_dot_py) write_exe_wrapper(entrypoint='curtin.commands.main', path=os.path.join(bindir, 'curtin'), deps_check_entry="curtin.deps.check") packed_version = version.version_string() ver_file = os.path.join(exdir, 'curtin', 'version.py') util.write_file( ver_file, util.load_file(ver_file).replace("@@PACKED_VERSION@@", packed_version)) for archpath, filepath in copy_files: target = os.path.abspath(os.path.join(exdir, archpath)) if not target.startswith(exdir + os.path.sep): raise ValueError("'%s' resulted in path outside archive" % archpath) try: os.mkdir(os.path.dirname(target)) except OSError as e: if e.errno == errno.EEXIST: pass if os.path.isfile(filepath): shutil.copy(filepath, target) else: shutil.copytree(filepath, target) for archpath, content in add_files: target = os.path.abspath(os.path.join(exdir, archpath)) if not target.startswith(exdir + os.path.sep): raise ValueError("'%s' resulted in path outside archive" % archpath) try: os.mkdir(os.path.dirname(target)) except OSError as e: if e.errno == errno.EEXIST: pass with open(target, "w") as fp: fp.write(content) archcmd = os.path.join(paths['helpers'], 'shell-archive') archout = None args = [archcmd] if fdout is not None: archout = os.path.join(tmpd, 'output') args.append("--output=%s" % archout) args.extend(["--bin-path=_pwd_/bin", "--python-path=_pwd_", exdir, "curtin", "--"]) if command is not None: args.extend(command) (out, _err) = 
util.subp(args, capture=True) if fdout is None: if isinstance(out, bytes): out = out.decode() return out else: with open(archout, "r") as fp: while True: buf = fp.read(4096) fdout.write(buf) if len(buf) != 4096: break finally: if tmpd: shutil.rmtree(tmpd) def pack_install(fdout=None, configs=None, paths=None, add_files=None, copy_files=None, args=None, install_deps=True): if configs is None: configs = [] if add_files is None: add_files = [] if args is None: args = [] if install_deps: dep_flags = ["--install-deps"] else: dep_flags = [] command = ["curtin"] + dep_flags + ["install"] my_files = [] for n, config in enumerate(configs): apath = "configs/config-%03d.cfg" % n my_files.append((apath, config),) command.append("--config=%s" % apath) command += args return pack(fdout=fdout, command=command, paths=paths, add_files=add_files + my_files, copy_files=copy_files) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/paths.py000066400000000000000000000017261415350476600157610ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os try: string_types = (basestring,) except NameError: string_types = (str,) def target_path(target, path=None): # return 'path' inside target, accepting target as None if target in (None, ""): target = "/" elif not isinstance(target, string_types): raise ValueError("Unexpected input for target: %s" % target) else: target = os.path.abspath(target) # abspath("//") returns "//" specifically for 2 slashes. if target.startswith("//"): target = target[1:] if not path: return target if not isinstance(path, string_types): raise ValueError("Unexpected input for path: %s" % path) # os.path.join("/etc", "/foo") returns "/foo". Chomp all leading /. while len(path) and path[0] == "/": path = path[1:] return os.path.join(target, path) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/reporter/000077500000000000000000000000001415350476600161245ustar00rootroot00000000000000curtin-21.3/curtin/reporter/__init__.py000066400000000000000000000022301415350476600202320ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """Reporter Abstract Base Class.""" from .registry import DictRegistry from .handlers import available_handlers DEFAULT_CONFIG = { 'logging': {'type': 'log'}, } def update_configuration(config): """Update the instanciated_handler_registry. :param config: The dictionary containing changes to apply. If a key is given with a False-ish value, the registered handler matching that name will be unregistered. """ for handler_name, handler_config in config.items(): if not handler_config: instantiated_handler_registry.unregister_item( handler_name, force=True) continue handler_config = handler_config.copy() cls = available_handlers.registered_items[handler_config.pop('type')] instantiated_handler_registry.unregister_item(handler_name) instance = cls(**handler_config) instantiated_handler_registry.register_item(handler_name, instance) instantiated_handler_registry = DictRegistry() update_configuration(DEFAULT_CONFIG) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/reporter/events.py000066400000000000000000000205101415350476600200000ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. """ cloud-init reporting framework The reporting framework is intended to allow all parts of cloud-init to report events in a structured manner. """ import base64 import os.path import time from . 
import instantiated_handler_registry FINISH_EVENT_TYPE = 'finish' START_EVENT_TYPE = 'start' RESULT_EVENT_TYPE = 'result' DEFAULT_EVENT_ORIGIN = 'curtin' class _nameset(set): def __getattr__(self, name): if name in self: return name raise AttributeError("%s not a valid value" % name) status = _nameset(("SUCCESS", "WARN", "FAIL")) class ReportingEvent(object): """Encapsulation of event formatting.""" def __init__(self, event_type, name, description, origin=DEFAULT_EVENT_ORIGIN, timestamp=None, level=None): self.event_type = event_type self.name = name self.description = description self.origin = origin if timestamp is None: timestamp = time.time() self.timestamp = timestamp if level is None: level = "INFO" self.level = level def as_string(self): """The event represented as a string.""" return '{0}: {1}: {2}'.format( self.event_type, self.name, self.description) def as_dict(self): """The event represented as a dictionary.""" return {'name': self.name, 'description': self.description, 'event_type': self.event_type, 'origin': self.origin, 'timestamp': self.timestamp, 'level': self.level} class FinishReportingEvent(ReportingEvent): def __init__(self, name, description, result=status.SUCCESS, post_files=None, level=None): super(FinishReportingEvent, self).__init__( FINISH_EVENT_TYPE, name, description, level=level) self.result = result if post_files is None: post_files = [] self.post_files = post_files if result not in status: raise ValueError("Invalid result: %s" % result) if self.result == status.WARN: self.level = "WARN" elif self.result == status.FAIL: self.level = "ERROR" def as_string(self): return '{0}: {1}: {2}: {3}'.format( self.event_type, self.name, self.result, self.description) def as_dict(self): """The event represented as json friendly.""" data = super(FinishReportingEvent, self).as_dict() data['result'] = self.result if self.post_files: data['files'] = _collect_file_info(self.post_files) return data def report_event(event): """Report an event to all registered event handlers. This should generally be called via one of the other functions in the reporting module. :param event_type: The type of the event; this should be a constant from the reporting module. """ for _, handler in instantiated_handler_registry.registered_items.items(): handler.publish_event(event) def report_finish_event(event_name, event_description, result=status.SUCCESS, post_files=None, level=None): """Report a "finish" event. See :py:func:`.report_event` for parameter details. """ event = FinishReportingEvent(event_name, event_description, result, post_files=post_files, level=level) return report_event(event) def report_start_event(event_name, event_description, level=None): """Report a "start" event. :param event_name: The name of the event; this should be a topic which events would share (e.g. it will be the same for start and finish events). :param event_description: A human-readable description of the event that has occurred. """ event = ReportingEvent(START_EVENT_TYPE, event_name, event_description, level=level) return report_event(event) class ReportEventStack(object): """Context Manager for using :py:func:`report_event` This enables calling :py:func:`report_start_event` and :py:func:`report_finish_event` through a context manager. :param name: the name of the event :param description: the event's description, passed on to :py:func:`report_start_event` :param message: the description to use for the finish event. defaults to :param:description. 
:param parent: :type parent: :py:class:ReportEventStack or None The parent of this event. The parent is populated with results of all its children. The name used in reporting is / :param reporting_enabled: Indicates if reporting events should be generated. If not provided, defaults to the parent's value, or True if no parent is provided. :param result_on_exception: The result value to set if an exception is caught. default value is FAIL. :param level: The priority level of the enter and exit messages sent. Default value is INFO. """ def __init__(self, name, description, message=None, parent=None, reporting_enabled=None, result_on_exception=status.FAIL, post_files=None, level="INFO"): self.parent = parent self.name = name self.description = description self.message = message self.result_on_exception = result_on_exception self.result = status.SUCCESS self.level = level if post_files is None: post_files = [] self.post_files = post_files # use parents reporting value if not provided if reporting_enabled is None: if parent: reporting_enabled = parent.reporting_enabled else: reporting_enabled = True self.reporting_enabled = reporting_enabled if parent: self.fullname = '/'.join((parent.fullname, name,)) else: self.fullname = self.name self.children = {} def __repr__(self): return ("ReportEventStack(%s, %s, reporting_enabled=%s)" % (self.name, self.description, self.reporting_enabled)) def __enter__(self): self.result = status.SUCCESS if self.reporting_enabled: report_start_event(self.fullname, self.description, level=self.level) if self.parent: self.parent.children[self.name] = (None, None) return self def _childrens_finish_info(self): for cand_result in (status.FAIL, status.WARN): for name, (value, msg) in self.children.items(): if value == cand_result: return (value, self.message) return (self.result, self.message) @property def result(self): return self._result @result.setter def result(self, value): if value not in status: raise ValueError("'%s' not a valid result" % value) self._result = value @property def message(self): if self._message is not None: return self._message return self.description @message.setter def message(self, value): self._message = value def _finish_info(self, exc): # return tuple of description, and value # explicitly handle sys.exit(0) as not an error if exc and not(isinstance(exc, SystemExit) and exc.code == 0): return (self.result_on_exception, self.message) return self._childrens_finish_info() def __exit__(self, exc_type, exc_value, traceback): (result, msg) = self._finish_info(exc_value) if self.parent: self.parent.children[self.name] = (result, msg) if self.reporting_enabled: report_finish_event(self.fullname, msg, result, post_files=self.post_files, level=self.level) def _collect_file_info(files): if not files: return None ret = [] for fname in files: if not os.path.isfile(fname): content = None else: with open(fname, "rb") as fp: content = base64.b64encode(fp.read()).decode() ret.append({'path': fname, 'content': content, 'encoding': 'base64'}) return ret # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/reporter/handlers.py000066400000000000000000000102461415350476600203010ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import abc from .registry import DictRegistry from .. import url_helper from .. import log as logging LOG = logging.getLogger(__name__) class ReportingHandler(object): """Base class for report handlers. 
Implement :meth:`~publish_event` for controlling what the handler does with an event. """ @abc.abstractmethod def publish_event(self, event): """Publish an event to the ``INFO`` log level.""" class LogHandler(ReportingHandler): """Publishes events to the curtin log at the ``DEBUG`` log level.""" def __init__(self, level="DEBUG"): super(LogHandler, self).__init__() if isinstance(level, int): pass else: input_level = level try: level = getattr(logging, level.upper()) except Exception: LOG.warn("invalid level '%s', using WARN", input_level) level = logging.WARN self.level = level def publish_event(self, event): """Publish an event to the ``DEBUG`` log level.""" logger = logging.getLogger( '.'.join(['curtin', 'reporting', event.event_type, event.name])) logger.log(self.level, event.as_string()) class PrintHandler(ReportingHandler): """Print the event as a string.""" def publish_event(self, event): print(event.as_string()) class WebHookHandler(ReportingHandler): def __init__(self, endpoint, consumer_key=None, token_key=None, token_secret=None, consumer_secret=None, timeout=None, retries=None, level="DEBUG"): super(WebHookHandler, self).__init__() self.oauth_helper = url_helper.OauthUrlHelper( consumer_key=consumer_key, token_key=token_key, token_secret=token_secret, consumer_secret=consumer_secret) self.endpoint = endpoint self.timeout = timeout self.retries = retries try: self.level = getattr(logging, level.upper()) except Exception: LOG.warn("invalid level '%s', using WARN", level) self.level = logging.WARN self.headers = {'Content-Type': 'application/json'} def publish_event(self, event): try: return self.oauth_helper.geturl( url=self.endpoint, data=event.as_dict(), headers=self.headers, retries=self.retries) except Exception as e: LOG.warn("failed posting event: %s [%s]" % (event.as_string(), e)) class JournaldHandler(ReportingHandler): def __init__(self, level="DEBUG", identifier="curtin_event"): super(JournaldHandler, self).__init__() if isinstance(level, int): pass else: input_level = level try: level = getattr(logging, level.upper()) except Exception: LOG.warn("invalid level '%s', using WARN", input_level) level = logging.WARN self.level = level self.identifier = identifier def publish_event(self, event): # Ubuntu older than precise will not have python-systemd installed. try: from systemd import journal except ImportError: raise level = str(getattr(journal, "LOG_" + event.level, journal.LOG_DEBUG)) extra = {} if hasattr(event, 'result'): extra['CURTIN_RESULT'] = event.result journal.send( event.as_string(), PRIORITY=level, SYSLOG_IDENTIFIER=self.identifier, CURTIN_EVENT_TYPE=event.event_type, CURTIN_MESSAGE=event.description, CURTIN_NAME=event.name, **extra ) available_handlers = DictRegistry() available_handlers.register_item('log', LogHandler) available_handlers.register_item('print', PrintHandler) available_handlers.register_item('webhook', WebHookHandler) # only add journald handler on systemd systems try: available_handlers.register_item('journald', JournaldHandler) except ImportError: print('journald report handler not supported; no systemd module') # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/reporter/legacy/000077500000000000000000000000001415350476600173705ustar00rootroot00000000000000curtin-21.3/curtin/reporter/legacy/__init__.py000066400000000000000000000027241415350476600215060ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
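# Illustrative sketch of wiring up the reporting pieces defined above
# (curtin/reporter/__init__.py, events.py and handlers.py); the handler name
# and endpoint URL are made-up examples:
#
#     from curtin.reporter import update_configuration
#     from curtin.reporter.events import ReportEventStack
#
#     update_configuration({'maas': {'type': 'webhook',
#                                    'endpoint': 'http://maas.example/status'}})
#     with ReportEventStack('install', 'curtin install'):
#         pass  # start/finish events go to every registered handler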
from curtin.util import ( try_import_module, ) from abc import ( ABCMeta, abstractmethod, ) from curtin.log import LOG class BaseReporter: """Skeleton for a report.""" __metaclass__ = ABCMeta @abstractmethod def report_success(self): """Report installation success.""" @abstractmethod def report_failure(self, failure): """Report installation failure.""" class EmptyReporter(BaseReporter): def report_success(self): """Empty.""" def report_failure(self, failure): """Empty.""" class LoadReporterException(Exception): """Raise exception if desired reporter not loaded.""" pass def load_reporter(config): """Loads and returns reporter instance stored in config file.""" reporter = config.get('reporter') if reporter is None: LOG.info("'reporter' not found in config file.") return EmptyReporter() name, options = reporter.popitem() module = try_import_module('curtin.reporter.legacy.%s' % name) if module is None: LOG.error( "Module for %s reporter could not load." % name) return EmptyReporter() try: return module.load_factory(options) except LoadReporterException: LOG.error( "Failed loading %s reporter with %s" % (name, options)) return EmptyReporter() # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/reporter/legacy/maas.py000066400000000000000000000101461415350476600206650ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. from curtin import url_helper from . import (BaseReporter, LoadReporterException) import mimetypes import os.path import random import string import sys class MAASReporter(BaseReporter): def __init__(self, config): """Load config dictionary and initialize object.""" self.url = config['url'] self.urlhelper = url_helper.OauthUrlHelper( consumer_key=config.get('consumer_key'), token_key=config.get('token_key'), token_secret=config.get('token_secret'), consumer_secret='', skew_data_file="/run/oauth_skew.json") self.files = [] self.retries = config.get('retries', [1, 1, 2, 4, 8, 16, 32]) def report_success(self): """Report installation success.""" status = "OK" message = "Installation succeeded." self.report(status, message, files=self.files) def report_failure(self, message): """Report installation failure.""" status = "FAILED" self.report(status, message, files=self.files) def encode_multipart_data(self, data, files): """Create a MIME multipart payload from L{data} and L{files}. @param data: A mapping of names (ASCII strings) to data (byte string). @param files: A mapping of names (ASCII strings) to file objects ready to be read. @return: A 2-tuple of C{(body, headers)}, where C{body} is a a byte string and C{headers} is a dict of headers to add to the enclosing request in which this payload will travel. 
""" boundary = self._random_string(30) lines = [] for name in data: lines.extend(self._encode_field(name, data[name], boundary)) for name in files: lines.extend(self._encode_file(name, files[name], boundary)) lines.extend(('--%s--' % boundary, '')) body = '\r\n'.join(lines) headers = { 'content-type': 'multipart/form-data; boundary=' + boundary, 'content-length': "%d" % len(body), } return body, headers def report(self, status, message=None, files=None): """Send the report.""" params = {} params['status'] = status if message is not None: params['error'] = message if files is None: files = [] install_files = {} for fpath in files: install_files[os.path.basename(fpath)] = open(fpath, "r") data, headers = self.encode_multipart_data(params, install_files) msg = "" if not isinstance(data, bytes): data = data.encode() try: payload = self.urlhelper.geturl( self.url, data=data, headers=headers, retries=self.retries) if payload != b'OK': raise TypeError("Unexpected result from call: %s" % payload) else: msg = "Success" except url_helper.UrlError as exc: msg = str(exc) except Exception as exc: raise exc sys.stderr.write("%s\n" % msg) def _encode_field(self, field_name, data, boundary): return ( '--' + boundary, 'Content-Disposition: form-data; name="%s"' % field_name, '', str(data), ) def _encode_file(self, name, fileObj, boundary): return ( '--' + boundary, 'Content-Disposition: form-data; name="%s"; filename="%s"' % (name, name), 'Content-Type: %s' % self._get_content_type(name), '', fileObj.read(), ) def _random_string(self, length): return ''.join(random.choice(string.ascii_letters) for ii in range(length + 1)) def _get_content_type(self, filename): return mimetypes.guess_type(filename)[0] or 'application/octet-stream' def load_factory(options): try: return MAASReporter(options) except Exception: raise LoadReporterException # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/reporter/registry.py000066400000000000000000000017711415350476600203540ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import copy class DictRegistry(object): """A simple registry for a mapping of objects.""" def __init__(self): self.reset() def reset(self): self._items = {} def register_item(self, key, item): """Add item to the registry.""" if key in self._items: raise ValueError( 'Item already registered with key {0}'.format(key)) self._items[key] = item def unregister_item(self, key, force=True): """Remove item from the registry.""" if key in self._items: del self._items[key] elif not force: raise KeyError("%s: key not present to unregister" % key) @property def registered_items(self): """All the items that have been registered. This cannot be used to modify the contents of the registry. """ return copy.copy(self._items) # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/storage_config.py000066400000000000000000001371101415350476600176300ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
from collections import namedtuple, OrderedDict import copy import operator import os import re import yaml from curtin.log import LOG from curtin.block import multipath, schemas from curtin import config as curtin_config from curtin import util # map # https://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs # to # curtin/commands/block_meta.py:partition_handler()sgdisk_flags/types GPT_GUID_TO_CURTIN_MAP = { 'C12A7328-F81F-11D2-BA4B-00A0C93EC93B': ('boot', 'EF00'), '21686148-6449-6E6F-744E-656564454649': ('bios_grub', 'EF02'), '933AC7E1-2EB4-4F13-B844-0E14E2AEF915': ('home', '8302'), '0FC63DAF-8483-4772-8E79-3D69D8477DE4': ('linux', '8300'), 'E6D6D379-F507-44C2-A23C-238F2A3DF928': ('lvm', '8e00'), '024DEE41-33E7-11D3-9D69-0008C781F39F': ('mbr', ''), '9E1A2D38-C612-4316-AA26-8B49521E5A8B': ('prep', '4200'), 'A19D880F-05FC-4D3B-A006-743F0F84911E': ('raid', 'fd00'), '0657FD6D-A4AB-43C4-84E5-0933C84B4F4F': ('swap', '8200'), } # MBR types # https://www.win.tue.nl/~aeb/partitions/partition_types-2.html # to # curtin/commands/block_meta.py:partition_handler()sgdisk_flags/types MBR_TYPE_TO_CURTIN_MAP = { '0XF': ('extended', 'f'), '0X5': ('extended', 'f'), '0X83': ('linux', '83'), '0X85': ('extended', 'f'), '0XC5': ('extended', 'f'), } MBR_BOOT_FLAG = '0x80' PTABLE_TYPE_MAP = dict(GPT_GUID_TO_CURTIN_MAP, **MBR_TYPE_TO_CURTIN_MAP) StorageConfig = namedtuple('StorageConfig', ('type', 'schema')) STORAGE_CONFIG_TYPES = { 'bcache': StorageConfig(type='bcache', schema=schemas.BCACHE), 'dasd': StorageConfig(type='dasd', schema=schemas.DASD), 'disk': StorageConfig(type='disk', schema=schemas.DISK), 'dm_crypt': StorageConfig(type='dm_crypt', schema=schemas.DM_CRYPT), 'format': StorageConfig(type='format', schema=schemas.FORMAT), 'lvm_partition': StorageConfig(type='lvm_partition', schema=schemas.LVM_PARTITION), 'lvm_volgroup': StorageConfig(type='lvm_volgroup', schema=schemas.LVM_VOLGROUP), 'mount': StorageConfig(type='mount', schema=schemas.MOUNT), 'partition': StorageConfig(type='partition', schema=schemas.PARTITION), 'raid': StorageConfig(type='raid', schema=schemas.RAID), 'zfs': StorageConfig(type='zfs', schema=schemas.ZFS), 'zpool': StorageConfig(type='zpool', schema=schemas.ZPOOL), } def get_storage_types(): return copy.deepcopy(STORAGE_CONFIG_TYPES) def get_storage_type_schemas(): return [stype.schema for stype in sorted(get_storage_types().values())] STORAGE_CONFIG_SCHEMA = { '$schema': 'http://json-schema.org/draft-04/schema#', 'name': 'ASTORAGECONFIG', 'title': 'curtin storage configuration for an installation.', 'description': ( 'Declaritive syntax for specifying storage device configuration.'), 'required': ['version', 'config'], 'definitions': schemas.definitions, 'properties': { 'version': {'type': 'integer', 'enum': [1]}, 'config': { 'type': 'array', 'items': { 'oneOf': get_storage_type_schemas(), }, 'additionalItems': False, }, }, 'additionalProperties': False, } def load_and_validate(config_path): """Load and validate storage config file.""" config = curtin_config.load_config(config_path) if 'storage' not in config: LOG.info('Skipping %s, missing "storage" key' % config_path) return return validate_config(config.get('storage'), sourcefile=config_path) def validate_config(config, sourcefile=None): """Validate storage config object.""" if not sourcefile: sourcefile = '' try: import jsonschema jsonschema.validate(config, STORAGE_CONFIG_SCHEMA) except ImportError: LOG.error('Cannot validate storage config, missing jsonschema') raise except jsonschema.exceptions.ValidationError as 
e: if isinstance(e.instance, int): msg = 'Unexpected value (%s) for property "%s"' % (e.path[0], e.instance) raise ValueError(msg) if 'type' not in e.instance: msg = "%s in %s" % (e.message, e.instance) raise ValueError(msg) instance_type = e.instance['type'] stype = get_storage_types().get(instance_type) if stype: try: jsonschema.validate(e.instance, stype.schema) except jsonschema.exceptions.ValidationError as f: msg = "%s in %s\n%s" % (f.message, sourcefile, util.json_dumps(e.instance)) raise(ValueError(msg)) else: msg = "Unknown storage type: %s in %s" % (instance_type, e.instance) raise ValueError(msg) # FIXME: move this map to each types schema and extract these # values from each type's schema. def _stype_to_deps(stype): """ Return a set of storage_config type keys for storage_config type. The strings returned in a dep set indicate which fields reference other storage_config elements that require a lookup. config: - type: disk id: sda path: /dev/sda ptable: gpt - type: partition id: sda1 device: sda """ depends_keys = { 'bcache': {'backing_device', 'cache_device'}, 'dasd': set(), 'disk': set(), 'dm_crypt': {'volume'}, 'format': {'volume'}, 'lvm_partition': {'volgroup'}, 'lvm_volgroup': {'devices'}, 'mount': {'device'}, 'partition': {'device'}, 'raid': {'devices', 'spare_devices', 'container'}, 'zfs': {'pool'}, 'zpool': {'vdevs'}, } return depends_keys[stype] def _stype_to_order_key(stype): default_sort = {'id'} order_key = { 'bcache': {'name'}, 'dasd': default_sort, 'disk': default_sort, 'dm_crypt': default_sort, 'format': default_sort, 'lvm_partition': {'name'}, 'lvm_volgroup': {'name'}, 'mount': {'path'}, 'partition': {'number'}, 'raid': default_sort, 'zfs': {'volume'}, 'zpool': default_sort, } if stype not in order_key: raise ValueError('Unknown storage type: %s' % stype) return order_key.get(stype) # Document what each storage type can be composed from. def _validate_dep_type(source_id, dep_key, dep_id, sconfig): '''check if dependency type is in the list of allowed by source''' # FIXME: this should come from curtin.block.schemas.* depends = { 'bcache': {'bcache', 'disk', 'dm_crypt', 'lvm_partition', 'partition', 'raid'}, 'dasd': {}, 'disk': {'dasd'}, 'dm_crypt': {'bcache', 'disk', 'dm_crypt', 'lvm_partition', 'partition', 'raid'}, 'format': {'bcache', 'disk', 'dm_crypt', 'lvm_partition', 'partition', 'raid'}, 'lvm_partition': {'lvm_volgroup'}, 'lvm_volgroup': {'bcache', 'disk', 'dm_crypt', 'partition', 'raid'}, 'mount': {'format'}, 'partition': {'bcache', 'disk', 'raid', 'partition'}, 'raid': {'bcache', 'disk', 'dm_crypt', 'lvm_partition', 'partition', 'raid'}, 'zfs': {'zpool'}, 'zpool': {'disk', 'partition'}, } if source_id not in sconfig: raise ValueError( 'Invalid source_id (%s) not in storage config' % source_id) if dep_id not in sconfig: raise ValueError( 'Invalid dep_id (%s) not in storage config' % dep_id) source_type = sconfig[source_id]['type'] dep_type = sconfig[dep_id]['type'] if source_type not in depends: raise ValueError('Invalid source_type: %s' % source_type) if dep_type not in depends: raise ValueError('Invalid type in depedency: %s' % dep_type) source_deps = depends[source_type] result = dep_type in source_deps LOG.debug('Validate: %s:SourceType:%s -> (DepId:%s DepType:%s) in ' 'SourceDeps:%s ? 
result=%s' % (source_id, source_type, dep_id, dep_type, source_deps, result)) if not result: # Partition(sda1).device -> Partition(sda3) s_str = '%s(id=%s).%s' % (source_type.capitalize(), source_id, dep_key) d_str = '%s(id=%s)' % (dep_type.capitalize(), dep_id) dep_chain = "%s cannot depend upon on %s" % (s_str, d_str) raise ValueError(dep_chain) return result def find_item_dependencies(item_id, config, validate=True): """ Walk a storage config collecting any dependent device ids.""" if not config or not isinstance(config, OrderedDict): raise ValueError('Invalid config. Must be non-empty OrderedDict') item_cfg = config.get(item_id) if not item_cfg: return None def _find_same_dep(dep_key, dep_value, config): return [item_id for item_id, item_cfg in config.items() if item_cfg.get(dep_key) == dep_value] deps = [] item_type = item_cfg.get('type') item_order = _stype_to_order_key(item_type) for dep_key in _stype_to_deps(item_type): if dep_key in item_cfg: dep_value = item_cfg[dep_key] if not isinstance(dep_value, list): dep_value = [dep_value] deps.extend(dep_value) for dep in dep_value: if validate: _validate_dep_type(item_id, dep_key, dep, config) # find other items with the same dep_key, dep_value same_deps = _find_same_dep(dep_key, dep, config) sdeps_cfgs = [cfg for sdep, cfg in config.items() if sdep in same_deps] sorted_deps = ( sorted(sdeps_cfgs, key=operator.itemgetter(*list(item_order)))) for sdep in sorted_deps: deps.append(sdep['id']) # find lower level deps lower_deps = find_item_dependencies(dep, config) if lower_deps: deps.extend(lower_deps) return deps def get_config_tree(item, storage_config): '''Construct an OrderedDict which inserts all of the storage config dependencies required to construct the device specifed by item_id. ''' sconfig = extract_storage_ordered_dict(storage_config) # Create the OrderedDict by inserting the top-most item # and then inserting the next dependency. item_deps = OrderedDict({item: sconfig[item]}) for dep in find_item_dependencies(item, sconfig): item_deps[dep] = sconfig[dep] return item_deps def merge_config_trees_to_list(config_trees): ''' Create a registry to track each tree by device_id, and capture the dependency level and config of each tree. From this registry we can return a list that is sorted from the least to most dependent configuration item. This calculation ensures that composed devices are listed last. 
''' reg = {} # reg[sda] = {level=0, config={}} # reg[sdd] = {level=0, config={}} # reg[sde] = {level=0, config={}} # reg[sdf] = {level=0, config={}} # reg[md0] = {level=3, config={'devices': [sdd, sde, sdf]}} # reg[sda5] = {level=1, config={'device': sda}} # reg[bcache1_raid] = # {level=5, config={'backing': ['md0'], 'cache': ['sda5']}} max_level = 0 for tree in config_trees: top_item_id = list(tree.keys())[0] # first insertion has the most deps level = len(tree.keys()) if level > max_level: max_level = level item_cfg = tree[top_item_id] if top_item_id in reg: LOG.warning('Dropping Duplicate id: %s' % top_item_id) continue reg[top_item_id] = {'level': level, 'config': item_cfg} def sort_level(configs): sreg = {} for cfg in configs: if cfg['type'] in sreg: sreg[cfg['type']].append(cfg) else: sreg[cfg['type']] = [cfg] result = [] for item_type in sorted(sreg.keys()): iorder = _stype_to_order_key(item_type) isorted = sorted(sreg[item_type], key=operator.itemgetter(*list(iorder))) result.extend(isorted) return result # [entry for tag in tags] merged = [] for lvl in range(0, max_level + 1): level_configs = [] for item_id, entry in reg.items(): if entry['level'] == lvl: level_configs.append(entry['config']) sconfigs = sort_level(level_configs) merged.extend(sconfigs) return merged def config_tree_to_list(config_tree): """ ConfigTrees are OrderedDicts which insert dependent storage configs from leaf to root. Reversing this insertion order creates a list of storage_configuration that is in the correct order for use by block_meta. """ return [config_tree[item] for item in reversed(config_tree)] def extract_storage_ordered_dict(config): storage_config = config.get('storage') if not storage_config: raise ValueError("no 'storage' entry in config") scfg = storage_config.get('config') if not scfg: raise ValueError("invalid storage config data") # Since storage config will often have to be searched for a value by its # id, and this can become very inefficient as storage_config grows, a dict # will be generated with the id of each component of the storage_config as # its index and the component of storage_config as its value return OrderedDict((d["id"], d) for d in scfg) class ProbertParser(object): """ Base class for parsing probert storage configuration. This will hold common methods of the various storage type parsers. """ # In subclasses 'probe_data_key' value will select a subset of # Probert probe_data if the value is present. If the probe_data # is incomplete, we raise a ValuError. This selection allows the # subclass to handle parsing one portion of the data and will be # accessed in the subclass via 'class_data' member. probe_data_key = None class_data = None def __init__(self, probe_data): if not probe_data or not isinstance(probe_data, dict): raise ValueError('Invalid probe_data: %s' % probe_data) self.probe_data = probe_data if self.probe_data_key is not None: if self.probe_data_key in probe_data: data = self.probe_data.get(self.probe_data_key) if not data: data = {} self.class_data = data else: LOG.warning('probe_data missing %s data', self.probe_data_key) self.class_data = {} # We keep a reference to the blockdev_data on the superclass # as each specific parser has common needs to reference # this data separate from the BlockdevParser class. 
self.blockdev_data = self.probe_data.get('blockdev', {}) if not self.blockdev_data: LOG.warning('probe_data missing valid "blockdev" data') def parse(self): raise NotImplementedError() def asdict(self, data): raise NotImplementedError() def lookup_devname(self, devname): """ Search 'blockdev' space for "devname". The device name may not be a kernel name, so if not found in the dictionary keys, search under 'DEVLINKS' of each device and return the dictionary for the kernel. """ if devname in self.blockdev_data: return devname for bd_key, bdata in self.blockdev_data.items(): devlinks = bdata.get('DEVLINKS', '').split() if devname in devlinks: return bd_key return None def is_mpath_member(self, blockdev): return multipath.is_mpath_member(blockdev.get('DEVNAME', ''), blockdev) def is_mpath_device(self, blockdev): return multipath.is_mpath_device(blockdev.get('DEVNAME', ''), blockdev) def is_mpath_partition(self, blockdev): return multipath.is_mpath_partition( blockdev.get('DEVNAME', ''), blockdev) def blockdev_to_id(self, blockdev): """ Examine a blockdev dictionary and return a tuple of curtin storage type and name that can be used as a value for storage_config ids (opaque reference to other storage_config elements). """ def is_dmcrypt(blockdev): return bool(blockdev.get('DM_UUID', '').startswith('CRYPT-LUKS')) devtype = blockdev.get('DEVTYPE', 'MISSING') devname = blockdev.get('DEVNAME', 'MISSING') name = os.path.basename(devname) if devname.startswith('/dev/dm-'): # device mapper names are composed deviecs, let's # look at udev data to see what it's really if 'DM_LV_NAME' in blockdev: devtype = 'lvm-partition' name = blockdev['DM_LV_NAME'] elif self.is_mpath_device(blockdev): devtype = 'mpath-disk' name = blockdev['DM_NAME'] elif self.is_mpath_partition(blockdev): devtype = 'mpath-partition' name = '{}-part{}'.format( blockdev['DM_MPATH'], blockdev['DM_PART']) elif is_dmcrypt(blockdev): devtype = 'dmcrypt' name = blockdev['DM_NAME'] elif devname.startswith('/dev/md'): devtype = 'raid' for key, val in {'name': name, 'devtype': devtype}.items(): if not val or val == 'MISSING': msg = 'Failed to extract %s data: %s' % (key, blockdev) raise ValueError(msg) return "%s-%s" % (devtype, name) def blockdev_byid_to_devname(self, link): """ Lookup blockdev by devlink and convert to storage_config id. """ bd_key = self.lookup_devname(link) if bd_key: return self.blockdev_to_id(self.blockdev_data[bd_key]) return None class BcacheParser(ProbertParser): probe_data_key = 'bcache' def __init__(self, probe_data): super(BcacheParser, self).__init__(probe_data) self.backing = self.class_data.get('backing', {}) self.caching = self.class_data.get('caching', {}) def parse(self): """parse probert 'bcache' data format. Collects storage config type: bcache for valid data and returns tuple of lists, configs, errors. """ configs = [] errors = [] for dev_uuid, bdata in self.backing.items(): entry = self.asdict(dev_uuid, bdata) if entry: try: validate_config(entry) except ValueError as e: errors.append(e) continue configs.append(entry) return (configs, errors) def asdict(self, backing_uuid, backing_data): """ process a specific bcache entry and return a curtin storage config dictionary. 
""" def _sb_get(data, attr): return data.get('superblock', {}).get(attr) def _find_cache_device(backing_data, cache_data): cset_uuid = _sb_get(backing_data, 'cset.uuid') msg = ('Invalid "blockdev" value for cache device ' 'uuid=%s' % cset_uuid) if not cset_uuid: LOG.warning(msg) return None for devuuid, config in cache_data.items(): cache = _sb_get(config, 'cset.uuid') if cache == cset_uuid: return config['blockdev'] return None def _find_bcache_devname(uuid, backing_data, blockdev_data): by_uuid = '/dev/bcache/by-uuid/' + uuid label = _sb_get(backing_data, 'dev.label') for devname, data in blockdev_data.items(): if not devname: continue if devname.startswith('/dev/bcache'): # DEVLINKS is a space separated list devlinks = data.get('DEVLINKS', '').split() if by_uuid in devlinks: return devname if label: return label LOG.warning('Failed to find bcache %s ' % (by_uuid)) def _cache_mode(dev_data): # "1 [writeback]" -> "writeback" attr = _sb_get(dev_data, 'dev.data.cache_mode') if attr: return attr.split()[1][1:-1] return None if not self.blockdev_data: return None backing_device = backing_data.get('blockdev') cache_device = _find_cache_device(backing_data, self.caching) cache_mode = _cache_mode(backing_data) bcache_name = os.path.basename(_find_bcache_devname(backing_uuid, backing_data, self.blockdev_data)) bcache_entry = {'type': 'bcache', 'id': 'disk-%s' % bcache_name, 'name': bcache_name} if cache_mode: bcache_entry['cache_mode'] = cache_mode if backing_device: bcache_entry['backing_device'] = self.blockdev_to_id( self.blockdev_data[backing_device]) if cache_device: bcache_entry['cache_device'] = self.blockdev_to_id( self.blockdev_data[cache_device]) return bcache_entry class BlockdevParser(ProbertParser): probe_data_key = 'blockdev' def parse(self): """ parse probert 'blockdev' data format. returns tuple with list of blockdev entries converted to storage config and any validation errors. """ configs = [] errors = [] for devname, data in self.blockdev_data.items(): # skip composed devices here, except partitions and multipath if data.get('DEVPATH', '').startswith('/devices/virtual/block'): if not self.is_mpath_device(data): if not self.is_mpath_partition(data): if data.get('DEVTYPE', '') != "partition": continue # skip disks that are members of multipath devices if self.is_mpath_member(data): continue entry = self.asdict(data) if entry: try: validate_config(entry) except ValueError as e: errors.append(e) continue configs.append(entry) return (configs, errors) def valid_id(self, id_value): # reject wwn=0x0+ if id_value.lower().startswith('0x'): try: return int(id_value, 16) > 0 except ValueError: return True # accept non-empty (removing whitspace) strings return len(''.join(id_value.split())) > 0 def get_unique_ids(self, blockdev): """ extract preferred ID_* keys for www and serial values. In some cases, ID_ values have duplicate values, this method returns the preferred value for a specific blockdev attribute. """ uniq = {} if self.is_mpath_device(blockdev): source_keys = { 'wwn': ['DM_WWN'], 'serial': ['DM_SERIAL'], # only present with focal+ } else: source_keys = { 'wwn': ['ID_WWN_WITH_EXTENSION', 'ID_WWN'], 'serial': ['ID_SERIAL', 'ID_SERIAL_SHORT'], } for skey, id_keys in source_keys.items(): for id_key in id_keys: if id_key in blockdev and skey not in uniq: if self.valid_id(blockdev[id_key]): uniq[skey] = blockdev[id_key] return uniq def partition_parent_devname(self, blockdev): """ Return the devname of a partition's parent. 
md0p1 -> /dev/md0 vda1 -> /dev/vda nvme0n1p3 -> /dev/nvme0n1 """ if blockdev['DEVTYPE'] != "partition": raise ValueError('Invalid blockdev, DEVTYPE is not partition') pdevpath = blockdev.get('DEVPATH') if pdevpath: return '/dev/' + os.path.basename(os.path.dirname(pdevpath)) def asdict(self, blockdev_data): """ process blockdev_data and return a curtin storage config dictionary. This method will return curtin storage types: disk, partition. """ dev_type = blockdev_data['DEVTYPE'] if self.is_mpath_partition(blockdev_data): dev_type = 'partition' # just disks and partitions if blockdev_data['DEVTYPE'] not in ["disk", "partition"]: return None # https://www.kernel.org/doc/Documentation/admin-guide/devices.txt # Ignore Floppy (block MAJOR=2), CDROM (block MAJOR=11) # XXX: Possible expansion on this in the future. if blockdev_data['MAJOR'] in ["11", "2"]: return None devname = blockdev_data.get('DEVNAME') entry = { 'type': dev_type, 'id': self.blockdev_to_id(blockdev_data), } if self.is_mpath_device(blockdev_data): entry['multipath'] = blockdev_data['DM_NAME'] elif self.is_mpath_partition(blockdev_data): entry['multipath'] = blockdev_data['DM_MPATH'] # default disks to gpt if entry['type'] == 'disk': uniq_ids = self.get_unique_ids(blockdev_data) # always include path, block_meta will prefer wwn/serial over path uniq_ids.update({'path': devname}) # set wwn, serial, and path entry.update(uniq_ids) # disk entry for ECKD dasds needs device_id and check for vtoc # ptable dasd_config = self.probe_data.get('dasd', {}).get(devname) if dasd_config is not None: dasd_type = dasd_config.get('type', 'ECKD') if dasd_type == 'ECKD': device_id = ( blockdev_data.get('ID_PATH', '').replace('ccw-', '')) if device_id: entry['device_id'] = device_id if dasd_type in ['ECKD', 'virt']: # if dasd has been formatted, attrs.size is non-zero # formatted ECKD dasds have ptable type of 'vtoc' dasd_size = blockdev_data.get('attrs', {}).get('size', "0") if dasd_size != "0": entry['ptable'] = 'vtoc' if 'ID_PART_TABLE_TYPE' in blockdev_data: ptype = blockdev_data['ID_PART_TABLE_TYPE'] if ptype in schemas._ptables: entry['ptable'] = ptype else: entry['ptable'] = schemas._ptable_unsupported return entry if entry['type'] == 'partition': attrs = blockdev_data['attrs'] if self.is_mpath_partition(blockdev_data): entry['number'] = int(blockdev_data['DM_PART']) parent_devname = self.lookup_devname( '/dev/mapper/' + blockdev_data['DM_MPATH']) if parent_devname is None: raise ValueError( "Cannot find parent mpath device %s for %s" % ( blockdev_data['DM_MPATH'], devname)) else: entry['number'] = int(attrs['partition']) parent_devname = self.partition_parent_devname(blockdev_data) parent_blockdev = self.blockdev_data[parent_devname] if 'ID_PART_TABLE_TYPE' not in parent_blockdev: # Exclude the fake partition that the kernel creates # for an otherwise unformatted FBA dasd. 
dasds = self.probe_data.get('dasd', {}) dasd_config = dasds.get(parent_devname, {}) if dasd_config.get('type', 'ECKD') == 'FBA': return None ptable = parent_blockdev.get('partitiontable') if ptable: part = None for pentry in ptable['partitions']: if self.lookup_devname(pentry['node']) == devname: part = pentry break if part is None: raise RuntimeError( "Couldn't find partition entry in table") else: part = attrs # sectors 512B sector units in both attrs and ptable offset_val = int(part['start']) * 512 if offset_val > 0: entry['offset'] = offset_val # ptable size field is in sectors entry['size'] = int(part['size']) if ptable: entry['size'] *= 512 ptype = blockdev_data.get('ID_PART_ENTRY_TYPE') flag_name, _flag_code = ptable_uuid_to_flag_entry(ptype) if ptable and ptable.get('label') == 'dos': # if the boot flag is set, use this as the flag, logical # flag is not required as we can determine logical via # partition number ptype_flag = blockdev_data.get('ID_PART_ENTRY_FLAGS') if ptype_flag in [MBR_BOOT_FLAG]: flag_name = 'boot' else: # logical partitions are not tagged in data, however # the partition number > 4 (ie, not primary nor extended) if entry['number'] > 4: flag_name = 'logical' if flag_name: entry['flag'] = flag_name # determine parent blockdev and calculate the device id if parent_blockdev: device_id = self.blockdev_to_id(parent_blockdev) if device_id: entry['device'] = device_id return entry class FilesystemParser(ProbertParser): probe_data_key = 'filesystem' def parse(self): """parse probert 'filesystem' data format. returns tuple with list entries converted to storage config type:format and any validation errors. """ configs = [] errors = [] for devname, data in self.class_data.items(): blockdev_data = self.blockdev_data.get(devname) if not blockdev_data: err = ('No probe data found for blockdev ' '%s for fs: %s' % (devname, data)) errors.append(err) continue if self.is_mpath_member(blockdev_data): continue # no floppy, no cdrom if blockdev_data['MAJOR'] in ["11", "2"]: continue volume_id = self.blockdev_to_id(blockdev_data) # don't capture non-filesystem usage # crypto is just a disguised filesystem if data['USAGE'] not in ("filesystem", "crypto"): continue entry = self.asdict(volume_id, data) if not entry: continue # allow types that we cannot create only if preserve == true if data.get('TYPE') not in schemas._fstypes: entry['preserve'] = True try: validate_config(entry) except ValueError as e: errors.append(e) continue configs.append(entry) return (configs, errors) def asdict(self, volume_id, fs_data): """ process fs_data and return a curtin storage config dict. This method will return curtin storage type: format. 
{ 'LABEL': xxxx, 'TYPE': ext2, 'UUID': ....., } """ entry = { 'id': 'format-' + volume_id, 'type': 'format', 'volume': volume_id, 'fstype': fs_data.get('TYPE'), } uuid = fs_data.get('UUID') if uuid: valid_uuid = re.match(schemas._uuid_pattern, uuid) if valid_uuid: entry['uuid'] = uuid return entry class LvmParser(ProbertParser): probe_data_key = 'lvm' def lvm_partition_asdict(self, lv_name, lv_config): return {'type': 'lvm_partition', 'id': 'lvm-partition-%s' % lv_config['name'], 'name': lv_config['name'], 'size': lv_config['size'], 'volgroup': 'lvm-volgroup-%s' % lv_config['volgroup']} def lvm_volgroup_asdict(self, vg_name, vg_config): """ process volgroup probe structure into storage config dict.""" blockdev_ids = [] for pvol in vg_config.get('devices', []): pvol_bdev = self.lookup_devname(pvol) blockdev_data = self.blockdev_data[pvol_bdev] if blockdev_data: blockdev_ids.append(self.blockdev_to_id(blockdev_data)) return {'type': 'lvm_volgroup', 'id': 'lvm-volgroup-%s' % vg_name, 'name': vg_name, 'devices': sorted(blockdev_ids)} def parse(self): """parse probert 'lvm' data format. returns tuple with list entries converted to storage config type:lvm_partition, type:lvm_volgroup and any validation errors. """ # exit early if lvm_data is empty if 'volume_groups' not in self.class_data: return ([], []) configs = [] errors = [] for vg_name, vg_config in self.class_data['volume_groups'].items(): entry = self.lvm_volgroup_asdict(vg_name, vg_config) if entry: try: validate_config(entry) except ValueError as e: errors.append(e) continue configs.append(entry) for lv_name, lv_config in self.class_data['logical_volumes'].items(): entry = self.lvm_partition_asdict(lv_name, lv_config) if entry: try: validate_config(entry) except ValueError as e: errors.append(e) continue configs.append(entry) return (configs, errors) class DasdParser(ProbertParser): probe_data_key = 'dasd' def asdict(self, dasd_config): if dasd_config.get("type", "ECKD") != "ECKD": return None dasd_name = os.path.basename(dasd_config['name']) device_id = dasd_config['device_id'] blocksize = dasd_config['blocksize'] disk_layout = dasd_config['disk_layout'] return {'type': 'dasd', 'id': 'dasd-%s' % dasd_name, 'device_id': device_id, 'blocksize': blocksize, 'mode': 'full' if disk_layout == 'not-formatted' else 'quick', 'disk_layout': disk_layout} def parse(self): """parse probert 'dasd' data format. returns tuple of lists: (configs, errors) contain configs of type:dasd and any errors. """ configs = [] errors = [] for dasd_name, dasd_config in self.class_data.items(): entry = self.asdict(dasd_config) if entry: try: validate_config(entry) except ValueError as e: errors.append(e) continue configs.append(entry) return (configs, errors) class DmcryptParser(ProbertParser): probe_data_key = 'dmcrypt' def asdict(self, crypt_config): crypt_name = crypt_config['name'] backing_dev = crypt_config['blkdevs_used'] if not backing_dev.startswith('/dev/'): backing_dev = os.path.join('/dev', backing_dev) bdev = self.lookup_devname(backing_dev) bdev_data = self.blockdev_data[bdev] bdev_id = self.blockdev_to_id(bdev_data) if bdev_data else None if not bdev_id: raise ValueError('Cannot find blockdev id for %s' % bdev) return {'type': 'dm_crypt', 'id': 'dmcrypt-%s' % crypt_name, 'volume': bdev_id, 'key': '', 'dm_name': crypt_name} def parse(self): """parse probert 'dmcrypt' data format. returns tuple of lists: (configs, errors) contain configs of type:dmcrypt and any errors. 
""" configs = [] errors = [] for crypt_name, crypt_config in self.class_data.items(): entry = self.asdict(crypt_config) if entry: try: validate_config(entry) except ValueError as e: errors.append(e) continue configs.append(entry) return (configs, errors) class RaidParser(ProbertParser): probe_data_key = 'raid' def asdict(self, raid_data): devname = raid_data.get('DEVNAME', 'NODEVNAMEKEY') # FIXME, need to handle rich md_name values, rather than mdX # LP: #1803933 raidname = os.path.basename(devname) action = { 'type': 'raid', 'id': self.blockdev_to_id(raid_data), 'name': raidname, 'raidlevel': raid_data.get('raidlevel'), } if 'MD_METADATA' in raid_data: action['metadata'] = raid_data["MD_METADATA"] if 'container' in raid_data: action['container'] = self.blockdev_byid_to_devname( raid_data['container']) else: for k in 'devices', 'spare_devices': action[k] = sorted([ self.blockdev_byid_to_devname(dev) for dev in raid_data.get(k, [])]) return action def parse(self): """parse probert 'raid' data format. Collects storage config type: raid for valid data and returns tuple of lists, configs, errors. """ configs = [] errors = [] for devname, data in self.class_data.items(): entry = self.asdict(data) if entry: try: validate_config(entry) except ValueError as e: errors.append(e) continue configs.append(entry) return (configs, errors) class MountParser(ProbertParser): probe_data_key = 'mount' def asdict(self, mdata): # the source value may be a devlink alias, look it up source = self.lookup_devname(mdata.get('source')) # we can filter mounts for block devices only # this excludes lots of sys/proc/dev/cgroup # mounts that are found but not related to # storage config # XXX: bind mounts might need some work here if not source: return {} # no floppy, no cdrom if self.blockdev_data[source]['MAJOR'] in ["11", "2"]: return {} source_id = self.blockdev_to_id(self.blockdev_data[source]) return {'type': 'mount', 'id': 'mount-%s' % source_id, 'path': mdata.get('target'), 'device': 'format-%s' % source_id} def parse(self): """parse probert 'mount' data format mount : [{.. 'children': [..]}] Collects storage config type: mount for valid data and returns tuple of lists: (configs, errors) """ def collect_mounts(mdata): mounts = [self.asdict(mdata)] for child in mdata.get('children', []): mounts.extend(collect_mounts(child)) return [mnt for mnt in mounts if mnt] configs = [] errors = [] for mdata in self.class_data: collected_mounts = collect_mounts(mdata) for entry in collected_mounts: try: validate_config(entry) except ValueError as e: errors.append(e) continue configs.append(entry) return (configs, errors) class ZfsParser(ProbertParser): probe_data_key = 'zfs' def get_local_ds_properties(self, dataset): """ extract a dictionary of propertyname: value for any property that has a source of 'local' which means it's been set by configuration. """ if 'properties' not in dataset: return {} set_props = {} for prop_name, setting in dataset['properties'].items(): if setting['source'] == 'local': set_props[prop_name] = setting['value'] return set_props def zpool_asdict(self, name, zpool_data): """ convert zpool data and convert to curtin storage_config dict. 
""" vdevs = [] zdb = zpool_data.get('zdb', {}) for child_name, child_config in zdb.get('vdev_tree', {}).items(): if not child_name.startswith('children'): continue path = child_config.get('path') devname = self.blockdev_byid_to_devname(path) # skip any zpools not backed by blockdevices if not devname: continue vdevs.append(devname) if len(vdevs) == 0: return None id_name = 'zpool-%s-%s' % (os.path.basename(vdevs[0]), name) return {'type': 'zpool', 'id': id_name, 'pool': name, 'vdevs': sorted(vdevs)} def zfs_asdict(self, ds_name, ds_properties, zpool_data): # ignore the base pool name (rpool) vs (rpool/ROOT/zfsroot) if '/' not in ds_name or not zpool_data: return id_name = 'zfs-%s' % ds_name.replace('/', '-') parent_zpool_name = zpool_data.get('pool') return {'type': 'zfs', 'id': id_name, 'pool': zpool_data.get('id'), 'volume': ds_name.split(parent_zpool_name)[-1], 'properties': ds_properties} def parse(self): """ parse probert 'zfs' data format zfs: { 'zpools': { '': { 'datasets': { : { "properties": { "propname": {'source': "default", 'value': ""}, } } } 'zdb': { ... vdev_tree: { childrens[N]: { 'path': '/dev/disk/by-id/foo', } } version: 28, } } } } """ errors = [] zpool_configs = [] zfs_configs = [] for zp_name, zp_data in self.class_data.get('zpools', {}).items(): zpool_entry = self.zpool_asdict(zp_name, zp_data) if zpool_entry: try: validate_config(zpool_entry) except ValueError as e: errors.append(e) zpool_entry = None datasets = zp_data.get('datasets') for ds in datasets.keys(): ds_props = self.get_local_ds_properties(datasets[ds]) zfs_entry = self.zfs_asdict(ds, ds_props, zpool_entry) if zfs_entry: try: validate_config(zfs_entry) except ValueError as e: errors.append(e) continue zfs_configs.append(zfs_entry) if zpool_entry: zpool_configs.append(zpool_entry) return (zpool_configs + zfs_configs, errors) def ptable_uuid_to_flag_entry(guid): name = code = None # prefix non-uuid guid values with 0x if guid and '-' not in guid and not guid.upper().startswith('0X'): guid = '0x' + guid if guid and guid.upper() in PTABLE_TYPE_MAP: name, code = PTABLE_TYPE_MAP[guid.upper()] return (name, code) def extract_storage_config(probe_data, strict=False): """ Examine a probert storage dictionary and extract a curtin storage configuration that would recreate all of the storage devices present in the provided data. 
Returns a storage config dictionary """ convert_map = { 'bcache': BcacheParser, 'blockdev': BlockdevParser, 'dasd': DasdParser, 'dmcrypt': DmcryptParser, 'filesystem': FilesystemParser, 'lvm': LvmParser, 'raid': RaidParser, 'mount': MountParser, 'zfs': ZfsParser, } configs = [] errors = [] LOG.debug('Extracting storage config from probe data') for ptype, pname in convert_map.items(): parser = pname(probe_data) found_cfgs, found_errs = parser.parse() configs.extend(found_cfgs) errors.extend(found_errs) LOG.debug('Sorting extracted configurations') dasd = [cfg for cfg in configs if cfg.get('type') == 'dasd'] disk = [cfg for cfg in configs if cfg.get('type') == 'disk'] part = [cfg for cfg in configs if cfg.get('type') == 'partition'] format = [cfg for cfg in configs if cfg.get('type') == 'format'] lvols = [cfg for cfg in configs if cfg.get('type') == 'lvm_volgroup'] lparts = [cfg for cfg in configs if cfg.get('type') == 'lvm_partition'] raids = [cfg for cfg in configs if cfg.get('type') == 'raid'] dmcrypts = [cfg for cfg in configs if cfg.get('type') == 'dm_crypt'] mounts = [cfg for cfg in configs if cfg.get('type') == 'mount'] bcache = [cfg for cfg in configs if cfg.get('type') == 'bcache'] zpool = [cfg for cfg in configs if cfg.get('type') == 'zpool'] zfs = [cfg for cfg in configs if cfg.get('type') == 'zfs'] ordered = (dasd + disk + part + format + lvols + lparts + raids + dmcrypts + mounts + bcache + zpool + zfs) final_config = {'storage': {'version': 1, 'config': ordered}} try: LOG.info('Validating extracted storage config components') validate_config(final_config['storage']) except ValueError as e: errors.append(e) for e in errors: LOG.exception('Validation error: %s\n' % e) if len(errors) > 0: errmsg = "Extract storage config does not validate." LOG.warning(errmsg) if strict: raise RuntimeError(errmsg) # build and merge probed data into a valid storage config by # generating a config tree for each item in the probed data # and then merging the trees, which resolves dependencies # and produced a dependency ordered storage config LOG.debug("Extracted (unmerged) storage config:\n%s", yaml.dump({'storage': ordered}, indent=4, default_flow_style=False)) LOG.debug("Generating storage config dependencies") ctrees = [] for cfg in ordered: tree = get_config_tree(cfg.get('id'), final_config) ctrees.append(tree) LOG.debug("Merging storage config dependencies") merged_config = { 'version': 1, 'config': merge_config_trees_to_list(ctrees) } LOG.debug("Merged storage config:\n%s", yaml.dump({'storage': merged_config}, indent=4, default_flow_style=False)) return {'storage': merged_config} # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/swap.py000066400000000000000000000140561415350476600156140ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import os import resource from .log import LOG from . import util from curtin import paths from curtin import distro def suggested_swapsize(memsize=None, maxsize=None, fsys=None): # make a suggestion on the size of swap for this system. 
if memsize is None: memsize = util.get_meminfo()['total'] GB = 2 ** 30 sugg_max = 8 * GB if fsys is None and maxsize is None: # set max to 8GB default if no filesystem given maxsize = sugg_max elif fsys: avail = util.get_fs_use_info(fsys)[1] if maxsize is None: # set to 25% of filesystem space maxsize = min(int(avail / 4), sugg_max) elif maxsize > ((avail * .9)): # set to 90% of available disk space maxsize = int(avail * .9) formulas = [ # < 1G: swap = double memory (1 * GB, lambda x: x * 2), # < 2G: swap = 2G (2 * GB, lambda x: 2 * GB), # < 4G: swap = memory (4 * GB, lambda x: x), # < 16G: 4G (16 * GB, lambda x: 4 * GB), # < 64G: 1/2 M up to max (64 * GB, lambda x: x / 2), ] size = None for top, func in formulas: if memsize <= top: size = min(func(memsize), maxsize) if size < (memsize / 2) and size < 4 * GB: return 0 return size return maxsize def get_fstype(target, source): target_source = paths.target_path(target, source) try: out, _ = util.subp(['findmnt', '--noheading', '--target', target_source, '-o', 'FSTYPE'], capture=True) except util.ProcessExecutionError as exc: LOG.warning('Failed to query %s fstype, findmnt returned error: %s', target_source, exc) return None if out: """ $ findmnt --noheading --target /btrfs -o FSTYPE btrfs """ return out.splitlines()[-1] return None def get_target_kernel_version(target): pkg_ver = None distro_info = distro.get_distroinfo(target=target) if not distro_info: raise RuntimeError('Failed to determine target distro') osfamily = distro_info.family if osfamily == distro.DISTROS.debian: try: # check in-target version pkg_ver = distro.get_package_version('linux-image-generic', target=target) except Exception as e: LOG.warn( "failed reading linux-image-generic package version, %s", e) return pkg_ver def can_use_swapfile(target, fstype): if fstype is None: raise RuntimeError( 'Unknown target filesystem type, may not support swapfiles') if fstype in ['btrfs', 'xfs']: # check kernel version pkg_ver = get_target_kernel_version(target) if not pkg_ver: raise RuntimeError('Failed to read target kernel version') if fstype == 'btrfs' and pkg_ver['major'] < 5: raise RuntimeError( 'btrfs requiers kernel version 5.0+ to use swapfiles') elif fstype in ['zfs']: raise RuntimeError('ZFS cannot use swapfiles') def setup_swapfile(target, fstab=None, swapfile=None, size=None, maxsize=None, force=False): if size is None: size = suggested_swapsize(fsys=target, maxsize=maxsize) if size == 0: LOG.debug("Not creating swap: suggested size was 0") return if swapfile is None: swapfile = "/swap.img" if not swapfile.startswith("/"): swapfile = "/" + swapfile # query the directory in which swapfile will reside fstype = get_fstype(target, os.path.dirname(swapfile)) try: can_use_swapfile(target, fstype) except RuntimeError as err: if force: LOG.warning('swapfile may not work: %s', err) else: LOG.debug('Not creating swap: %s', err) return allocate_cmd = 'fallocate -l "${2}M" "$1"' # fallocate uses IOCTLs to allocate space in a filesystem, however it's not # clear (from curtin's POV) that it creates non-sparse files on btrfs or # xfs as required by mkswap so we'll skip fallocate for now and use dd. It # is also plain not supported on ext2 and ext3. 
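# Illustrative expansion, with assumed values (a 2048 MiB swapfile at
# /target/swap.img): the sh -c script below receives fpath as $1 and mbsize
# as $2, so allocate_cmd becomes either
#     fallocate -l "2048M" "/target/swap.img"
# or, for the filesystems listed just below where fallocate is skipped,
#     dd if=/dev/zero "of=/target/swap.img" bs=1M "count=2048"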
if fstype in ['btrfs', 'ext2', 'ext3', 'xfs']: allocate_cmd = 'dd if=/dev/zero "of=$1" bs=1M "count=$2"' mbsize = str(int(size / (2 ** 20))) msg = "creating swap file '%s' of %sMB" % (swapfile, mbsize) fpath = os.path.sep.join([target, swapfile]) try: util.ensure_dir(os.path.dirname(fpath)) with util.LogTimer(LOG.debug, msg): util.subp( ['sh', '-c', ('rm -f "$1" && umask 0066 && truncate -s 0 "$1" && ' '{ chattr +C "$1" || true; } && ') + allocate_cmd + (' && mkswap "$1" || { r=$?; rm -f "$1"; exit $r; }'), 'setup_swap', fpath, mbsize]) except Exception: LOG.warn("failed %s" % msg) raise if fstab is None: return try: line = '\t'.join([swapfile, 'none', 'swap', 'sw', '0', '0']) with open(fstab, "a") as fp: fp.write(line + "\n") except Exception: os.unlink(fpath) raise def is_swap_device(path): """ Determine if specified device is a swap device. Linux swap devices write a magic header value on kernel PAGESIZE - 10. https://github.com/torvalds/linux/blob/master/include/linux/swap.h#L111 """ LOG.debug('Checking if %s is a swap device', path) pagesize = resource.getpagesize() magic_offset = pagesize - 10 size = util.file_size(path) if size < magic_offset: LOG.debug("%s is to small for swap (size=%d < pagesize=%d)", path, size, pagesize) return False magic = util.load_file( path, read_len=10, offset=magic_offset, decode=False) LOG.debug('Found swap magic: %s' % magic) return magic in [b'SWAPSPACE2', b'SWAP-SPACE'] # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/udev.py000066400000000000000000000105751415350476600156070ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import shlex import os from curtin import util from curtin.log import logged_call, LOG try: shlex_quote = shlex.quote except AttributeError: # python2.7 uses pipes.quote import pipes shlex_quote = pipes.quote def compose_udev_equality(key, value): """Return a udev comparison clause, like `ACTION=="add"`.""" assert key == key.upper() return '%s=="%s"' % (key, value) def compose_udev_attr_equality(attribute, value): """Return a udev attribute comparison clause, like `ATTR{type}=="1"`.""" assert attribute == attribute.lower() return 'ATTR{%s}=="%s"' % (attribute, value) def compose_udev_setting(key, value): """Return a udev assignment clause, like `NAME="eth0"`.""" assert key == key.upper() return '%s="%s"' % (key, value) def generate_udev_rule(interface, mac): """Return a udev rule to set the name of network interface with `mac`. The rule ends up as a single line looking something like: SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}="ff:ee:dd:cc:bb:aa", NAME="eth0" """ rule = ', '.join([ compose_udev_equality('SUBSYSTEM', 'net'), compose_udev_equality('ACTION', 'add'), compose_udev_equality('DRIVERS', '?*'), compose_udev_attr_equality('address', mac), compose_udev_setting('NAME', interface), ]) return '%s\n' % rule @logged_call() def udevadm_settle(exists=None, timeout=None): settle_cmd = ["udevadm", "settle"] if exists: # skip the settle if the requested path already exists if os.path.exists(exists): return settle_cmd.extend(['--exit-if-exists=%s' % exists]) if timeout: settle_cmd.extend(['--timeout=%s' % timeout]) util.subp(settle_cmd) def udevadm_trigger(devices): if devices is None: devices = [] util.subp(['udevadm', 'trigger'] + list(devices)) udevadm_settle() def udevadm_info(path=None): """ Return a dictionary populated by properties of the device specified in the `path` variable via querying udev 'property' database. 
:params: path: path to device, either /dev or /sys :returns: dictionary of key=value pairs as exported from the udev database :raises: ValueError path is None, ProcessExecutionError on exec error. """ if not path: raise ValueError('Invalid path: "%s"' % path) info_cmd = ['udevadm', 'info', '--query=property', '--export', path] output, _ = util.subp(info_cmd, capture=True) # strip for trailing empty line info = {} for line in output.splitlines(): if not line: continue # maxsplit=1 gives us key and remaininng part of line is value # py2.7 on Trusty doesn't have keyword, pass as argument key, value = line.split('=', 1) if not value: value = None if value: # preserve spaces in values to match udev database try: parsed = shlex.split(value) except ValueError: # strip the leading/ending single tick from udev output before # escaping the value to prevent their inclusion in the result. trimmed_value = value[1:-1] try: quoted = shlex_quote(trimmed_value) LOG.debug('udevadm_info: quoting shell-escape chars ' 'in %s=%s -> %s', key, value, quoted) parsed = shlex.split(quoted) except ValueError: escaped_value = ( trimmed_value.replace("'", "_").replace('"', "_")) LOG.debug('udevadm_info: replacing shell-escape chars ' 'in %s=%s -> %s', key, value, escaped_value) parsed = shlex.split(escaped_value) if ' ' not in value: info[key] = parsed[0] else: # special case some known entries with spaces, e.g. ID_SERIAL # and DEVLINKS, see tests/unittests/test_udev.py if key == "DEVLINKS": info[key] = shlex.split(parsed[0]) elif key == 'ID_SERIAL': info[key] = parsed[0] else: info[key] = parsed return info # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/url_helper.py000066400000000000000000000367141415350476600170100ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
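# Illustrative sketch, not part of the source tree: typical use of the udev
# helpers defined in curtin/udev.py above. The device path and the keys shown
# are examples only; the actual keys depend on the udev database entry.
#
#     from curtin.udev import udevadm_info, udevadm_settle
#
#     udevadm_settle(exists='/dev/sda1', timeout=30)  # settle, or return early
#     info = udevadm_info('/dev/sda1')
#     info.get('DEVTYPE')    # e.g. 'partition'
#     info.get('DEVLINKS')   # split into a list of /dev/disk/by-* aliases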
from email.utils import parsedate import json import os import socket import sys import time import uuid from functools import partial from curtin import version try: from urllib import request as _u_re # pylint: disable=no-name-in-module from urllib import error as _u_e # pylint: disable=no-name-in-module from urllib.parse import urlparse # pylint: disable=no-name-in-module urllib_request = _u_re urllib_error = _u_e except ImportError: # python2 import urllib2 as urllib_request import urllib2 as urllib_error from urlparse import urlparse # pylint: disable=import-error from .log import LOG error = urllib_error DEFAULT_HEADERS = {'User-Agent': 'Curtin/' + version.version_string()} class _ReRaisedException(Exception): exc = None """this exists only as an exception type that was re-raised by an exception_cb, so code can know to handle it specially""" def __init__(self, exc): self.exc = exc class UrlReader(object): fp = None def __init__(self, url, headers=None, data=None): headers = _get_headers(headers) self.url = url try: req = urllib_request.Request(url=url, data=data, headers=headers) self.fp = urllib_request.urlopen(req) except urllib_error.HTTPError as exc: raise UrlError(exc, code=exc.code, headers=exc.headers, url=url, reason=exc.reason) except Exception as exc: raise UrlError(exc, code=None, headers=None, url=url, reason="unknown") self.info = self.fp.info() self.size = self.info.get('content-length', -1) def read(self, buflen): try: return self.fp.read(buflen) except urllib_error.HTTPError as exc: raise UrlError(exc, code=exc.code, headers=exc.headers, url=self.url, reason=exc.reason) except Exception as exc: raise UrlError(exc, code=None, headers=None, url=self.url, reason="unknown") def close(self): if not self.fp: return try: self.fp.close() finally: self.fp = None def __enter__(self): return self def __exit__(self, etype, value, trace): self.close() def download(url, path, reporthook=None, data=None, retries=0, retry_delay=3): """Download url to path. reporthook is compatible with py3 urllib.request.urlretrieve. urlretrieve does not exist in py2.""" buflen = 8192 attempts = 0 while True: wfp = open(path, "wb") try: buf = None blocknum = 0 fsize = 0 start = time.time() with UrlReader(url) as rfp: if reporthook: reporthook(blocknum, buflen, rfp.size) while True: buf = rfp.read(buflen) if not buf: break blocknum += 1 if reporthook: reporthook(blocknum, buflen, rfp.size) wfp.write(buf) fsize += len(buf) timedelta = time.time() - start LOG.debug("Downloaded %d bytes from %s to %s in %.2fs (%.2fMbps)", fsize, url, path, timedelta, fsize / timedelta / 1024 / 1024) return path, rfp.info except UrlError as e: # retry on internal server errors up to "retries #" if (e.code < 500 or attempts >= retries): raise e LOG.debug("Current download failed with error: %s. Retrying in" " %d seconds.", e.code, retry_delay) attempts += 1 time.sleep(retry_delay) finally: wfp.close() def get_maas_version(endpoint): """ Attempt to return the MAAS version via api calls to the specified endpoint. 
MAAS endpoint url looks like this: http://10.245.168.2/MAAS/metadata/status/node-f0462064-20f6-11e5-990a-d4bed9a84493 We need the MAAS_URL, which is http://10.245.168.2 Returns a maas version dictionary: {'subversion': '16.04.1', 'capabilities': ['networks-management', 'static-ipaddresses', 'ipv6-deployment-ubuntu', 'devices-management', 'storage-deployment-ubuntu', 'network-deployment-ubuntu', 'bridging-interface-ubuntu', 'bridging-automatic-ubuntu'], 'version': '2.1.5+bzr5596-0ubuntu1' } """ # https://docs.ubuntu.com/maas/devel/en/api indicates that # we leave 1.0 in here for maas 1.9 endpoints MAAS_API_SUPPORTED_VERSIONS = ["1.0", "2.0"] try: parsed = urlparse(endpoint) except AttributeError as e: LOG.warn('Failed to parse endpoint URL: %s', e) return None maas_host = "%s://%s" % (parsed.scheme, parsed.netloc) maas_api_version_url = "%s/MAAS/api/version/" % (maas_host) try: result = geturl(maas_api_version_url) except UrlError as e: LOG.warn('Failed to query MAAS API version URL: %s', e) return None api_version = result.decode('utf-8') if api_version not in MAAS_API_SUPPORTED_VERSIONS: LOG.warn('Endpoint "%s" API version "%s" not in MAAS supported' 'versions: "%s"', endpoint, api_version, MAAS_API_SUPPORTED_VERSIONS) return None maas_version_url = "%s/MAAS/api/%s/version/" % (maas_host, api_version) maas_version = None try: result = geturl(maas_version_url) maas_version = json.loads(result.decode('utf-8')) except UrlError as e: LOG.warn('Failed to query MAAS version via URL: %s', e) except (ValueError, TypeError): LOG.warn('Failed to load MAAS version result: %s', result) return maas_version def _get_headers(headers=None): allheaders = DEFAULT_HEADERS.copy() if headers is not None: allheaders.update(headers) return allheaders def _geturl(url, headers=None, headers_cb=None, exception_cb=None, data=None): headers = _get_headers(headers) if headers_cb: headers.update(headers_cb(url)) if data and isinstance(data, dict): data = json.dumps(data).encode() try: req = urllib_request.Request(url=url, data=data, headers=headers) r = urllib_request.urlopen(req).read() # python2, we want to return bytes, which is what python3 does if isinstance(r, str): return r.decode() return r except urllib_error.HTTPError as exc: myexc = UrlError(exc, code=exc.code, headers=exc.headers, url=url, reason=exc.reason) except Exception as exc: myexc = UrlError(exc, code=None, headers=None, url=url, reason="unknown") if exception_cb: try: exception_cb(myexc) except Exception as e: myexc = _ReRaisedException(e) raise myexc def geturl(url, headers=None, headers_cb=None, exception_cb=None, data=None, retries=None, log=LOG.warn): """return the content of the url in binary_type. (py3: bytes, py2: str)""" if retries is None: retries = [] curexc = None for trynum, naptime in enumerate(retries): try: return _geturl(url=url, headers=headers, headers_cb=headers_cb, exception_cb=exception_cb, data=data) except _ReRaisedException: raise curexc.exc except Exception as e: curexc = e if log: msg = ("try %d of request to %s failed. 
sleeping %d: %s" % (naptime, url, naptime, curexc)) log(msg) time.sleep(naptime) try: return _geturl(url=url, headers=headers, headers_cb=headers_cb, exception_cb=exception_cb, data=data) except _ReRaisedException as e: raise e.exc class UrlError(IOError): def __init__(self, cause, code=None, headers=None, url=None, reason=None): IOError.__init__(self, str(cause)) self.cause = cause self.code = code self.headers = headers if self.headers is None: self.headers = {} self.url = url self.reason = reason def __str__(self): if isinstance(self.cause, urllib_error.HTTPError): msg = "http error: %s" % self.cause.code elif isinstance(self.cause, urllib_error.URLError): msg = "url error: %s" % self.cause.reason elif isinstance(self.cause, socket.timeout): msg = "socket timeout: %s" % self.cause else: msg = "Unknown Exception: %s" % self.cause return "[%s] " % self.url + msg class OauthUrlHelper(object): def __init__(self, consumer_key=None, token_key=None, token_secret=None, consumer_secret=None, skew_data_file="/run/oauth_skew.json"): self.consumer_key = consumer_key self.consumer_secret = consumer_secret or "" self.token_key = token_key self.token_secret = token_secret self.skew_data_file = skew_data_file self._do_oauth = True self.skew_change_limit = 5 required = (self.token_key, self.token_secret, self.consumer_key) if not any(required): self._do_oauth = False elif not all(required): raise ValueError("all or none of token_key, token_secret, or " "consumer_key can be set") old = self.read_skew_file() self.skew_data = old or {} def __str__(self): fields = ['consumer_key', 'consumer_secret', 'token_key', 'token_secret'] masked = fields def r(name): if not hasattr(self, name): rval = "_unset" else: val = getattr(self, name) if val is None: rval = "None" elif name in masked: rval = '"%s"' % ("*" * len(val)) else: rval = '"%s"' % val return '%s=%s' % (name, rval) return ("OauthUrlHelper(" + ','.join([r(f) for f in fields]) + ")") def read_skew_file(self): if self.skew_data_file and os.path.isfile(self.skew_data_file): with open(self.skew_data_file, mode="r") as fp: return json.load(fp) return None def update_skew_file(self, host, value): # this is not atomic if not self.skew_data_file: return cur = self.read_skew_file() if cur is None: cur = {} cur[host] = value with open(self.skew_data_file, mode="w") as fp: fp.write(json.dumps(cur)) def exception_cb(self, exception): if not (isinstance(exception, UrlError) and (exception.code == 403 or exception.code == 401)): return if 'date' not in exception.headers: LOG.warn("Missing header 'date' in %s response", exception.code) return date = exception.headers['date'] try: remote_time = time.mktime(parsedate(date)) except Exception as e: LOG.warn("Failed to convert datetime '%s': %s", date, e) return skew = int(remote_time - time.time()) host = urlparse(exception.url).netloc old_skew = self.skew_data.get(host, 0) if abs(old_skew - skew) > self.skew_change_limit: self.update_skew_file(host, skew) LOG.warn("Setting oauth clockskew for %s to %d", host, skew) self.skew_data[host] = skew return def headers_cb(self, url): if not self._do_oauth: return {} host = urlparse(url).netloc clockskew = None if self.skew_data and host in self.skew_data: clockskew = self.skew_data[host] return oauth_headers( url=url, consumer_key=self.consumer_key, token_key=self.token_key, token_secret=self.token_secret, consumer_secret=self.consumer_secret, clockskew=clockskew) def _wrapped(self, wrapped_func, args, kwargs): kwargs['headers_cb'] = partial( self._headers_cb, 
kwargs.get('headers_cb')) kwargs['exception_cb'] = partial( self._exception_cb, kwargs.get('exception_cb')) return wrapped_func(*args, **kwargs) def geturl(self, *args, **kwargs): return self._wrapped(geturl, args, kwargs) def _exception_cb(self, extra_exception_cb, exception): ret = None try: if extra_exception_cb: ret = extra_exception_cb(exception) finally: self.exception_cb(exception) return ret def _headers_cb(self, extra_headers_cb, url): headers = {} if extra_headers_cb: headers = extra_headers_cb(url) headers.update(self.headers_cb(url)) return headers def _oauth_headers_none(url, consumer_key, token_key, token_secret, consumer_secret, clockskew=0): """oauth_headers implementation when no oauth is available""" if not any([token_key, token_secret, consumer_key]): return {} pkg = "'python3-oauthlib'" if sys.version_info[0] == 2: pkg = "'python-oauthlib' or 'python-oauth'" raise ValueError( "Oauth was necessary but no oauth library is available. " "Please install package " + pkg + ".") def _oauth_headers_oauth(url, consumer_key, token_key, token_secret, consumer_secret, clockskew=0): """Build OAuth headers with oauth using given credentials.""" consumer = oauth.OAuthConsumer(consumer_key, consumer_secret) token = oauth.OAuthToken(token_key, token_secret) if clockskew is None: clockskew = 0 timestamp = int(time.time()) + clockskew params = { 'oauth_version': "1.0", 'oauth_nonce': uuid.uuid4().hex, 'oauth_timestamp': timestamp, 'oauth_token': token.key, 'oauth_consumer_key': consumer.key, } req = oauth.OAuthRequest(http_url=url, parameters=params) req.sign_request( oauth.OAuthSignatureMethod_PLAINTEXT(), consumer, token) return(req.to_header()) def _oauth_headers_oauthlib(url, consumer_key, token_key, token_secret, consumer_secret, clockskew=0): """Build OAuth headers with oauthlib using given credentials.""" if clockskew is None: clockskew = 0 timestamp = int(time.time()) + clockskew client = oauth1.Client( consumer_key, client_secret=consumer_secret, resource_owner_key=token_key, resource_owner_secret=token_secret, signature_method=oauth1.SIGNATURE_PLAINTEXT, timestamp=str(timestamp)) uri, signed_headers, body = client.sign(url) return signed_headers oauth_headers = _oauth_headers_none try: # prefer to use oauthlib. (python-oauthlib) import oauthlib.oauth1 as oauth1 oauth_headers = _oauth_headers_oauthlib except ImportError: # no oauthlib was present, try using oauth (python-oauth) try: import oauth.oauth as oauth oauth_headers = _oauth_headers_oauth except ImportError: # we have no oauth libraries available, use oauth_headers_none pass # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/util.py000066400000000000000000001224331415350476600156160ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. import argparse import collections from contextlib import contextmanager import errno import json import os import platform import re import shlex import shutil import socket import subprocess import stat import sys import tempfile import time # avoid the dependency to python3-six as used in cloud-init try: from urlparse import urlparse except ImportError: # python3 # avoid triggering pylint, https://github.com/PyCQA/pylint/issues/769 # pylint:disable=import-error,no-name-in-module from urllib.parse import urlparse try: string_types = (basestring,) except NameError: string_types = (str,) try: numeric_types = (int, float, long) except NameError: # python3 does not have a long type. 
numeric_types = (int, float) try: FileMissingError = FileNotFoundError except NameError: FileMissingError = IOError from . import paths from .log import LOG, log_call binary_type = bytes if sys.version_info[0] < 3: binary_type = str _INSTALLED_HELPERS_PATH = 'usr/lib/curtin/helpers' _INSTALLED_MAIN = 'usr/bin/curtin' _USES_SYSTEMD = None _HAS_UNSHARE_PID = None _DNS_REDIRECT_IP = None # matcher used in template rendering functions BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)') def _subp(args, data=None, rcs=None, env=None, capture=False, combine_capture=False, shell=False, logstring=False, decode="replace", target=None, cwd=None, log_captured=False, unshare_pid=None): if rcs is None: rcs = [0] devnull_fp = None tpath = paths.target_path(target) chroot_args = [] if tpath == "/" else ['chroot', target] sh_args = ['sh', '-c'] if shell else [] if isinstance(args, string_types): args = [args] try: unshare_args = _get_unshare_pid_args(unshare_pid, tpath) except RuntimeError as e: raise RuntimeError("Unable to unshare pid (cmd=%s): %s" % (args, e)) args = unshare_args + chroot_args + sh_args + list(args) if not logstring: LOG.debug( "Running command %s with allowed return codes %s (capture=%s)", args, rcs, 'combine' if combine_capture else capture) else: LOG.debug(("Running hidden command to protect sensitive " "input/output logstring: %s"), logstring) try: stdin = None stdout = None stderr = None if capture: stdout = subprocess.PIPE stderr = subprocess.PIPE if combine_capture: stdout = subprocess.PIPE stderr = subprocess.STDOUT if data is None: devnull_fp = open(os.devnull) stdin = devnull_fp else: stdin = subprocess.PIPE sp = subprocess.Popen(args, stdout=stdout, stderr=stderr, stdin=stdin, env=env, shell=False, cwd=cwd) # communicate in python2 returns str, python3 returns bytes (out, err) = sp.communicate(data) # Just ensure blank instead of none. if capture or combine_capture: if not out: out = b'' if not err: err = b'' if decode: def ldecode(data, m='utf-8'): if not isinstance(data, bytes): return data return data.decode(m, errors=decode) out = ldecode(out) err = ldecode(err) except OSError as e: raise ProcessExecutionError(cmd=args, reason=e) finally: if devnull_fp: devnull_fp.close() if capture and log_captured: LOG.debug("Command returned stdout=%s, stderr=%s", out, err) rc = sp.returncode # pylint: disable=E1101 if rc not in rcs: raise ProcessExecutionError(stdout=out, stderr=err, exit_code=rc, cmd=args) return (out, err) def _has_unshare_pid(): global _HAS_UNSHARE_PID if _HAS_UNSHARE_PID is not None: return _HAS_UNSHARE_PID if not which('unshare'): _HAS_UNSHARE_PID = False return False out, err = subp(["unshare", "--help"], capture=True, decode=False, unshare_pid=False) joined = b'\n'.join([out, err]) _HAS_UNSHARE_PID = b'--fork' in joined and b'--pid' in joined return _HAS_UNSHARE_PID def _get_unshare_pid_args(unshare_pid=None, target=None, euid=None): """Get args for calling unshare for a pid. If unshare_pid is False, return empty list. If unshare_pid is True, check if it is usable. If not, raise exception. if unshare_pid is None, then unshare if * euid is 0 * 'unshare' with '--fork' and '--pid' is available. * target != / """ if unshare_pid is not None and not unshare_pid: # given a false-ish other than None means no. 
return [] if euid is None: euid = os.geteuid() tpath = paths.target_path(target) unshare_pid_in = unshare_pid if unshare_pid is None: unshare_pid = False if tpath != "/" and euid == 0: if _has_unshare_pid(): unshare_pid = True if not unshare_pid: return [] # either unshare was passed in as True, or None and turned to True. if euid != 0: raise RuntimeError( "given unshare_pid=%s but euid (%s) != 0." % (unshare_pid_in, euid)) if not _has_unshare_pid(): raise RuntimeError( "given unshare_pid=%s but no unshare command." % unshare_pid_in) return ['unshare', '--fork', '--pid', '--'] def subp(*args, **kwargs): """Run a subprocess. :param args: command to run in a list. [cmd, arg1, arg2...] :param data: input to the command, made available on its stdin. :param rcs: a list of allowed return codes. If subprocess exits with a value not in this list, a ProcessExecutionError will be raised. By default, data is returned as a string. See 'decode' parameter. :param env: a dictionary for the command's environment. :param capture: boolean indicating if output should be captured. If True, then stderr and stdout will be returned. If False, they will not be redirected. :param combine_capture: boolean indicating if stderr should be redirected to stdout. When True, interleaved stderr and stdout will be returned as the first element of a tuple. if combine_capture is True, then output is captured independent of the value of capture. :param log_captured: boolean indicating if output should be logged on capture. If True, then stderr and stdout will be logged at DEBUG level. If False, they will not be logged. :param shell: boolean indicating if this should be run with a shell. :param logstring: the command will be logged to DEBUG. If it contains info that should not be logged, then logstring will be logged instead. :param decode: if False, no decoding will be done and returned stdout and stderr will be bytes. Other allowed values are 'strict', 'ignore', and 'replace'. These values are passed through to bytes().decode() as the 'errors' parameter. There is no support for decoding to other than utf-8. :param retries: a list of times to sleep in between retries. After each failure subp will sleep for N seconds and then try again. A value of [1, 3] means to run, sleep 1, run, sleep 3, run and then return exit code. :param target: run the command as 'chroot target ' :param unshare_pid: unshare the pid namespace. default value (None) is to unshare pid namespace if possible and target != / :return if not capturing, return is (None, None) if capturing, stdout and stderr are returned. if decode: python2 unicode or python3 string if not decode: python2 string or python3 bytes """ retries = [] if "retries" in kwargs: retries = kwargs.pop("retries") if not retries: # allow retries=None retries = [] if args: cmd = args[0] if 'args' in kwargs: cmd = kwargs['args'] # Retry with waits between the retried command. for num, wait in enumerate(retries): try: return _subp(*args, **kwargs) except ProcessExecutionError as e: LOG.debug("try %s: command %s failed, rc: %s", num, cmd, e.exit_code) time.sleep(wait) # Final try without needing to wait or catch the error. If this # errors here then it will be raised to the caller. 
return _subp(*args, **kwargs) def wait_for_removal(path, retries=[1, 3, 5, 7]): if not path: raise ValueError('wait_for_removal: missing path parameter') # Retry with waits between checking for existence LOG.debug('waiting for %s to be removed', path) for num, wait in enumerate(retries): if not os.path.exists(path): LOG.debug('%s has been removed', path) return LOG.debug('sleeping %s', wait) time.sleep(wait) # final check if not os.path.exists(path): LOG.debug('%s has been removed', path) return raise OSError('Timeout exceeded for removal of %s', path) def load_command_environment(env=os.environ, strict=False): mapping = {'scratch': 'WORKING_DIR', 'fstab': 'OUTPUT_FSTAB', 'interfaces': 'OUTPUT_INTERFACES', 'config': 'CONFIG', 'target': 'TARGET_MOUNT_POINT', 'network_state': 'OUTPUT_NETWORK_STATE', 'network_config': 'OUTPUT_NETWORK_CONFIG', 'report_stack_prefix': 'CURTIN_REPORTSTACK'} if strict: missing = [k for k in mapping.values() if k not in env] if len(missing): raise KeyError("missing environment vars: %s" % missing) return {k: env.get(v) for k, v in mapping.items()} def is_kmod_loaded(module): """Test if kernel module 'module' is current loaded by checking sysfs""" if not module: raise ValueError('is_kmod_loaded: invalid module: "%s"', module) return os.path.isdir('/sys/module/%s' % module) def load_kernel_module(module, check_loaded=True): """Install kernel module via modprobe. Optionally check if it's already loaded . """ if not module: raise ValueError('load_kernel_module: invalid module: "%s"', module) if check_loaded: if is_kmod_loaded(module): LOG.debug('Skipping kernel module load, %s already loaded', module) return LOG.debug('Loading kernel module %s via modprobe', module) subp(['modprobe', '--use-blacklist', module]) class BadUsage(Exception): pass class ProcessExecutionError(IOError): MESSAGE_TMPL = ('%(description)s\n' 'Command: %(cmd)s\n' 'Exit code: %(exit_code)s\n' 'Reason: %(reason)s\n' 'Stdout: %(stdout)s\n' 'Stderr: %(stderr)s') stdout_indent_level = 8 def __init__(self, stdout=None, stderr=None, exit_code=None, cmd=None, description=None, reason=None): if not cmd: self.cmd = '-' else: self.cmd = cmd if not description: self.description = 'Unexpected error while running command.' 
else: self.description = description if not isinstance(exit_code, int): self.exit_code = '-' else: self.exit_code = exit_code if not stderr: self.stderr = "''" else: self.stderr = self._indent_text(stderr) if not stdout: self.stdout = "''" else: self.stdout = self._indent_text(stdout) if reason: self.reason = reason else: self.reason = '-' message = self.MESSAGE_TMPL % { 'description': self.description, 'cmd': self.cmd, 'exit_code': self.exit_code, 'stdout': self.stdout, 'stderr': self.stderr, 'reason': self.reason, } IOError.__init__(self, message) def _indent_text(self, text): if type(text) == bytes: text = text.decode() return text.replace('\n', '\n' + ' ' * self.stdout_indent_level) class LogTimer(object): def __init__(self, logfunc, msg): self.logfunc = logfunc self.msg = msg def __enter__(self): self.start = time.time() return self def __exit__(self, etype, value, trace): self.logfunc("%s took %0.3f seconds" % (self.msg, time.time() - self.start)) def is_mounted(target, src=None, opts=None): # return whether or not src is mounted on target mounts = "" with open("/proc/mounts", "r") as fp: mounts = fp.read() for line in mounts.splitlines(): if line.split()[1] == os.path.abspath(target): return True return False def list_device_mounts(device): # return mount entry if device is in /proc/mounts mounts = "" with open("/proc/mounts", "r") as fp: mounts = fp.read() dev_mounts = [] for line in mounts.splitlines(): if line.split()[0] == device: dev_mounts.append(line) return dev_mounts def fuser_mount(path): """ Execute fuser to determine open file handles from mountpoint path Use verbose mode and then combine stdout, stderr from fuser into a dictionary: {pid: "fuser-details"} path may also be a kernel devpath (e.g. /dev/sda) """ fuser_output = {} try: stdout, stderr = subp(['fuser', '--verbose', '--mount', path], capture=True) except ProcessExecutionError as e: LOG.debug('fuser returned non-zero: %s', e.stderr) return None pidlist = stdout.split() """ fuser writes a header in verbose mode, we'll ignore that but the order if the input is note that is not present in stderr, it's only in stdout. Also only the entry with pid=kernel entry will contain the mountpoint # Combined stdout and stderr look like: # USER PID ACCESS COMMAND # /home: root kernel mount / # root 1 .rce. systemd # # This would return # { 'kernel': ['/home', 'root', 'mount', '/'], '1': ['root', '1', '.rce.', 'systemd'], } """ # Note that fuser only writes PIDS to stdout. Each PID value is # 'kernel' or an integer and indicates a process which has an open # file handle against the path specified path. All other output # is sent to stderr. This code below will merge the two as needed. for (pid, status) in zip(pidlist, stderr.splitlines()[1:]): fuser_output[pid] = status.split() return fuser_output @contextmanager def chdir(dirname): curdir = os.getcwd() try: os.chdir(dirname) yield dirname finally: os.chdir(curdir) def do_mount(src, target, opts=None): # mount src at target with opts and return True # if already mounted, return False if opts is None: opts = [] if isinstance(opts, str): opts = [opts] if is_mounted(target, src, opts): return False ensure_dir(target) cmd = ['mount'] + opts + [src, target] subp(cmd) return True def do_umount(mountpoint, recursive=False, private=False): mp = os.path.abspath(mountpoint) # unmount mountpoint. # # if recursive, unmount all mounts under it. if private (which # implies recursive), mark all mountpoints private before # unmounting. 
# # To explain the 'private' parameter, consider the following sequence: # # mkdir a a/b c # mount --bind a c # mount -t sysfs sysfs a/b # # "umount c" will fail with "mountpoint is busy" because the mount # of a/b "propagates" into the subtree at c, i.e. creates a mount # at "c/b". But if you run "umount -R c" the unmount of c/b will # propagate to back to a and unmount a/b (and as in pratice "a/b" # can be something like driver specific mounts in /sys, this would # be Bad(tm)). So what we do here is iterate over the mountpoints # under `mountpoint` and mark them as "private" doing any # unmounting, which means the unmounts do not propagate to any # mount tree they were cloned from. # # So why not do this always? Well! Several systemd services (like # udevd) run in private mount namespaces, set up so that mount # operations on the default namespace propagate into the service's # namespace (but not the other way). This means if you do this: # # mount /dev/sda2 /tmp/my-mount # mount --make-private /tmp/my-mount # umount /tmp/my-mount # # then the mount operation propagates into udevd's mount namespace # but the unmount operation does not and so /dev/sda2 remains # mounted, which causes problems down the road. # # It would be nice to not have to have the caller care about all # this detail. In particular imagine the target system is set up # at /target and has host directories bind-mounted into it, so the # mount tree looks something like this: # # /dev/vda2 is mounted at /target # /dev/vda1 is mounted at /target/boot # /sys is bind-mounted at /target/sys # # And for whatever reason, a mount has appeared at /sys/foo since # this was setup, so there is now an additional mount at # /target/sys/foo. # # What I would like is to be able to run something like "curtin # unmount /target" and have curtin figure out that the mountpoint # at /target/sys/foo should be made private before unmounting and # the others should not. But I don't know how to do that. # # See "Shared subtree operations" in mount(8) for more on all # this. # # You might also think we could replace all this with a call to # "mount --make-rprivate" followed by a call to "umount # --recursive" but that would fail in the case where `mountpoint` # is not actually a mount point, and not doing that is actually # relied on by other parts of curtin. # # Related bug reports: # https://bugs.launchpad.net/maas/+bug/1928839 # https://bugs.launchpad.net/subiquity/+bug/1934775 # # return boolean indicating if mountpoint was previously mounted. ret = False mountpoints = [ line.split()[1] for line in load_file("/proc/mounts", decode=True).splitlines()] if private: recursive = True for curmp in mountpoints: if curmp == mp or curmp.startswith(mp + os.path.sep): subp(['mount', '--make-private', curmp]) for curmp in reversed(mountpoints): if curmp == mp or (recursive and curmp.startswith(mp + os.path.sep)): subp(['umount', curmp]) if curmp == mp: ret = True return ret def ensure_dir(path, mode=None): if path == "": path = "." try: os.makedirs(path) except OSError as e: if e.errno != errno.EEXIST: raise if mode is not None: os.chmod(path, mode) def write_file(filename, content, mode=0o644, omode="w"): """ write 'content' to file at 'filename' using python open mode 'omode'. if mode is not set, then chmod file to mode. 
mode is 644 by default """ ensure_dir(os.path.dirname(filename)) with open(filename, omode) as fp: fp.write(content) if mode: os.chmod(filename, mode) def load_file(path, read_len=None, offset=0, decode=True): with open(path, "rb") as fp: if offset: fp.seek(offset) contents = fp.read(read_len) if read_len else fp.read() if decode: return decode_binary(contents) else: return contents def decode_binary(blob, encoding='utf-8', errors='replace'): # Converts a binary type into a text type using given encoding. return blob.decode(encoding, errors=errors) def load_json(text, root_types=(dict,)): decoded = json.loads(text) if not isinstance(decoded, tuple(root_types)): expected_types = ", ".join([str(t) for t in root_types]) raise TypeError("(%s) root types expected, got %s instead" % (expected_types, type(decoded))) return decoded def file_size(path): """get the size of a file""" with open(path, 'rb') as fp: fp.seek(0, 2) return fp.tell() def del_file(path): try: os.unlink(path) LOG.debug("del_file: removed %s", path) except OSError as e: LOG.exception("del_file: %s did not exist.", path) if e.errno != errno.ENOENT: raise e def disable_daemons_in_root(target): contents = "\n".join( ['#!/bin/sh', '# see invoke-rc.d for exit codes. 101 is "do not run"', 'while true; do', ' case "$1" in', ' -*) shift;;', ' makedev|x11-common) exit 0;;', ' *) exit 101;;', ' esac', 'done', '']) fpath = paths.target_path(target, "/usr/sbin/policy-rc.d") if os.path.isfile(fpath): return False write_file(fpath, mode=0o755, content=contents) return True def undisable_daemons_in_root(target): try: os.unlink(paths.target_path(target, "/usr/sbin/policy-rc.d")) except OSError as e: if e.errno != errno.ENOENT: raise return False return True class ChrootableTarget(object): def __init__(self, target, allow_daemons=False, sys_resolvconf=True, mounts=None): if target is None: target = "/" self.target = paths.target_path(target) if mounts is not None: self.mounts = mounts else: self.mounts = ["/dev", "/proc", "/run", "/sys"] if is_uefi_bootable(): self.mounts.append('/sys/firmware/efi/efivars') self.umounts = [] self.disabled_daemons = False self.allow_daemons = allow_daemons self.sys_resolvconf = sys_resolvconf self.rconf_d = None self.rc_tmp = None def __enter__(self): for p in self.mounts: tpath = paths.target_path(self.target, p) if do_mount(p, tpath, opts='--bind'): self.umounts.append(tpath) if not self.allow_daemons: self.disabled_daemons = disable_daemons_in_root(self.target) rconf = paths.target_path(self.target, "/etc/resolv.conf") target_etc = os.path.dirname(rconf) if self.target != "/" and os.path.isdir(target_etc): # never muck with resolv.conf on / rconf = os.path.join(target_etc, "resolv.conf") rtd = None try: rtd = tempfile.mkdtemp(dir=target_etc) if os.path.lexists(rconf): self.rc_tmp = os.path.join(rtd, "resolv.conf") os.rename(rconf, self.rc_tmp) self.rconf_d = rtd shutil.copy("/etc/resolv.conf", rconf) except Exception: if rtd: # if we renamed, but failed later we need to restore if self.rc_tmp and os.path.lexists(self.rc_tmp): os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf) shutil.rmtree(rtd) self.rconf_d = None self.rc_tmp = None raise return self def __exit__(self, etype, value, trace): if self.disabled_daemons: undisable_daemons_in_root(self.target) # if /dev is to be unmounted, udevadm settle (LP: #1462139) if paths.target_path(self.target, "/dev") in self.umounts: log_call(subp, ['udevadm', 'settle']) for p in reversed(self.umounts): do_umount(p, private=True) rconf = 
paths.target_path(self.target, "/etc/resolv.conf") if self.sys_resolvconf and self.rconf_d: if self.rc_tmp and os.path.lexists(self.rc_tmp): os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf) shutil.rmtree(self.rconf_d) def subp(self, *args, **kwargs): kwargs['target'] = self.target return subp(*args, **kwargs) def path(self, path): return paths.target_path(self.target, path) def is_exe(fpath): # Return path of program for execution if found in path return os.path.isfile(fpath) and os.access(fpath, os.X_OK) def which(program, search=None, target=None): target = paths.target_path(target) if os.path.sep in program: # if program had a '/' in it, then do not search PATH # 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls # so effectively we set cwd to / (or target) if is_exe(paths.target_path(target, program)): return program if search is None: candpaths = [p.strip('"') for p in os.environ.get("PATH", "").split(os.pathsep)] if target == "/": search = candpaths else: search = [p for p in candpaths if p.startswith("/")] # normalize path input search = [os.path.abspath(p) for p in search] for path in search: ppath = os.path.sep.join((path, program)) if is_exe(paths.target_path(target, ppath)): return ppath return None def _installed_file_path(path, check_file=None): # check the install root for the file 'path'. # if 'check_file', then path is a directory that contains file. # return absolute path or None. inst_pre = "/" if os.environ.get('SNAP'): inst_pre = os.path.abspath(os.environ['SNAP']) inst_path = os.path.join(inst_pre, path) if check_file: check_path = os.path.sep.join((inst_path, check_file)) else: check_path = inst_path if os.path.isfile(check_path): return os.path.abspath(inst_path) return None def get_paths(curtin_exe=None, lib=None, helpers=None): # return a dictionary with paths for 'curtin_exe', 'helpers' and 'lib' # that represent where 'curtin' executable lives, where the 'curtin' module # directory is (containing __init__.py) and where the 'helpers' directory. mydir = os.path.realpath(os.path.dirname(__file__)) tld = os.path.realpath(mydir + os.path.sep + "..") if curtin_exe is None: if os.path.isfile(os.path.join(tld, "bin", "curtin")): curtin_exe = os.path.join(tld, "bin", "curtin") if (curtin_exe is None and (os.path.basename(sys.argv[0]).startswith("curtin") and os.path.isfile(sys.argv[0]))): curtin_exe = os.path.realpath(sys.argv[0]) if curtin_exe is None: found = which('curtin') if found: curtin_exe = found if curtin_exe is None: curtin_exe = _installed_file_path(_INSTALLED_MAIN) # "common" is a file in helpers cfile = "common" if (helpers is None and os.path.isfile(os.path.join(tld, "helpers", cfile))): helpers = os.path.join(tld, "helpers") if helpers is None: helpers = _installed_file_path(_INSTALLED_HELPERS_PATH, cfile) return({'curtin_exe': curtin_exe, 'lib': mydir, 'helpers': helpers}) def find_newer(src, files): mtime = os.stat(src).st_mtime return [f for f in files if os.path.exists(f) and os.stat(f).st_mtime > mtime] def set_unexecutable(fname, strict=False): """set fname so it is not executable. if strict, raise an exception if the file does not exist. return the current mode, or None if no change is needed. 
""" if not os.path.exists(fname): if strict: raise ValueError('%s: file does not exist' % fname) return None cur = stat.S_IMODE(os.lstat(fname).st_mode) target = cur & (~stat.S_IEXEC & ~stat.S_IXGRP & ~stat.S_IXOTH) if cur == target: return None os.chmod(fname, target) return cur def is_uefi_bootable(): return os.path.exists('/sys/firmware/efi') is True def parse_efibootmgr(content): efikey_to_dict_key = { 'BootCurrent': 'current', 'Timeout': 'timeout', 'BootOrder': 'order', } output = {} for line in content.splitlines(): split = line.split(':') if len(split) == 2: key = split[0].strip() output_key = efikey_to_dict_key.get(key, None) if output_key: output[output_key] = split[1].strip() if output_key == 'order': output[output_key] = output[output_key].split(',') output['entries'] = { entry: { 'name': name.strip(), 'path': path.strip(), } for entry, name, path in re.findall( r"^Boot(?P[0-9a-fA-F]{4})\*?\s(?P.+)\t" r"(?P.*)$", content, re.MULTILINE) } if 'order' in output: new_order = [item for item in output['order'] if item in output['entries']] output['order'] = new_order return output def get_efibootmgr(target=None): """Return mapping of EFI information. Calls `efibootmgr` inside the `target`. Example output: { 'current': '0000', 'timeout': '1 seconds', 'order': ['0000', '0001'], 'entries': { '0000': { 'name': 'ubuntu', 'path': ( 'HD(1,GPT,0,0x8,0x1)/File(\\EFI\\ubuntu\\shimx64.efi)'), }, '0001': { 'name': 'UEFI:Network Device', 'path': 'BBS(131,,0x0)', } } } """ with ChrootableTarget(target=target) as in_chroot: stdout, _ = in_chroot.subp(['efibootmgr', '-v'], capture=True) output = parse_efibootmgr(stdout) return output def run_hook_if_exists(target, hook): """ Look for "hook" in "target" and run it """ target_hook = paths.target_path(target, '/curtin/' + hook) if os.path.isfile(target_hook): LOG.debug("running %s" % target_hook) subp([target_hook]) return True return False def sanitize_source(source): """ Check the install source for type information If no type information is present or it is an invalid type, we default to the standard tgz format """ if type(source) is dict: # already sanitized? return source supported = ['tgz', 'dd-tgz', 'tbz', 'dd-tbz', 'txz', 'dd-txz', 'dd-tar', 'dd-bz2', 'dd-gz', 'dd-xz', 'dd-raw', 'fsimage', 'fsimage-layered'] deftype = 'tgz' for i in supported: prefix = i + ":" if source.startswith(prefix): return {'type': i, 'uri': source[len(prefix):]} # translate squashfs: to fsimage type. if source.startswith("squashfs://"): return {'type': 'fsimage', 'uri': source[len("squashfs://"):]} elif source.startswith("squashfs:"): LOG.warning("The squashfs: prefix is deprecated and" "will be removed in a future release." 
"Please use squashfs:// instead.") return {'type': 'fsimage', 'uri': source[len("squashfs:"):]} if source.endswith("squashfs") or source.endswith("squash"): return {'type': 'fsimage', 'uri': source} LOG.debug("unknown type for url '%s', assuming type '%s'", source, deftype) # default to tgz for unknown types return {'type': deftype, 'uri': source} def get_dd_images(sources): """ return all disk images in sources list """ src = [] if type(sources) is not dict: return src for i in sources: if type(sources[i]) is not dict: continue if sources[i]['type'].startswith('dd-'): src.append(sources[i]) return src def get_meminfo(meminfo="/proc/meminfo", raw=False): mpliers = {'kB': 2**10, 'mB': 2 ** 20, 'B': 1, 'gB': 2 ** 30} kmap = {'MemTotal:': 'total', 'MemFree:': 'free', 'MemAvailable:': 'available'} ret = {} with open(meminfo, "r") as fp: for line in fp: try: key, value, unit = line.split() except ValueError: key, value = line.split() unit = 'B' if raw: ret[key] = int(value) * mpliers[unit] elif key in kmap: ret[kmap[key]] = int(value) * mpliers[unit] return ret def get_fs_use_info(path): # return some filesystem usage info as tuple of (size_in_bytes, free_bytes) statvfs = os.statvfs(path) return (statvfs.f_frsize * statvfs.f_blocks, statvfs.f_frsize * statvfs.f_bfree) def human2bytes(size): # convert human 'size' to integer size_in = size if isinstance(size, int): return size elif isinstance(size, float): if int(size) != size: raise ValueError("'%s': resulted in non-integer (%s)" % (size_in, int(size))) return size elif not isinstance(size, str): raise TypeError("cannot convert type %s ('%s')." % (type(size), size)) if size.endswith("B"): size = size[:-1] mpliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40} num = size mplier = 'B' for m in mpliers: if size.endswith(m): mplier = m num = size[0:-len(m)] try: num = float(num) except ValueError: raise ValueError("'%s' is not valid input." % size_in) if num < 0: raise ValueError("'%s': cannot be negative" % size_in) val = num * mpliers[mplier] if int(val) != val: raise ValueError("'%s': resulted in non-integer (%s)" % (size_in, val)) return val def bytes2human(size): """convert size in bytes to human readable""" if not isinstance(size, numeric_types): raise ValueError('size must be a numeric value, not %s', type(size)) isize = int(size) if isize != size: raise ValueError('size "%s" is not a whole number.' % size) if isize < 0: raise ValueError('size "%d" < 0.' 
% isize) mpliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40} unit_order = sorted(mpliers, key=lambda x: -1 * mpliers[x]) unit = next((u for u in unit_order if (isize / mpliers[u]) >= 1), 'B') return str(int(isize / mpliers[unit])) + unit def import_module(import_str): """Import a module.""" __import__(import_str) return sys.modules[import_str] def try_import_module(import_str, default=None): """Try to import a module.""" try: return import_module(import_str) except ImportError: return default def is_file_not_found_exc(exc): return (isinstance(exc, (IOError, OSError)) and hasattr(exc, 'errno') and exc.errno in (errno.ENOENT, errno.EIO, errno.ENXIO)) class MergedCmdAppend(argparse.Action): """This appends to a list in order of appearence both the option string and the value""" def __call__(self, parser, namespace, values, option_string=None): if getattr(namespace, self.dest, None) is None: setattr(namespace, self.dest, []) getattr(namespace, self.dest).append((option_string, values,)) def json_dumps(data): return json.dumps(data, indent=1, sort_keys=True, separators=(',', ': ')) def get_platform_arch(): platform2arch = { 'i586': 'i386', 'i686': 'i386', 'x86_64': 'amd64', 'ppc64le': 'ppc64el', 'aarch64': 'arm64', } return platform2arch.get(platform.machine(), platform.machine()) def basic_template_render(content, params): """This does simple replacement of bash variable like templates. It identifies patterns like ${a} or $a and can also identify patterns like ${a.b} or $a.b which will look for a key 'b' in the dictionary rooted by key 'a'. """ def replacer(match): """ replacer replacer used in regex match to replace content """ # Only 1 of the 2 groups will actually have a valid entry. name = match.group(1) if name is None: name = match.group(2) if name is None: raise RuntimeError("Match encountered but no valid group present") path = collections.deque(name.split(".")) selected_params = params while len(path) > 1: key = path.popleft() if not isinstance(selected_params, dict): raise TypeError("Can not traverse into" " non-dictionary '%s' of type %s while" " looking for subkey '%s'" % (selected_params, selected_params.__class__.__name__, key)) selected_params = selected_params[key] key = path.popleft() if not isinstance(selected_params, dict): raise TypeError("Can not extract key '%s' from non-dictionary" " '%s' of type %s" % (key, selected_params, selected_params.__class__.__name__)) return str(selected_params[key]) return BASIC_MATCHER.sub(replacer, content) def render_string(content, params): """ render_string render a string following replacement rules as defined in basic_template_render returning the string """ if not params: params = {} return basic_template_render(content, params) def is_resolvable(name): """determine if a url is resolvable, return a boolean This also attempts to be resilent against dns redirection. Note, that normal nsswitch resolution is used here. So in order to avoid any utilization of 'search' entries in /etc/resolv.conf we have to append '.'. The top level 'invalid' domain is invalid per RFC. And example.com should also not exist. The random entry will be resolved inside the search list. 
""" global _DNS_REDIRECT_IP if _DNS_REDIRECT_IP is None: badips = set() badnames = ("does-not-exist.example.com.", "example.invalid.") badresults = {} for iname in badnames: try: result = socket.getaddrinfo(iname, None, 0, 0, socket.SOCK_STREAM, socket.AI_CANONNAME) badresults[iname] = [] for (_, _, _, cname, sockaddr) in result: badresults[iname].append("%s: %s" % (cname, sockaddr[0])) badips.add(sockaddr[0]) except (socket.gaierror, socket.error): pass _DNS_REDIRECT_IP = badips if badresults: LOG.debug("detected dns redirection: %s", badresults) try: result = socket.getaddrinfo(name, None) # check first result's sockaddr field addr = result[0][4][0] if addr in _DNS_REDIRECT_IP: LOG.debug("dns %s in _DNS_REDIRECT_IP", name) return False LOG.debug("dns %s resolved to '%s'", name, result) return True except (socket.gaierror, socket.error): LOG.debug("dns %s failed to resolve", name) return False def is_valid_ipv6_address(addr): try: socket.inet_pton(socket.AF_INET6, addr) except socket.error: return False return True def is_resolvable_url(url): """determine if this url is resolvable (existing or ip).""" return is_resolvable(urlparse(url).hostname) class RunInChroot(ChrootableTarget): """Backwards compatibility for RunInChroot (LP: #1617375). It needs to work like: with RunInChroot("/target") as in_chroot: in_chroot(["your", "chrooted", "command"])""" __call__ = ChrootableTarget.subp def shlex_split(str_in): # shlex.split takes a string # but in python2 if input here is a unicode, encode it to a string. # http://stackoverflow.com/questions/2365411/ # python-convert-unicode-to-ascii-without-errors if sys.version_info.major == 2: try: if isinstance(str_in, unicode): str_in = str_in.encode('utf-8') except NameError: pass return shlex.split(str_in) else: return shlex.split(str_in) def load_shell_content(content, add_empty=False, empty_val=None): """Given shell like syntax (key=value\nkey2=value2\n) in content return the data in dictionary form. If 'add_empty' is True then add entries in to the returned dictionary for 'VAR=' variables. Set their value to empty_val.""" data = {} for line in shlex_split(content): key, value = line.split("=", 1) if not value: value = empty_val if add_empty or value: data[key] = value return data def uses_systemd(): """ Check if current enviroment uses systemd by testing if /run/systemd/system is a directory; only present if systemd is available on running system. """ global _USES_SYSTEMD if _USES_SYSTEMD is None: _USES_SYSTEMD = os.path.isdir('/run/systemd/system') return _USES_SYSTEMD # vi: ts=4 expandtab syntax=python curtin-21.3/curtin/version.py000066400000000000000000000017411415350476600163240ustar00rootroot00000000000000# This file is part of curtin. See LICENSE file for copyright and license info. 
from curtin import __version__ as old_version import os import subprocess _PACKAGED_VERSION = '@@PACKAGED_VERSION@@' _PACKED_VERSION = '@@PACKED_VERSION@@' def version_string(): """ Extract a version string from curtin source or version file""" if not _PACKAGED_VERSION.startswith('@@'): return _PACKAGED_VERSION if not _PACKED_VERSION.startswith('@@'): return _PACKED_VERSION version = old_version gitdir = os.path.abspath(os.path.join(__file__, '..', '..', '.git')) if os.path.exists(gitdir): try: out = subprocess.check_output( ['git', 'describe', '--long', '--abbrev=8', "--match=[0-9][0-9]*"], cwd=os.path.dirname(gitdir)) version = out.decode('utf-8').strip() except subprocess.CalledProcessError: pass return version # vi: ts=4 expandtab syntax=python curtin-21.3/debian/000077500000000000000000000000001415350476600142005ustar00rootroot00000000000000curtin-21.3/debian/changelog.trunk000066400000000000000000000002221415350476600172100ustar00rootroot00000000000000curtin (UPSTREAM_VER-0ubuntu1) UNRELEASED; urgency=low * Initial release -- Scott Moser Mon, 29 Jul 2013 16:12:09 -0400 curtin-21.3/debian/clean000066400000000000000000000000211415350476600151760ustar00rootroot00000000000000build/ .coverage curtin-21.3/debian/compat000066400000000000000000000000021415350476600153760ustar00rootroot000000000000007 curtin-21.3/debian/control000066400000000000000000000031121415350476600156000ustar00rootroot00000000000000Source: curtin Section: admin Priority: extra Standards-Version: 3.9.6 Maintainer: Ubuntu Developers Build-Depends: debhelper (>= 7), dh-python, python3, python3-apt, python3-coverage, python3-mock, python3-nose, python3-oauthlib, python3-setuptools, python3-yaml Homepage: http://launchpad.net/curtin X-Python3-Version: >= 3.2 Package: curtin Architecture: all Priority: extra Depends: bcache-tools, btrfs-progs | btrfs-tools, dosfstools, file, gdisk, lvm2, mdadm, parted, probert-storage | probert, python3-curtin (= ${binary:Version}), udev, xfsprogs, ${misc:Depends} Description: Library and tools for the curtin installer This package provides the curtin installer. . Curtin is an installer that is blunt, brief, snappish, snippety and unceremonious. Package: curtin-common Architecture: all Priority: extra Depends: ${misc:Depends} Description: Library and tools for curtin installer This package contains utilities for the curtin installer. Package: python3-curtin Section: python Architecture: all Priority: extra Depends: curtin-common (= ${binary:Version}), python3-apt, python3-oauthlib, python3-yaml, wget, ${misc:Depends}, ${python3:Depends} Description: Library and tools for curtin installer This package provides python3 library for use by curtin. curtin-21.3/debian/copyright000066400000000000000000000011401415350476600161270ustar00rootroot00000000000000Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: curtin Upstream-Contact: Scott Moser Source: https://launchpad.net/curtin Files: * Copyright: 2013, Canonical Ltd. License: AGPL-3 GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007 . Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. . 
The complete text of the AGPL version 3 can be seen in http://www.gnu.org/licenses/agpl-3.0.html curtin-21.3/debian/curtin-common.install000066400000000000000000000000311415350476600203540ustar00rootroot00000000000000usr/lib/curtin/helpers/* curtin-21.3/debian/curtin.install000066400000000000000000000000121415350476600170650ustar00rootroot00000000000000usr/bin/* curtin-21.3/debian/python-curtin.install000066400000000000000000000000451415350476600204120ustar00rootroot00000000000000usr/lib/python2*/*-packages/curtin/* curtin-21.3/debian/python3-curtin.install000066400000000000000000000000451415350476600204750ustar00rootroot00000000000000usr/lib/python3*/*-packages/curtin/* curtin-21.3/debian/rules000077500000000000000000000013101415350476600152530ustar00rootroot00000000000000#!/usr/bin/make -f PY3VERS := $(shell py3versions -r) DEB_VERSION := $(shell dpkg-parsechangelog --show-field=Version) %: dh $@ --with=python3 override_dh_auto_install: dh_auto_install set -ex; for python in $(PY3VERS); do \ $$python setup.py build --executable=/usr/bin/python && \ $$python setup.py install --root=$(CURDIR)/debian/tmp --install-layout=deb; \ done chmod 755 $(CURDIR)/debian/tmp/usr/lib/curtin/helpers/* for f in $$(find $(CURDIR)/debian/tmp/usr/lib -type f -name version.py); do [ -f "$$f" ] || continue; sed -i 's,@@PACKAGED_VERSION@@,$(DEB_VERSION),' "$$f"; done override_dh_auto_test: make unittest3 override_dh_clean: dh_clean find . -name __pycache__ -exec rm -rf {} + curtin-21.3/debian/source/000077500000000000000000000000001415350476600155005ustar00rootroot00000000000000curtin-21.3/debian/source/format000066400000000000000000000000141415350476600167060ustar00rootroot000000000000003.0 (quilt) curtin-21.3/doc/000077500000000000000000000000001415350476600135235ustar00rootroot00000000000000curtin-21.3/doc/Makefile000066400000000000000000000126741415350476600151750ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/curtin.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/curtin.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/curtin" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/curtin" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." 
man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." curtin-21.3/doc/conf.py000066400000000000000000000202601415350476600150220ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # curtin documentation build configuration file, created by # sphinx-quickstart on Thu May 30 16:03:34 2013. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # Fix path so we can import curtin.__version__ sys.path.insert(1, os.path.realpath(os.path.join( os.path.dirname(__file__), '..'))) import curtin # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [] # Add any paths that contain templates here, relative to this directory. templates_path = ['templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'curtin' copyright = u'2016, Scott Moser, Ryan Harper' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = curtin.__version__ # The full version, including alpha/beta/rc tags. release = version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. 
#language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'classic' # on_rtd is whether we are on readthedocs.org, this line of code grabbed from # docs.readthedocs.org on_rtd = os.environ.get('READTHEDOCS', None) == 'True' if not on_rtd: # only import and set the theme if we're building docs locally import sphinx_rtd_theme html_theme = 'sphinx_rtd_theme' html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] # otherwise, readthedocs.org uses their theme by default, so no need to specify # it # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". #html_static_path = ['static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. 
#html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'curtindoc' # -- Options for LaTeX output -------------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'curtin.tex', u'curtin Documentation', u'Scott Moser', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'curtin', u'curtin Documentation', [u'Scott Moser'], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------------ # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'curtin', u'curtin Documentation', u'Scott Moser', 'curtin', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' curtin-21.3/doc/devel/000077500000000000000000000000001415350476600146225ustar00rootroot00000000000000curtin-21.3/doc/devel/clear_holders_doc.txt000066400000000000000000000077741415350476600210350ustar00rootroot00000000000000The new version of clear_holders is based around a data structure called a holder_tree which represents the current storage hirearchy above a specified starting device. Each node in a holders tree contains data about the node and a key 'holders' which contains a list of all nodes that depend on it. The keys in a holdres_tree node are: - device: the path to the device in /sys/class/block - dev_type: what type of storage layer the device is. 
  possible values:
  - disk
  - lvm
  - crypt
  - raid
  - bcache
  - disk
- name: the kname of the device (used for display)
- holders: holders_trees for devices depending on the current device

A holders tree can be generated for a device using the function
clear_holders.gen_holders_tree. The device can be specified either as a path
in /sys/class/block or as a path in /dev.

The new implementation of block.clear_holders shuts down storage devices in a
holders tree starting from the leaves of the tree and ascending towards the
root. The old implementation of clear_holders ascended up each path of the
tree separately, in a pattern similar to depth first search. The problem with
the old implementation is that in some cases either an attempt would be made
to remove one storage device while other devices depended on it, or
clear_holders would attempt to shut down the same storage device several
times. In order to cope with this the old version of clear_holders had logic
to handle expected failures and hope for the best moving forward. The new
version of clear_holders is able to run without many anticipated failures.

The logic to plan what order to shut down storage layers in is in
clear_holders.plan_shutdown_holders_trees. This function accepts either a
single holders tree or a list of holders trees. When run with a list of
holders trees, it assumes that all of these trees start at basically the same
layer in the overall storage hierarchy for the system (i.e. a list of holders
trees starting from all of the target installation disks). This function
returns a list of dictionaries, with each dictionary containing the keys:

- device: the path to the device in /sys/class/block
- dev_type: what type of storage layer the device is.
  possible values:
  - disk
  - lvm
  - crypt
  - raid
  - bcache
  - disk
- level: the level of the device in the current storage hierarchy
  (starting from 0)

The items in the list returned by clear_holders.plan_shutdown_holders_trees
should be processed in order to make sure the holders trees are shut down
fully.

The main interface for clear_holders is the function
clear_holders.clear_holders. If the system has just been booted it could be
beneficial to run the function clear_holders.start_clear_holders_deps before
using clear_holders.clear_holders. This ensures clear_holders will be able to
properly shut down storage devices.

The function clear_holders.clear_holders can be passed either a single device
or a list of devices and will shut down all storage devices above the
device(s). The devices can be specified either by path in /dev or by path in
/sys/class/block.

In order to test if a device or devices are free to be partitioned/formatted,
the function clear_holders.assert_clear can be passed either a single device
or a list of devices, with devices specified either by path in /dev or by path
in /sys/class/block. If there are any storage devices that depend on one of
the devices passed to clear_holders.assert_clear, then an OSError will be
raised. If clear_holders.assert_clear does not raise any errors, then the
devices specified should be ready for partitioning.

It is possible to query further information about storage devices using
clear_holders. Holders for an individual device can be queried using
clear_holders.get_holders. Results are returned as a list of knames for the
holding devices.

A holders tree can be printed in a human readable format using
clear_holders.format_holders_tree().
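As a rough usage sketch (with /dev/sda standing in only as a placeholder
installation disk, and the module imported from curtin.block as referenced
above), the functions described here fit together like this:

  from curtin.block import clear_holders

  # best run once after boot so storage layers can be shut down properly
  clear_holders.start_clear_holders_deps()

  # shut down every storage layer stacked on top of the disk, then verify
  clear_holders.clear_holders('/dev/sda')
  clear_holders.assert_clear('/dev/sda')

Before clearing, the holders tree for a disk can be generated and printed:

  tree = clear_holders.gen_holders_tree('/dev/sda')
  print(clear_holders.format_holders_tree(tree))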
Example output:

sda
|-- sda1
|-- sda2
`-- sda5
    `-- dm-0
        |-- dm-1
        `-- dm-2
            `-- dm-3
curtin-21.3/doc/index.rst000066400000000000000000000014031415350476600153620ustar00rootroot00000000000000.. curtin documentation master file, created by
   sphinx-quickstart on Thu May 30 16:03:34 2013.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to curtin's documentation!
==================================

This is 'curtin', the curt installer. It is blunt, brief, snappish, snippety
and unceremonious. Its goal is to install an operating system as quickly as
possible.

Contents:

.. toctree::
   :maxdepth: 2

   topics/overview
   topics/config
   topics/apt_source
   topics/networking
   topics/storage
   topics/curthooks
   topics/reporting
   topics/hacking
   topics/integration-testing

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
curtin-21.3/doc/topics/000077500000000000000000000000001415350476600150245ustar00rootroot00000000000000curtin-21.3/doc/topics/apt_source.rst000066400000000000000000000217351415350476600177300ustar00rootroot00000000000000==========
APT Source
==========

This part of curtin is meant to allow influencing the apt behaviour and
configuration.

By default - if no apt config is provided - it does nothing. That keeps
behavior compatible on upgrades.

The feature has an optional target argument which - by default - is used to
modify the environment that curtin currently installs (@TARGET_MOUNT_POINT).

Features
~~~~~~~~

* Add PGP keys to the APT trusted keyring

  - add via short keyid
  - add via long key fingerprint
  - specify a custom keyserver to pull from
  - add raw keys (which makes you independent of keyservers)

* Influence global apt configuration

  - adding ppa's
  - replacing mirror, security mirror and release in sources.list
  - able to provide a fully custom template for sources.list
  - add arbitrary apt.conf settings
  - provide debconf configurations
  - disabling suites (=pockets)
  - disabling components (multiverse, universe, restricted)
  - per architecture mirror definition

Configuration
~~~~~~~~~~~~~

The general configuration of the apt feature is under an element called
``apt``.

This can have various "global" subelements as listed in the examples below.
The file ``apt-source.yaml`` holds more examples.

These global configurations are valid throughout all of the apt feature.
So for example a global specification of a ``primary`` mirror will apply to
all rendered sources entries.

Then there is a section ``sources`` which can hold any number of source
subelements itself. The key is the filename and will be prepended by
/etc/apt/sources.list.d/ if it doesn't start with a ``/``. There are certain
cases - where no content is written into a source.list file - in which the
filename will be ignored, yet it can still be used as an index for merging.

The values inside the entries consist of the following optional entries

* ``source``: a sources.list entry (some variable replacements apply)
* ``keyid``: providing a key to import via shortid or fingerprint
* ``key``: providing a raw PGP key
* ``keyserver``: specify an alternate keyserver to pull keys from that
  were specified by keyid

The section "sources" is a dictionary (unlike most block/net configs which
are lists).
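For illustration, a single ``sources`` entry combining these optional keys
might look like the following (the filename, key id, keyserver and mirror URL
here are purely made up)::

  apt:
    sources:
      myrepo.list:
        source: "deb http://archive.example.com/ubuntu $RELEASE main"
        keyid: F430BBA5
        keyserver: keyserver.example.com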
This format allows merging between multiple input files better than a list
would ::

  sources:
     s1: {'key': 'key1', 'source': 'source1'}

  sources:
     s2: {'key': 'key2'}
     s1: {'keyserver': 'foo'}

This would be merged into

  s1: {'key': 'key1', 'source': 'source1', 'keyserver': 'foo'}
  s2: {'key': 'key2'}

Here is just one of the most common examples for this feature: install
with curtin in an isolated environment (derived repository):

For that we need to:

* insert the PGP key of the local repository to be trusted

  - since you are locked down you can't pull from keyserver.ubuntu.com
  - if you have an internal keyserver you could pull from there, but let us
    assume you don't even have that; so you have to provide the raw key
  - in the example I'll use the key of the "Ubuntu CD Image Automatic Signing
    Key" which makes no sense as it is in the trusted keyring anyway, but it
    is a good example. (Also the key is shortened to stay readable)

    ::

     -----BEGIN PGP PUBLIC KEY BLOCK-----
     Version: GnuPG v1
     mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O
     RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1
     Twx6DKLF+3rF5nf1F3Q=
     =PBAe
     -----END PGP PUBLIC KEY BLOCK-----

* replace the mirrors used with some mirrors available inside the isolated
  environment for apt to pull repository data from.

  - let's consider we have a local mirror at ``mymirror.local`` but otherwise
    following the usual paths
  - make an example with a partial mirror that doesn't mirror the backports
    suite, so backports have to be disabled

That would be specified as ::

  apt:
    primary:
      - arches: [default]
        uri: http://mymirror.local/ubuntu/
    disable_suites: [backports]
    sources:
      localrepokey:
        key: | # full key as block
          -----BEGIN PGP PUBLIC KEY BLOCK-----
          Version: GnuPG v1

          mQGiBEFEnz8RBAC7LstGsKD7McXZgd58oN68KquARLBl6rjA2vdhwl77KkPPOr3O
          RwIbDAAKCRBAl26vQ30FtdxYAJsFjU+xbex7gevyGQ2/mhqidES4MwCggqQyo+w1
          Twx6DKLF+3rF5nf1F3Q=
          =PBAe
          -----END PGP PUBLIC KEY BLOCK-----

The file examples/apt-source.yaml holds various further examples that can be
configured with this feature.

Common snippets
~~~~~~~~~~~~~~~

This is a collection of additional ideas people can use the feature for
customizing their to-be-installed system.

* enable proposed on installing ::

   apt:
     sources:
       proposed.list:
         source: |
           deb $MIRROR $RELEASE-proposed main restricted universe multiverse

* Make debug symbols available ::

   apt:
     sources:
       ddebs.list:
         source: |
           deb http://ddebs.ubuntu.com $RELEASE main restricted universe multiverse
           deb http://ddebs.ubuntu.com $RELEASE-updates main restricted universe multiverse
           deb http://ddebs.ubuntu.com $RELEASE-security main restricted universe multiverse
           deb http://ddebs.ubuntu.com $RELEASE-proposed main restricted universe multiverse

Timing
~~~~~~

The feature is implemented at the stage of curthooks_commands, which runs
just after curtin has extracted the image to the target.

Additionally it can be run as a standalone command "curtin -v --config apt-config".

This will pick up the target from the environment variable that is set by
curtin. If you want to use it for a different target or outside of usual
curtin handling, you can add ``--target `` to it to override the target path.
This target should have at least a minimal system with apt, apt-add-repository
and dpkg being installed for the functionality to work.

Dependencies
~~~~~~~~~~~~

Cloud-init might need to resolve dependencies and install packages in the
ephemeral environment to run curtin.
Therefore it is recommended to not only provide an apt configuration to curtin
for the target, but also one to the install environment via cloud-init.

apt preserve_sources_list setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

cloud-init and curtin treat the ``preserve_sources_list`` setting slightly
differently, and thus this setting deserves its own section.

Interpretation / Meaning
------------------------

curtin reads ``preserve_sources_list`` to indicate whether or not it should
update the target system's ``/etc/apt/sources.list``. This includes replacing
the mirrors used (apt/primary...).

cloud-init reads ``preserve_sources_list`` to indicate whether or not it
should *render* ``/etc/apt/sources.list`` from its built-in template.

defaults
--------

Just for reference, the ``preserve_sources_list`` defaults in curtin and
cloud-init are:

 * curtin: **true**
   By default curtin will not modify ``/etc/apt/sources.list`` in the
   installed OS. It is assumed that this file is intentionally configured
   as it is.
 * cloud-init: **false**
 * cloud-init in ephemeral environment: **false**
 * cloud-init system installed by curtin: **true**
   (curtin writes this to a file ``/etc/cloud/cloud.cfg.d/curtin-preserve-sources.cfg``
   in the target). It does this because we have already written the
   sources.list that is desired in the installer. We do not want cloud-init
   to overwrite it when it boots.

preserve_sources_list in MAAS
-----------------------------

Curtin and cloud-init use the same ``apt`` configuration language.
MAAS provides apt config in four different scenarios.

1. To cloud-init in ephemeral environment (rescue, install or commissioning)
   Here MAAS **should not send a value**. If it wants to be explicit it
   should send ``preserve_sources_list: false``.

2. To curtin in curtin config
   MAAS **should send ``preserve_sources_list: false``**. curtin will
   correctly read and update mirrors in official Ubuntu images, so setting
   this to 'false' is correct. In some cases for custom images, the user
   might want to be able to have their /etc/apt/sources.list left untouched
   entirely. In such cases they may want to override this value.

3. To cloud-init via curtin config in debconf_selections.
   MAAS should **not send a value**. Curtin will handle telling cloud-init
   to not update /etc/apt/sources.list. MAAS does not need to do this.

4. To installed system via vendor-data or user-data.
   MAAS should **not send a value**. MAAS does not currently send a value.
   The user could send one in user-data, but if they did, presumably they
   did that for a reason.

Legacy format
-------------

Versions of cloud-init in 14.04 and older only support:

.. code-block:: yaml

   apt_preserve_sources_list: VALUE

Versions of cloud-init present in 16.04+ read the "new" style apt
configuration, but support the old style configuration also. The new style
configuration is:

.. code-block:: yaml

   apt:
     preserve_sources_list: VALUE

**Note**: If versions of cloud-init that support the new style config
receive conflicting values in old style and new style, cloud-init will raise
an exception and exit with failure. It simply doesn't know what behavior is
desired.
curtin-21.3/doc/topics/config.rst000066400000000000000000000566051415350476600170330ustar00rootroot00000000000000====================
Curtin Configuration
====================

Curtin exposes a number of configuration options for controlling Curtin
behavior during installation.
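As a quick orientation, a small configuration might combine several of the
top level keys documented below (the values here are illustrative only and
are taken from the examples in the following sections)::

  apt_proxy: http://squid.mirror:3267/

  grub:
    install_devices:
      - /dev/sda1

  install:
    log_file: /tmp/install.log

  kernel:
    fallback-package: linux-image-generic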
Configuration options
---------------------
Curtin's top level config keys are as follows:


- apt_mirrors (``apt_mirrors``)
- apt_proxy (``apt_proxy``)
- block-meta (``block``)
- curthooks (``curthooks``)
- debconf_selections (``debconf_selections``)
- disable_overlayroot (``disable_overlayroot``)
- grub (``grub``)
- http_proxy (``http_proxy``)
- install (``install``)
- kernel (``kernel``)
- kexec (``kexec``)
- multipath (``multipath``)
- network (``network``)
- pollinate (``pollinate``)
- power_state (``power_state``)
- proxy (``proxy``)
- reporting (``reporting``)
- restore_dist_interfaces: (``restore_dist_interfaces``)
- sources (``sources``)
- stages (``stages``)
- storage (``storage``)
- swap (``swap``)
- system_upgrade (``system_upgrade``)
- write_files (``write_files``)


apt_mirrors
~~~~~~~~~~~
Configure APT mirrors for ``ubuntu_archive`` and ``ubuntu_security``

**ubuntu_archive**: **

**ubuntu_security**: **

If the target OS includes /etc/apt/sources.list, Curtin will replace the
default values for each key set with the supplied mirror URL.

**Example**::

  apt_mirrors:
    ubuntu_archive: http://local.archive/ubuntu
    ubuntu_security: http://local.archive/ubuntu


apt_proxy
~~~~~~~~~
Curtin will configure an APT HTTP proxy in the target OS

**apt_proxy**: **

**Example**::

  apt_proxy: http://squid.mirror:3267/


block-meta
~~~~~~~~~~
Configure how Curtin selects and configures disks on the target system
without providing a custom configuration (mode=simple).

**devices**: **

The ``devices`` parameter is a list of block device paths that Curtin may
select from when choosing where to install the OS.

**boot-partition**: **

The ``boot-partition`` parameter controls how to configure the boot partition
with the following parameters:

**enabled**: **

Enabled will forcibly set up a partition on the target device for booting.

**format**: *<['uefi', 'gpt', 'prep', 'mbr']>*

Specify the partition format. Some formats, like ``uefi`` and ``prep``
are restricted by platform characteristics.

**fstype**: **

Specify the filesystem format on the boot partition.

**label**: **

Specify the filesystem label on the boot partition.

**Example**::

  block-meta:
    devices:
      - /dev/sda
      - /dev/sdb
    boot-partition:
      - enabled: True
        format: gpt
        fstype: ext4
        label: my-boot-partition


curthooks
~~~~~~~~~
Configure how Curtin determines what :ref:`curthooks` to run during the
installation process.

**mode**: *<['auto', 'builtin', 'target']>*

The default mode is ``auto``.

In ``auto`` mode, curtin will execute curthooks within the image if present.
For images without curthooks inside, curtin will execute its built-in hooks.

Currently the built-in curthooks support the following OS families:

- Ubuntu
- Centos

When specifying ``builtin``, curtin will only run the curthooks present in
Curtin, ignoring any curthooks that may be present in the target operating
system.

When specifying ``target``, curtin will attempt to run the curthooks in the
target operating system. If the target does NOT contain any curthooks, then
the built-in curthooks will be run instead.

Any errors during execution of curthooks (built-in or target) will fail the
installation.

**Example**::

  # ignore any target curthooks
  curthooks:
    mode: builtin

  # Only run target curthooks, fall back to built-in
  curthooks:
    mode: target


debconf_selections
~~~~~~~~~~~~~~~~~~
Curtin will update the target with debconf set-selection values. Users will
need to be familiar with the package debconf options. Users can probe a
package's debconf settings by using ``debconf-get-selections``.
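For example, to inspect the selections a package currently carries (a sketch;
``debconf-get-selections`` is shipped in the ``debconf-utils`` package, and
the package name grepped for here is only illustrative)::

  # run inside the target (e.g. via chroot) to list current selections
  debconf-get-selections | grep cloud-init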
**selection_name**: **

``debconf-set-selections`` is in the form::

  <packagename> <packagename/option-name> <type> <value>

**Example**::

  debconf_selections:
    set1: |
      cloud-init cloud-init/datasources multiselect MAAS
      lxd lxd/bridge-name string lxdbr0
    set2: lxd lxd/setup-bridge boolean true


disable_overlayroot
~~~~~~~~~~~~~~~~~~~
Curtin disables overlayroot in the target by default.

**disable_overlayroot**: **

**Example**::

  disable_overlayroot: False


grub
~~~~
Curtin configures grub as the target machine's boot loader. Users can control
a few options to tailor how the system will boot after installation.

**install_devices**: **

Specify a list of devices onto which grub will attempt to install.

**replace_linux_default**: **

Controls whether grub-install will update the Linux Default target value
during installation.

**update_nvram**: **

Certain platforms, like ``uefi`` and ``prep`` systems, utilize NVRAM to hold
boot configuration settings which control the order in which devices are
booted. Curtin by default will enable NVRAM updates to boot configuration
settings. Users may disable NVRAM updates by setting the ``update_nvram``
value to ``False``.

**probe_additional_os**: **

This setting controls grub's os-prober functionality and Curtin will disable
this feature by default to prevent grub from searching for other operating
systems and adding them to the grub menu.

When False, curtin writes "GRUB_DISABLE_OS_PROBER=true" to the target system
in /etc/default/grub.d/50-curtin-settings.cfg. If True, curtin won't modify
the grub configuration value in the target system.

**terminal**: *<['unmodified', 'console', ...]>*

Configure the target system grub option GRUB_TERMINAL to the ``terminal``
value, which is written to /etc/default/grub.d/50-curtin-settings.cfg. Curtin
does not attempt to validate this string; grub2 has many values that it
accepts and the list is platform dependent. If ``terminal`` is not provided,
Curtin will set the value to 'console'. If the ``terminal`` value is
'unmodified' then Curtin will not set any value at all and will use Grub
defaults.

**reorder_uefi**: **

Curtin is typically used with MAAS where the systems are configured to boot
from the network, leaving MAAS in control. On UEFI systems, after installing
a bootloader the system's BootOrder may be updated to boot from the new
entry. This breaks MAAS control over the system as all subsequent reboots of
the node will no longer boot over the network. Therefore, if ``reorder_uefi``
is True, curtin will modify the UEFI BootOrder settings to place the
currently booted entry (BootCurrent) as the first option after installing the
new target OS into the UEFI boot menu. The result is that the system will
boot from the same device that it booted to run curtin; for MAAS this will be
a network device.

On some UEFI systems the BootCurrent entry may not be present. This can cause
a system to not boot to the same device that it was previously booting from.
If BootCurrent is not present, curtin will update the BootOrder such that all
Network related entries are placed before the newly installed boot entry and
all other entries are placed at the end. This enables the system to network
boot first and on failure will boot the most recently installed entry.

This setting is ignored if *update_nvram* is False.

**reorder_uefi_force_fallback**: **

The fallback reordering mechanism is only active if BootCurrent is not
present in the efibootmgr output. The fallback reordering method may be
enabled even if BootCurrent is present if *reorder_uefi_force_fallback* is
True.
This setting is ignored if *update_nvram* or *reorder_uefi* are False.

**remove_duplicate_entries**: *<boolean: default True>*

When curtin updates UEFI NVRAM it will remove duplicate entries that are
present in the UEFI menu. If you do not wish for curtin to remove duplicate
entries, set *remove_duplicate_entries* to False. This setting is ignored if
*update_nvram* is False.

**Example**::

  grub:
     install_devices:
       - /dev/sda1
     replace_linux_default: False
     update_nvram: True
     terminal: serial
     remove_duplicate_entries: True

**Default terminal value, GRUB_TERMINAL=console**::

  grub:
     install_devices:
       - /dev/sda1

**Don't set GRUB_TERMINAL in target**::

  grub:
     install_devices:
       - /dev/sda1
     terminal: unmodified

**Allow grub to probe for additional OSes**::

  grub:
     install_devices:
       - /dev/sda1
     probe_additional_os: True

**Avoid writing any settings to /etc/default/grub.d/50-curtin-settings.cfg**::

  grub:
     install_devices:
       - /dev/sda1
     probe_additional_os: True
     terminal: unmodified

**Enable Fallback UEFI Reordering**::

  grub:
     reorder_uefi: true
     reorder_uefi_force_fallback: true


http_proxy
~~~~~~~~~~
Curtin will export the ``http_proxy`` value into the installer environment.

**Deprecated**: This setting is deprecated in favor of ``proxy`` below.

**http_proxy**: **

**Example**::

  http_proxy: http://squid.proxy:3728/


install
~~~~~~~
Configure Curtin's install options.

**log_file**: **

Curtin logs install progress by default to /var/log/curtin/install.log

**error_tarfile**: **

If error_tarfile is not None and curtin encounters an error, this tarfile
will be created. It includes logs, configuration and system info to aid
triage and bug filing. When unset, error_tarfile defaults to
/var/log/curtin/curtin-logs.tar.

**post_files**: **

Curtin by default will post the ``log_file`` value to any configured
reporter.

**save_install_config**: **

Curtin will save the merged configuration data into the target OS at the path
of ``save_install_config``. This defaults to /root/curtin-install-cfg.yaml

**save_install_log**: **

Curtin will copy the install log to a specific path in the target
filesystem. This defaults to /root/install.log

**target**: **

Control where curtin mounts the target device for installing the OS. If this
value is unset, curtin picks a suitable path under a temporary directory. If
a value is set, then curtin will utilize the ``target`` value instead.

**unmount**: *disabled*

If this key is set to the string 'disabled' then curtin will not unmount the
target filesystem when install is complete. This skips unmounting in all
cases of install success or failure.

**Example**::

  install:
     log_file: /tmp/install.log
     error_tarfile: /var/log/curtin/curtin-error-logs.tar
     post_files:
       - /tmp/install.log
       - /var/log/syslog
     save_install_config: /root/myconf.yaml
     save_install_log: /var/log/curtin-install.log
     target: /my_mount_point
     unmount: disabled


kernel
~~~~~~
Configure how Curtin selects which kernel to install into the target image.

If ``kernel`` is not configured, Curtin will use the default mapping below
and determine which ``package`` value to use by looking up the current
release and the current kernel version running.

**fallback-package**: **

Specify a kernel package name to be used if the default package is not
available.
**mapping**: **

Default mapping for Releases to package names is as follows::

   precise:
     3.2.0:
     3.5.0: -lts-quantal
     3.8.0: -lts-raring
     3.11.0: -lts-saucy
     3.13.0: -lts-trusty
   trusty:
     3.13.0:
     3.16.0: -lts-utopic
     3.19.0: -lts-vivid
     4.2.0: -lts-wily
     4.4.0: -lts-xenial
   xenial:
     4.3.0:
     4.4.0:


**package**: **

Specify the exact package to install in the target OS.

**Example**::

  kernel:
    fallback-package: linux-image-generic
    package: linux-image-generic-lts-xenial
    mapping:
      - xenial:
        - 4.4.0: -my-custom-kernel


kexec
~~~~~
Curtin can use kexec to "reboot" into the target OS.

**mode**: **

Enable rebooting with kexec.

**Example**::

  kexec:
    mode: "on"


multipath
~~~~~~~~~
Curtin will detect and autoconfigure multipath by default to enable boot for
systems with multipath. Curtin does not apply any advanced configuration or
tuning, rather it uses distro defaults and provides enough configuration to
enable booting.

**mode**: *<['auto', 'disabled']>*

Defaults to auto which will configure enough to enable booting on multipath
devices. Disabled will prevent curtin from installing or configuring
multipath.

**overwrite_bindings**: **

If ``overwrite_bindings`` is True then Curtin will generate a new bindings
file for multipath, overriding any existing binding in the target image.

**Example**::

  multipath:
    mode: auto
    overwrite_bindings: True


network
~~~~~~~
Configure networking (see Networking section for details).

**network_option_1**: *