curtin-0.1.0~bzr365/LICENSE
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.
curtin-0.1.0~bzr365/Makefile
TOP := $(abspath $(dir $(lastword $(MAKEFILE_LIST))))
CWD := $(shell pwd)
PYTHON ?= python3
CURTIN_VMTEST_IMAGE_SYNC ?= False
export CURTIN_VMTEST_IMAGE_SYNC
noseopts ?= -vv --nologcapture

build:

bin/curtin: curtin/pack.py tools/write-curtin
	$(PYTHON) tools/write-curtin bin/curtin

check: pep8 pyflakes pyflakes3 unittest

pep8:
	@$(CWD)/tools/run-pep8

pyflakes:
	@$(CWD)/tools/run-pyflakes

pyflakes3:
	@$(CWD)/tools/run-pyflakes3

unittest:
	nosetests $(noseopts) tests/unittests
	nosetests3 $(noseopts) tests/unittests

docs:
	@which sphinx-build || \
		{ echo "need sphinx-build. get it:"; \
		  echo "  apt-get install -qy python3-sphinx"; exit 1; } 1>&2
	make -C doc html

# By default don't sync images when running all tests.
vmtest:
	nosetests3 $(noseopts) tests/vmtests

vmtest-deps:
	@$(CWD)/tools/vmtest-system-setup

sync-images:
	@$(CWD)/tools/vmtest-sync-images

.PHONY: all test pyflakes pyflakes3 pep8 build
curtin-0.1.0~bzr365/README
This is 'curtin', the curt installer. It is blunt, brief, snappish, snippety
and unceremonious.

Its goal is to install an operating system as quickly as possible.
curtin-0.1.0~bzr365/requirements.txt
# See: https://bugs.launchpad.net/pbr/+bug/1384919 for why this is here...
pbr>=0.11,<2.0
pyyaml
urllib3
oauthlib
curtin-0.1.0~bzr365/setup.py
from distutils.core import setup
from glob import glob
import os

VERSION = '0.1.0'


def is_f(p):
    return os.path.isfile(p)

setup(
    name="curtin",
    description='The curtin installer',
    version=VERSION,
    author='Scott Moser',
    author_email='scott.moser@canonical.com',
    license="AGPL",
    url='http://launchpad.net/curtin/',
    packages=[
        'curtin',
        'curtin.block',
        'curtin.deps',
        'curtin.commands',
        'curtin.net',
        'curtin.reporter',
        'curtin.reporter.legacy',
    ],
    scripts=glob('bin/*'),
    data_files=[
        ('/usr/share/doc/curtin',
         [f for f in glob('doc/*') if is_f(f)]),
        ('/usr/lib/curtin/helpers',
         [f for f in glob('helpers/*') if is_f(f)])
    ]
)
curtin-0.1.0~bzr365/test-requirements.txt
mock
nose
pyflakes
curtin-0.1.0~bzr365/tox.ini
[tox]
minversion = 1.6
skipsdist = True
envlist = py27, py3, py3-flake8, py3-pylint, py27-pylint, trusty-check

[tox:jenkins]
downloadcache = ~/cache/pip

[testenv]
usedevelop = True
# LC_ALL see https://github.com/gabrielfalcao/HTTPretty/issues/223
setenv = VIRTUAL_ENV={envdir}
    LC_ALL = en_US.utf-8
deps = -r{toxinidir}/test-requirements.txt
    -r{toxinidir}/requirements.txt
commands = {envpython} {toxinidir}/tools/noproxy nosetests {posargs} tests/unittests

[testenv:py3]
basepython = python3
# tox uses '--pre' by default to pip install. We don't want that, and
# 'pip_pre=False' isn't available until tox version 1.9.
install_command = pip install {opts} {packages}

[testenv:py2-flake8]
basepython = python2
deps = {[testenv]deps}
    flake8
commands = {envpython} -m flake8 {posargs:curtin}

[testenv:py3-flake8]
basepython = python3
deps = {[testenv]deps}
    flake8
commands = {envpython} -m flake8 {posargs:curtin tests/vmtests}

[testenv:py3-pylint]
# set basepython because tox 1.6 (trusty) does not support generated environments
basepython = python3
deps = {[testenv]deps}
    pylint
    bzr+lp:simplestreams
commands = {envpython} -m pylint --errors-only {posargs:curtin tests/vmtests}

[testenv:py27-pylint]
# set basepython because tox 1.6 (trusty) does not support generated environments
basepython = python2.7
deps = {[testenv]deps}
    pylint
commands = {envpython} -m pylint --errors-only {posargs:curtin}

[testenv:docs]
deps = {[testenv]deps}
    sphinx
commands =
    sphinx-build -b html -d doc/_build/doctrees doc/ doc/_build/html

[testenv:trusty-check]
# this environment provides roughly a trusty build environment where
# 'make check' is run during package build. This protects against
# package build errors on trusty, where pep8 and pyflakes have subtly
# different behavior. Note, we only run pyflakes3, though.
basepython = python3
deps = pyflakes==0.8.1
    pep8==1.4.6
commands =
    {toxinidir}/tools/run-pyflakes3 {posargs}
    {toxinidir}/tools/run-pep8 {posargs}

[flake8]
builtins = _
exclude = .venv,.bzr,.tox,dist,doc,*lib/python*,*egg,build
curtin-0.1.0~bzr365/bin/curtin
#!/bin/sh

PY3OR2_MAIN="curtin.commands.main"
PY3OR2_MCHECK="curtin.deps.check"
PY3OR2_PYTHONS=${PY3OR2_PYTHONS:-"python3:python"}
PYTHON=${PY3OR2_PYTHON}
PY3OR2_DEBUG=${PY3OR2_DEBUG:-0}

debug() {
    [ "${PY3OR2_DEBUG}" != "0" ] || return 0
    echo "$@" 1>&2
}
fail() { echo "$@" 1>&2; exit 1; }

# if $0 is in bin/ and dirname($0)/../module exists, then prepend PYTHONPATH
mydir=${0%/*}
updir=${mydir%/*}
if [ "${mydir#${updir}/}" = "bin" -a -d "$updir/${PY3OR2_MCHECK%%.*}" ]; then
    updir=$(cd "$mydir/.." && pwd)
    case "$PYTHONPATH" in
        *:$updir:*|$updir:*|*:$updir) :;;
        *) export PYTHONPATH="$updir${PYTHONPATH:+:$PYTHONPATH}"
           debug "adding '$updir' to PYTHONPATH"
           ;;
    esac
fi

if [ ! -n "$PYTHON" ]; then
    first_exe=""
    oifs="$IFS"; IFS=":"
    best=0
    best_exe=""
    [ "${PY3OR2_DEBUG}" = "0" ] && _v="" || _v="-v"
    for p in $PY3OR2_PYTHONS; do
        command -v "$p" >/dev/null 2>&1 ||
            { debug "$p: not in path"; continue; }
        [ -z "$PY3OR2_MCHECK" ] && PYTHON=$p && break
        out=$($p -m "$PY3OR2_MCHECK" $_v -- "$@" 2>&1) && PYTHON="$p" &&
            { debug "$p is good [$p -m $PY3OR2_MCHECK $_v -- $*]"; break; }
        ret=$?
        debug "$p [$ret]: $out"
        # exit code of 1 is unusable
        [ $ret -eq 1 ] && continue
        [ -n "$first_exe" ] || first_exe="$p"
        # higher non-zero exit values indicate more plausible usability
        [ $best -lt $ret ] && best_exe="$p" && best=$ret &&
            debug "current best: $best_exe"
    done
    IFS="$oifs"
    [ -z "$best_exe" -a -n "$first_exe" ] && best_exe="$first_exe"
    [ -n "$PYTHON" ] || PYTHON="$best_exe"
    [ -n "$PYTHON" ] ||
        fail "no available python? [PY3OR2_DEBUG=1 for more info]"
fi

debug "executing: $PYTHON -m \"$PY3OR2_MAIN\" $*"
exec $PYTHON -m "$PY3OR2_MAIN" "$@"
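
The interpreter-selection loop above treats the dependency checker's exit
status as a ranking: success selects that interpreter outright, exit code 1
marks it unusable, and higher codes mark progressively more plausible
fallbacks. A minimal sketch of a checker following that convention --
illustrative only, not the actual curtin.deps.check module, and the
dependency list is assumed:

import importlib
import sys


def main():
    # argv (the wrapper passes '-- <args>') is ignored in this sketch.
    # Exit 0 if every import works; exit 1 if this interpreter is unusable;
    # exit a higher value the more of the needed modules are importable.
    needed = ['yaml', 'oauthlib', 'urllib3']   # assumed dependency list
    found = 0
    for mod in needed:
        try:
            importlib.import_module(mod)
            found += 1
        except ImportError:
            pass
    if found == len(needed):
        sys.exit(0)
    # scale to 2..N so a "mostly usable" python outranks a bare one
    sys.exit(1 if found == 0 else 1 + found)

if __name__ == '__main__':
    main()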
curtin-0.1.0~bzr365/curtin/__init__.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.

# This constant is made available so a caller can read it;
# it must be kept the same as that used in helpers/common:get_carryover_params
KERNEL_CMDLINE_COPY_TO_INSTALL_SEP = "---"

# The 'FEATURES' variable is provided so that users of curtin
# can determine which features are supported. Each entry should have
# a consistent meaning.
FEATURES = [
    # install supports the 'network' config version 1
    'NETWORK_CONFIG_V1',
    # reporter supports 'webhook' type
    'REPORTING_EVENTS_WEBHOOK',
    # install supports the 'storage' config version 1
    'STORAGE_CONFIG_V1',
    # subcommand 'system-install' is present
    'SUBCOMMAND_SYSTEM_INSTALL',
    # subcommand 'system-upgrade' is present
    'SUBCOMMAND_SYSTEM_UPGRADE',
]

# vi: ts=4 expandtab syntax=python
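
A short sketch of how a caller might consume the flags above (illustrative;
it assumes only that the curtin package is importable, and the kernel command
line shown is made up):

import curtin

# Gate optional behavior on an advertised feature rather than a version.
if 'REPORTING_EVENTS_WEBHOOK' in curtin.FEATURES:
    print("this curtin can post reporting events to a webhook")

# The carryover separator splits a kernel command line into the installer's
# arguments and the portion copied into the installed system.
cmdline = "console=ttyS0 --- curtin.debug=1"   # hypothetical command line
sep = " %s " % curtin.KERNEL_CMDLINE_COPY_TO_INSTALL_SEP
installer_args, _, carryover = cmdline.partition(sep)
print(carryover)   # -> "curtin.debug=1"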
curtin-0.1.0~bzr365/curtin/config.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.

import yaml
import json

ARCHIVE_HEADER = "#curtin-config-archive"
ARCHIVE_TYPE = "text/curtin-config-archive"
CONFIG_HEADER = "#curtin-config"
CONFIG_TYPE = "text/curtin-config"

try:
    # python2
    _STRING_TYPES = (str, basestring, unicode)
except NameError:
    # python3
    _STRING_TYPES = (str,)


def merge_config_fp(cfgin, fp):
    merge_config_str(cfgin, fp.read())


def merge_config_str(cfgin, cfgstr):
    cfg2 = yaml.safe_load(cfgstr)
    if not isinstance(cfg2, dict):
        raise TypeError("Failed reading config. not a dictionary: %s" % cfgstr)
    merge_config(cfgin, cfg2)


def merge_config(cfg, cfg2):
    # update cfg by merging cfg2 over the top
    for k, v in cfg2.items():
        if isinstance(v, dict) and isinstance(cfg.get(k, None), dict):
            merge_config(cfg[k], v)
        else:
            cfg[k] = v


def merge_cmdarg(cfg, cmdarg, delim="/"):
    merge_config(cfg, cmdarg2cfg(cmdarg, delim))


def cmdarg2cfg(cmdarg, delim="/"):
    if '=' not in cmdarg:
        raise ValueError('no "=" in "%s"' % cmdarg)

    key, val = cmdarg.split("=", 1)
    cfg = {}
    cur = cfg

    is_json = False
    if key.startswith("json:"):
        is_json = True
        key = key[5:]

    items = key.split(delim)
    for item in items[:-1]:
        cur[item] = {}
        cur = cur[item]

    if is_json:
        try:
            val = json.loads(val)
        except (ValueError, TypeError):
            raise ValueError("setting of key '%s' had invalid json: %s" %
                             (key, val))

    # this would occur if 'json:={"topkey": "topval"}'
    if items[-1] == "":
        cfg = val
    else:
        cur[items[-1]] = val

    return cfg


def load_config_archive(content):
    archive = yaml.load(content)
    config = {}
    for part in archive:
        if isinstance(part, (str,)):
            if part.startswith(ARCHIVE_HEADER):
                merge_config(config, load_config_archive(part))
            elif part.startswith(CONFIG_HEADER):
                merge_config_str(config, part)
        elif isinstance(part, dict) and isinstance(part.get('content'), str):
            payload = part.get('content')
            if (part.get('type') == ARCHIVE_TYPE or
                    payload.startswith(ARCHIVE_HEADER)):
                merge_config(config, load_config_archive(payload))
            elif (part.get('type') == CONFIG_TYPE or
                    payload.startswith(CONFIG_HEADER)):
                merge_config_str(config, payload)
    return config


def load_config(cfg_file):
    with open(cfg_file, "r") as fp:
        content = fp.read()
    if not content.startswith(ARCHIVE_HEADER):
        return yaml.safe_load(content)
    else:
        return load_config_archive(content)


def load_command_config(args, state):
    if hasattr(args, 'config') and args.config:
        return args.config
    else:
        # state 'config' points to a file with fully rendered config
        cfg_file = state.get('config')
        if not cfg_file:
            cfg = {}
        else:
            cfg = load_config(cfg_file)
        return cfg


def dump_config(config):
    return yaml.dump(config, default_flow_style=False, indent=2)


def value_as_boolean(value):
    if value in (False, None, '0', 0, 'False', 'false', ''):
        return False
    return True
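
A worked example of the command-line-argument-to-config conversion above
(illustrative; it uses only the functions defined in this file):

from curtin import config

cfg = {'storage': {'version': 1}}

# A '/'-delimited key builds a nested dict; the value stays a string.
config.merge_cmdarg(cfg, 'reporting/mode=webhook')
# -> {'storage': {'version': 1}, 'reporting': {'mode': 'webhook'}}

# A 'json:' prefix decodes the value, so non-string types survive.
config.merge_cmdarg(cfg, 'json:storage/version=2')
# -> cfg['storage']['version'] is now the integer 2

# An empty key after 'json:' makes the decoded value the whole config.
top = config.cmdarg2cfg('json:={"topkey": "topval"}')
# -> {'topkey': 'topval'}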
curtin-0.1.0~bzr365/curtin/futil.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.

import grp
import pwd
import os

from .util import write_file


def chownbyid(fname, uid=None, gid=None):
    if uid in [None, -1] and gid in [None, -1]:
        return
    os.chown(fname, uid, gid)


def decode_perms(perm, default=0o644):
    try:
        if perm is None:
            return default
        if isinstance(perm, (int, float)):
            # Just 'downcast' it (if a float)
            return int(perm)
        else:
            # Force to string and try octal conversion
            return int(str(perm), 8)
    except (TypeError, ValueError):
        return default


def chownbyname(fname, user=None, group=None):
    uid = -1
    gid = -1
    try:
        if user:
            uid = pwd.getpwnam(user).pw_uid
        if group:
            gid = grp.getgrnam(group).gr_gid
    except KeyError as e:
        raise OSError("Unknown user or group: %s" % (e))

    chownbyid(fname, uid, gid)


def extract_usergroup(ug_pair):
    if not ug_pair:
        return (None, None)
    ug_parted = ug_pair.split(':', 1)
    u = ug_parted[0].strip()
    if len(ug_parted) == 2:
        g = ug_parted[1].strip()
    else:
        g = None
    if not u or u == "-1" or u.lower() == "none":
        u = None
    if not g or g == "-1" or g.lower() == "none":
        g = None
    return (u, g)


def write_finfo(path, content, owner="-1:-1", perms="0644"):
    (u, g) = extract_usergroup(owner)
    omode = "w"
    if isinstance(content, bytes):
        omode = "wb"
    write_file(path, content, mode=decode_perms(perms), omode=omode)
    chownbyname(path, u, g)
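
The permission and ownership parsing above is easiest to see with concrete
values (illustrative; uses only this module):

from curtin.futil import decode_perms, extract_usergroup

# Strings are parsed as octal; ints pass through; garbage falls back.
assert decode_perms("0644") == 0o644       # octal string -> 420 decimal
assert decode_perms(0o755) == 0o755        # already an int, kept as-is
assert decode_perms("rw-r--r--") == 0o644  # unparseable -> default

# "user:group" pairs; "-1" and "none" mean "leave unchanged" (None).
assert extract_usergroup("root:adm") == ("root", "adm")
assert extract_usergroup("www-data") == ("www-data", None)
assert extract_usergroup("-1:none") == (None, None)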
curtin-0.1.0~bzr365/curtin/log.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.

import logging

# Logging items for easy access
getLogger = logging.getLogger

CRITICAL = logging.CRITICAL
FATAL = logging.FATAL
ERROR = logging.ERROR
WARNING = logging.WARNING
WARN = logging.WARN
INFO = logging.INFO
DEBUG = logging.DEBUG
NOTSET = logging.NOTSET


class NullHandler(logging.Handler):
    def emit(self, record):
        pass


def basicConfig(**kwargs):
    # basically like logging.basicConfig but only output for our logger
    if kwargs.get('filename'):
        handler = logging.FileHandler(filename=kwargs['filename'],
                                      mode=kwargs.get('filemode', 'a'))
    elif kwargs.get('stream'):
        handler = logging.StreamHandler(stream=kwargs['stream'])
    else:
        handler = NullHandler()

    if 'verbosity' in kwargs:
        level = ((logging.ERROR, logging.INFO, logging.DEBUG)
                 [min(kwargs['verbosity'], 2)])
    else:
        level = kwargs.get('level', logging.NOTSET)

    handler.setFormatter(logging.Formatter(fmt=kwargs.get('format'),
                                           datefmt=kwargs.get('datefmt')))
    handler.setLevel(level)

    logging.getLogger().setLevel(level)

    logger = _getLogger()
    for h in list(logger.handlers):
        logger.removeHandler(h)
    logger.setLevel(level)
    logger.addHandler(handler)


def _getLogger(name='curtin'):
    return logging.getLogger(name)

if not logging.getLogger().handlers:
    logging.getLogger().addHandler(NullHandler())

LOG = _getLogger()

# vi: ts=4 expandtab syntax=python
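
How the verbosity shortcut above maps onto stdlib levels, as a caller would
use it (illustrative; uses only this module and the standard library):

import sys

from curtin import log

# verbosity 0 -> ERROR, 1 -> INFO, 2 (or more) -> DEBUG
log.basicConfig(stream=sys.stderr, verbosity=2,
                format='%(asctime)s %(levelname)s: %(message)s')
log.LOG.debug("visible because verbosity=2 selected DEBUG")

# Without 'verbosity', an explicit stdlib level works too.
log.basicConfig(stream=sys.stderr, level=log.WARNING)
log.LOG.info("suppressed: INFO is below WARNING")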
curtin-0.1.0~bzr365/curtin/pack.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.

import errno
import os
import shutil
import tempfile

from . import util

CALL_ENTRY_POINT_SH_HEADER = """
#!/bin/sh
PY3OR2_MAIN="%(ep_main)s"
PY3OR2_MCHECK="%(ep_mcheck)s"
PY3OR2_PYTHONS=${PY3OR2_PYTHONS:-"%(python_exe_list)s"}
PYTHON=${PY3OR2_PYTHON}
PY3OR2_DEBUG=${PY3OR2_DEBUG:-0}
""".strip()

CALL_ENTRY_POINT_SH_BODY = """
debug() {
    [ "${PY3OR2_DEBUG}" != "0" ] || return 0
    echo "$@" 1>&2
}
fail() { echo "$@" 1>&2; exit 1; }

# if $0 is in bin/ and dirname($0)/../module exists, then prepend PYTHONPATH
mydir=${0%/*}
updir=${mydir%/*}
if [ "${mydir#${updir}/}" = "bin" -a -d "$updir/${PY3OR2_MCHECK%%.*}" ]; then
    updir=$(cd "$mydir/.." && pwd)
    case "$PYTHONPATH" in
        *:$updir:*|$updir:*|*:$updir) :;;
        *) export PYTHONPATH="$updir${PYTHONPATH:+:$PYTHONPATH}"
           debug "adding '$updir' to PYTHONPATH"
           ;;
    esac
fi

if [ ! -n "$PYTHON" ]; then
    first_exe=""
    oifs="$IFS"; IFS=":"
    best=0
    best_exe=""
    [ "${PY3OR2_DEBUG}" = "0" ] && _v="" || _v="-v"
    for p in $PY3OR2_PYTHONS; do
        command -v "$p" >/dev/null 2>&1 ||
            { debug "$p: not in path"; continue; }
        [ -z "$PY3OR2_MCHECK" ] && PYTHON=$p && break
        out=$($p -m "$PY3OR2_MCHECK" $_v -- "$@" 2>&1) && PYTHON="$p" &&
            { debug "$p is good [$p -m $PY3OR2_MCHECK $_v -- $*]"; break; }
        ret=$?
        debug "$p [$ret]: $out"
        # exit code of 1 is unusable
        [ $ret -eq 1 ] && continue
        [ -n "$first_exe" ] || first_exe="$p"
        # higher non-zero exit values indicate more plausible usability
        [ $best -lt $ret ] && best_exe="$p" && best=$ret &&
            debug "current best: $best_exe"
    done
    IFS="$oifs"
    [ -z "$best_exe" -a -n "$first_exe" ] && best_exe="$first_exe"
    [ -n "$PYTHON" ] || PYTHON="$best_exe"
    [ -n "$PYTHON" ] ||
        fail "no available python? [PY3OR2_DEBUG=1 for more info]"
fi

debug "executing: $PYTHON -m \\"$PY3OR2_MAIN\\" $*"
exec $PYTHON -m "$PY3OR2_MAIN" "$@"
"""


def write_exe_wrapper(entrypoint, path=None, interpreter=None,
                      deps_check_entry=None, mode=0o755):
    if not interpreter:
        interpreter = "python3:python"

    subs = {
        'ep_main': entrypoint,
        'ep_mcheck': deps_check_entry if deps_check_entry else "",
        'python_exe_list': interpreter,
    }
    content = '\n'.join(
        (CALL_ENTRY_POINT_SH_HEADER % subs, CALL_ENTRY_POINT_SH_BODY))

    if path is not None:
        with open(path, "w") as fp:
            fp.write(content)
        if mode is not None:
            os.chmod(path, mode)
    else:
        return content


def pack(fdout=None, command=None, paths=None, copy_files=None,
         add_files=None):
    # write to 'fdout' a self extracting file to execute 'command'
    # if fdout is None, return content that would be written to fdout.
    # add_files is a list of (archive_path, file_content) tuples.
    # copy_files is a list of (archive_path, file_path) tuples.
    if paths is None:
        paths = util.get_paths()

    if add_files is None:
        add_files = []

    if copy_files is None:
        copy_files = []

    tmpd = None
    try:
        tmpd = tempfile.mkdtemp()
        exdir = os.path.join(tmpd, 'curtin')
        os.mkdir(exdir)
        bindir = os.path.join(exdir, 'bin')
        os.mkdir(bindir)

        def not_dot_py(input_d, flist):
            # include .py files and directories other than __pycache__
            return [f for f in flist if not
                    (f.endswith(".py") or
                     (f != "__pycache__" and
                      os.path.isdir(os.path.join(input_d, f))))]

        shutil.copytree(paths['helpers'], os.path.join(exdir, "helpers"))
        shutil.copytree(paths['lib'], os.path.join(exdir, "curtin"),
                        ignore=not_dot_py)
        write_exe_wrapper(entrypoint='curtin.commands.main',
                          path=os.path.join(bindir, 'curtin'),
                          deps_check_entry="curtin.deps.check")

        for archpath, filepath in copy_files:
            target = os.path.abspath(os.path.join(exdir, archpath))
            if not target.startswith(exdir + os.path.sep):
                raise ValueError("'%s' resulted in path outside archive" %
                                 archpath)
            try:
                os.mkdir(os.path.dirname(target))
            except OSError as e:
                # an already-existing parent directory is fine
                if e.errno != errno.EEXIST:
                    raise
            if os.path.isfile(filepath):
                shutil.copy(filepath, target)
            else:
                shutil.copytree(filepath, target)

        for archpath, content in add_files:
            target = os.path.abspath(os.path.join(exdir, archpath))
            if not target.startswith(exdir + os.path.sep):
                raise ValueError("'%s' resulted in path outside archive" %
                                 archpath)
            try:
                os.mkdir(os.path.dirname(target))
            except OSError as e:
                # an already-existing parent directory is fine
                if e.errno != errno.EEXIST:
                    raise
            with open(target, "w") as fp:
                fp.write(content)

        archcmd = os.path.join(paths['helpers'], 'shell-archive')
        archout = None

        args = [archcmd]
        if fdout is not None:
            archout = os.path.join(tmpd, 'output')
            args.append("--output=%s" % archout)

        args.extend(["--bin-path=_pwd_/bin", "--python-path=_pwd_", exdir,
                     "curtin", "--"])
        if command is not None:
            args.extend(command)

        (out, _err) = util.subp(args, capture=True)

        if fdout is None:
            if isinstance(out, bytes):
                out = out.decode()
            return out
        else:
            with open(archout, "r") as fp:
                while True:
                    buf = fp.read(4096)
                    fdout.write(buf)
                    if len(buf) != 4096:
                        break
    finally:
        if tmpd:
            shutil.rmtree(tmpd)


def pack_install(fdout=None, configs=None, paths=None,
                 add_files=None, copy_files=None, args=None,
                 install_deps=True):

    if configs is None:
        configs = []

    if add_files is None:
        add_files = []

    if args is None:
        args = []

    if install_deps:
        dep_flags = ["--install-deps"]
    else:
        dep_flags = []

    command = ["curtin"] + dep_flags + ["install"]

    my_files = []
    for n, config in enumerate(configs):
        apath = "configs/config-%03d.cfg" % n
        my_files.append((apath, config),)
        command.append("--config=%s" % apath)

    command += args

    return pack(fdout=fdout, command=command, paths=paths,
                add_files=add_files + my_files, copy_files=copy_files)

# vi: ts=4 expandtab syntax=python
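
A sketch of how pack_install is typically driven (illustrative: the config
payloads, output filename, and install source are made up, and
util.get_paths() must be able to locate the curtin tree and helpers):

from curtin import pack

# Two rendered YAML configs; pack_install writes them into the archive as
# configs/config-000.cfg and configs/config-001.cfg and appends a matching
# --config=... argument for each.
network_cfg = "network:\n  version: 1\n"   # hypothetical config
storage_cfg = "storage:\n  version: 1\n"   # hypothetical config

with open("curtin-install", "w") as fdout:
    pack.pack_install(fdout=fdout,
                      configs=[network_cfg, storage_cfg],
                      args=["cp:///"])
# The result is a self-extracting script that runs:
#   curtin --install-deps install --config=configs/config-000.cfg \
#       --config=configs/config-001.cfg cp:///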
curtin-0.1.0~bzr365/curtin/reporter/ 0000755 0000000 0000000 00000000000 12673006714 015470 5 ustar 0000000 0000000 curtin-0.1.0~bzr365/curtin/swap.py 0000644 0000000 0000000 00000006456 12673006714 015165 0 ustar 0000000 0000000 # Copyright (C) 2014 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import os
from .log import LOG
from . import util
def suggested_swapsize(memsize=None, maxsize=None, fsys=None):
# make a suggestion on the size of swap for this system.
if memsize is None:
memsize = util.get_meminfo()['total']
GB = 2 ** 30
sugg_max = 8 * GB
if fsys is None and maxsize is None:
# set max to 8GB default if no filesystem given
maxsize = sugg_max
elif fsys:
avail = util.get_fs_use_info(fsys)[1]
if maxsize is None:
# set to 25% of filesystem space
maxsize = min(int(avail / 4), sugg_max)
elif maxsize > ((avail * .9)):
# set to 90% of available disk space
maxsize = int(avail * .9)
formulas = [
# < 1G: swap = double memory
(1 * GB, lambda x: x * 2),
# < 2G: swap = 2G
(2 * GB, lambda x: 2 * GB),
# < 4G: swap = memory
(4 * GB, lambda x: x),
# < 16G: 4G
(16 * GB, lambda x: 4 * GB),
# < 64G: swap = 1/2 of memory
(64 * GB, lambda x: x / 2),
]
size = None
for top, func in formulas:
if memsize <= top:
size = min(func(memsize), maxsize)
if size < (memsize / 2) and size < 4 * GB:
return 0
return size
return maxsize
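# A worked example of the table above (illustrative values): for a host with
# 4GB of memory and no 'fsys' given, maxsize defaults to the 8GB cap, the
# '< 4G' rule applies, and suggested_swapsize() returns min(4GB, 8GB) == 4GB.
# The zero-swap short circuit does not fire since 4GB is not below memsize/2.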
def setup_swapfile(target, fstab=None, swapfile=None, size=None, maxsize=None):
if size is None:
size = suggested_swapsize(fsys=target, maxsize=maxsize)
if size == 0:
LOG.debug("Not creating swap: suggested size was 0")
return
if swapfile is None:
swapfile = "/swap.img"
if not swapfile.startswith("/"):
swapfile = "/" + swapfile
mbsize = str(int(size / (2 ** 20)))
msg = "creating swap file '%s' of %sMB" % (swapfile, mbsize)
fpath = os.path.sep.join([target, swapfile])
try:
util.ensure_dir(os.path.dirname(fpath))
with util.LogTimer(LOG.debug, msg):
util.subp(
['sh', '-c',
('rm -f "$1" && umask 0066 && '
'{ fallocate -l "${2}M" "$1" || '
' dd if=/dev/zero "of=$1" bs=1M "count=$2"; } && '
'mkswap "$1" || { r=$?; rm -f "$1"; exit $r; }'),
'setup_swap', fpath, mbsize])
except Exception:
LOG.warn("failed %s" % msg)
raise
if fstab is None:
return
try:
line = '\t'.join([swapfile, 'none', 'swap', 'sw', '0', '0'])
with open(fstab, "a") as fp:
fp.write(line + "\n")
except Exception:
os.unlink(fpath)
raise
curtin-0.1.0~bzr365/curtin/udev.py
# Copyright (C) 2015 Canonical Ltd.
#
# Author: Ryan Harper
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
def compose_udev_equality(key, value):
"""Return a udev comparison clause, like `ACTION=="add"`."""
assert key == key.upper()
return '%s=="%s"' % (key, value)
def compose_udev_attr_equality(attribute, value):
"""Return a udev attribute comparison clause, like `ATTR{type}=="1"`."""
assert attribute == attribute.lower()
return 'ATTR{%s}=="%s"' % (attribute, value)
def compose_udev_setting(key, value):
"""Return a udev assignment clause, like `NAME="eth0"`."""
assert key == key.upper()
return '%s="%s"' % (key, value)
def generate_udev_rule(interface, mac):
"""Return a udev rule to set the name of network interface with `mac`.
The rule ends up as a single line looking something like:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
ATTR{address}=="ff:ee:dd:cc:bb:aa", NAME="eth0"
"""
rule = ', '.join([
compose_udev_equality('SUBSYSTEM', 'net'),
compose_udev_equality('ACTION', 'add'),
compose_udev_equality('DRIVERS', '?*'),
compose_udev_attr_equality('address', mac),
compose_udev_setting('NAME', interface),
])
return '%s\n' % rule
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/url_helper.py
from email.utils import parsedate
import json
import os
import socket
import sys
import time
import uuid
from functools import partial
try:
from urllib import request as _u_re # pylint: disable=no-name-in-module
from urllib import error as _u_e # pylint: disable=no-name-in-module
from urllib.parse import urlparse # pylint: disable=no-name-in-module
urllib_request = _u_re
urllib_error = _u_e
except ImportError:
# python2
import urllib2 as urllib_request
import urllib2 as urllib_error
from urlparse import urlparse # pylint: disable=import-error
from .log import LOG
error = urllib_error
class _ReRaisedException(Exception):
"""this exists only as an exception type that was re-raised by
an exception_cb, so code can know to handle it specially"""
exc = None
def __init__(self, exc):
self.exc = exc
def _geturl(url, headers=None, headers_cb=None, exception_cb=None, data=None):
def_headers = {'User-Agent': 'Curtin/0.1'}
if headers is not None:
def_headers.update(headers)
headers = def_headers
if headers_cb:
headers.update(headers_cb(url))
if data and isinstance(data, dict):
data = json.dumps(data).encode()
try:
req = urllib_request.Request(url=url, data=data, headers=headers)
r = urllib_request.urlopen(req).read()
# python2: urlopen().read() returns str, which is decoded here;
# python3 returns bytes unchanged
if isinstance(r, str):
return r.decode()
return r
except urllib_error.HTTPError as exc:
myexc = UrlError(exc, code=exc.code, headers=exc.headers, url=url,
reason=exc.reason)
except Exception as exc:
myexc = UrlError(exc, code=None, headers=None, url=url,
reason="unknown")
if exception_cb:
try:
exception_cb(myexc)
except Exception as e:
myexc = _ReRaisedException(e)
raise myexc
def geturl(url, headers=None, headers_cb=None, exception_cb=None,
data=None, retries=None, log=LOG.warn):
"""return the content of the url in binary_type. (py3: bytes, py2: str)"""
if retries is None:
retries = []
curexc = None
for trynum, naptime in enumerate(retries):
try:
return _geturl(url=url, headers=headers, headers_cb=headers_cb,
exception_cb=exception_cb, data=data)
except _ReRaisedException as e:
raise e.exc
except Exception as e:
curexc = e
if log:
msg = ("try %d of request to %s failed. sleeping %d: %s" %
(trynum + 1, url, naptime, curexc))
log(msg)
time.sleep(naptime)
try:
return _geturl(url=url, headers=headers, headers_cb=headers_cb,
exception_cb=exception_cb, data=data)
except _ReRaisedException as e:
raise e.exc
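# Retry semantics by example: geturl(url, retries=[1, 2, 4]) makes up to four
# attempts in total; each value is the number of seconds slept after the
# corresponding failure, and the final attempt (outside the loop) raises any
# remaining failure.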
class UrlError(IOError):
def __init__(self, cause, code=None, headers=None, url=None, reason=None):
IOError.__init__(self, str(cause))
self.cause = cause
self.code = code
self.headers = headers
if self.headers is None:
self.headers = {}
self.url = url
self.reason = reason
def __str__(self):
if isinstance(self.cause, urllib_error.HTTPError):
msg = "http error: %s" % self.cause.code
elif isinstance(self.cause, urllib_error.URLError):
msg = "url error: %s" % self.cause.reason
elif isinstance(self.cause, socket.timeout):
msg = "socket timeout: %s" % self.cause
else:
msg = "Unknown Exception: %s" % self.cause
return "[%s] " % self.url + msg
class OauthUrlHelper(object):
def __init__(self, consumer_key=None, token_key=None,
token_secret=None, consumer_secret=None,
skew_data_file="/run/oauth_skew.json"):
self.consumer_key = consumer_key
self.consumer_secret = consumer_secret or ""
self.token_key = token_key
self.token_secret = token_secret
self.skew_data_file = skew_data_file
self._do_oauth = True
self.skew_change_limit = 5
required = (self.token_key, self.token_secret, self.consumer_key)
if not any(required):
self._do_oauth = False
elif not all(required):
raise ValueError("all or none of token_key, token_secret, or "
"consumer_key can be set")
old = self.read_skew_file()
self.skew_data = old or {}
def __str__(self):
fields = ['consumer_key', 'consumer_secret',
'token_key', 'token_secret']
masked = fields
def r(name):
if not hasattr(self, name):
rval = "_unset"
else:
val = getattr(self, name)
if val is None:
rval = "None"
elif name in masked:
rval = '"%s"' % ("*" * len(val))
else:
rval = '"%s"' % val
return '%s=%s' % (name, rval)
return ("OauthUrlHelper(" + ','.join([r(f) for f in fields]) + ")")
def read_skew_file(self):
if self.skew_data_file and os.path.isfile(self.skew_data_file):
with open(self.skew_data_file, mode="r") as fp:
return json.load(fp)
return None
def update_skew_file(self, host, value):
# this is not atomic
if not self.skew_data_file:
return
cur = self.read_skew_file()
if cur is None:
cur = {}
cur[host] = value
with open(self.skew_data_file, mode="w") as fp:
fp.write(json.dumps(cur))
def exception_cb(self, exception):
if not (isinstance(exception, UrlError) and
(exception.code == 403 or exception.code == 401)):
return
if 'date' not in exception.headers:
LOG.warn("Missing header 'date' in %s response", exception.code)
return
date = exception.headers['date']
try:
remote_time = time.mktime(parsedate(date))
except Exception as e:
LOG.warn("Failed to convert datetime '%s': %s", date, e)
return
skew = int(remote_time - time.time())
host = urlparse(exception.url).netloc
old_skew = self.skew_data.get(host, 0)
if abs(old_skew - skew) > self.skew_change_limit:
self.update_skew_file(host, skew)
LOG.warn("Setting oauth clockskew for %s to %d", host, skew)
self.skew_data[host] = skew
return
def headers_cb(self, url):
if not self._do_oauth:
return {}
host = urlparse(url).netloc
clockskew = None
if self.skew_data and host in self.skew_data:
clockskew = self.skew_data[host]
return oauth_headers(
url=url, consumer_key=self.consumer_key,
token_key=self.token_key, token_secret=self.token_secret,
consumer_secret=self.consumer_secret, clockskew=clockskew)
def _wrapped(self, wrapped_func, args, kwargs):
kwargs['headers_cb'] = partial(
self._headers_cb, kwargs.get('headers_cb'))
kwargs['exception_cb'] = partial(
self._exception_cb, kwargs.get('exception_cb'))
return wrapped_func(*args, **kwargs)
def geturl(self, *args, **kwargs):
return self._wrapped(geturl, args, kwargs)
def _exception_cb(self, extra_exception_cb, exception):
ret = None
try:
if extra_exception_cb:
ret = extra_exception_cb(exception)
finally:
self.exception_cb(exception)
return ret
def _headers_cb(self, extra_headers_cb, url):
headers = {}
if extra_headers_cb:
headers = extra_headers_cb(url)
headers.update(self.headers_cb(url))
return headers
def _oauth_headers_none(url, consumer_key, token_key, token_secret,
consumer_secret, clockskew=0):
"""oauth_headers implementation when no oauth is available"""
if not any([token_key, token_secret, consumer_key]):
return {}
pkg = "'python3-oauthlib'"
if sys.version_info[0] == 2:
pkg = "'python-oauthlib' or 'python-oauth'"
raise ValueError(
"Oauth was necessary but no oauth library is available. "
"Please install package " + pkg + ".")
def _oauth_headers_oauth(url, consumer_key, token_key, token_secret,
consumer_secret, clockskew=0):
"""Build OAuth headers with oauth using given credentials."""
consumer = oauth.OAuthConsumer(consumer_key, consumer_secret)
token = oauth.OAuthToken(token_key, token_secret)
if clockskew is None:
clockskew = 0
timestamp = int(time.time()) + clockskew
params = {
'oauth_version': "1.0",
'oauth_nonce': uuid.uuid4().hex,
'oauth_timestamp': timestamp,
'oauth_token': token.key,
'oauth_consumer_key': consumer.key,
}
req = oauth.OAuthRequest(http_url=url, parameters=params)
req.sign_request(
oauth.OAuthSignatureMethod_PLAINTEXT(), consumer, token)
return req.to_header()
def _oauth_headers_oauthlib(url, consumer_key, token_key, token_secret,
consumer_secret, clockskew=0):
"""Build OAuth headers with oauthlib using given credentials."""
if clockskew is None:
clockskew = 0
timestamp = int(time.time()) + clockskew
client = oauth1.Client(
consumer_key,
client_secret=consumer_secret,
resource_owner_key=token_key,
resource_owner_secret=token_secret,
signature_method=oauth1.SIGNATURE_PLAINTEXT,
timestamp=str(timestamp))
uri, signed_headers, body = client.sign(url)
return signed_headers
oauth_headers = _oauth_headers_none
try:
# prefer to use oauthlib. (python-oauthlib)
import oauthlib.oauth1 as oauth1
oauth_headers = _oauth_headers_oauthlib
except ImportError:
# no oauthlib was present, try using oauth (python-oauth)
try:
import oauth.oauth as oauth
oauth_headers = _oauth_headers_oauth
except ImportError:
# we have no oauth libraries available, use oauth_headers_none
pass
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/util.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import argparse
import errno
import glob
import json
import os
import shutil
import subprocess
import stat
import sys
import tempfile
import time
from .log import LOG
_INSTALLED_HELPERS_PATH = '/usr/lib/curtin/helpers'
_INSTALLED_MAIN = '/usr/bin/curtin'
_LSB_RELEASE = {}
def _subp(args, data=None, rcs=None, env=None, capture=False, shell=False,
logstring=False, decode="replace"):
if rcs is None:
rcs = [0]
devnull_fp = None
try:
if not logstring:
LOG.debug(("Running command %s with allowed return codes %s"
" (shell=%s, capture=%s)"), args, rcs, shell, capture)
else:
LOG.debug(("Running hidden command to protect sensitive "
"input/output logstring: %s"), logstring)
stdin = None
stdout = None
stderr = None
if capture:
stdout = subprocess.PIPE
stderr = subprocess.PIPE
if data is None:
devnull_fp = open(os.devnull)
stdin = devnull_fp
else:
stdin = subprocess.PIPE
sp = subprocess.Popen(args, stdout=stdout,
stderr=stderr, stdin=stdin,
env=env, shell=shell)
(out, err) = sp.communicate(data)
# Just ensure blank instead of none.
if not out and capture:
out = b''
if not err and capture:
err = b''
if decode:
def ldecode(data, m='utf-8'):
if not isinstance(data, bytes):
return data
return data.decode(m, errors=decode)
out = ldecode(out)
err = ldecode(err)
except OSError as e:
raise ProcessExecutionError(cmd=args, reason=e)
finally:
if devnull_fp:
devnull_fp.close()
rc = sp.returncode # pylint: disable=E1101
if rc not in rcs:
raise ProcessExecutionError(stdout=out, stderr=err,
exit_code=rc,
cmd=args)
return (out, err)
def subp(*args, **kwargs):
"""Run a subprocess.
:param args: command to run in a list. [cmd, arg1, arg2...]
:param data: input to the command, made available on its stdin.
:param rcs:
a list of allowed return codes. If the subprocess exits with a value
not in this list, a ProcessExecutionError will be raised. By default,
captured output is returned as a string; see the 'decode' parameter.
:param env: a dictionary for the command's environment.
:param capture:
boolean indicating if output should be captured. If True, then stderr
and stdout will be returned. If False, they will not be redirected.
:param shell: boolean indicating if this should be run with a shell.
:param logstring:
the command will be logged to DEBUG. If it contains info that should
not be logged, then logstring will be logged instead.
:param decode:
if False, no decoding will be done and returned stdout and stderr will
be bytes. Other allowed values are 'strict', 'ignore', and 'replace'.
These values are passed through to bytes().decode() as the 'errors'
parameter. There is no support for decoding to other than utf-8.
:param retries:
a list of times to sleep in between retries. After each failure
subp will sleep for N seconds and then try again. A value of [1, 3]
means: run, sleep 1, run, sleep 3, then run a final time, returning
its output or raising its failure.
"""
retries = []
if "retries" in kwargs:
retries = kwargs.pop("retries")
cmd = None
if args:
cmd = args[0]
if 'args' in kwargs:
cmd = kwargs['args']
# Retry with waits between the retried command.
for num, wait in enumerate(retries):
try:
return _subp(*args, **kwargs)
except ProcessExecutionError as e:
LOG.debug("try %s: command %s failed, rc: %s", num,
cmd, e.exit_code)
time.sleep(wait)
# Final try without needing to wait or catch the error. If this
# errors here then it will be raised to the caller.
return _subp(*args, **kwargs)
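# Typical usage (illustrative): capture a command's output, decoded to str by
# default, retrying once after a 1 second sleep if it fails:
#   out, err = subp(['lsb_release', '--all'], capture=True, retries=[1])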
def load_command_environment(env=os.environ, strict=False):
mapping = {'scratch': 'WORKING_DIR', 'fstab': 'OUTPUT_FSTAB',
'interfaces': 'OUTPUT_INTERFACES', 'config': 'CONFIG',
'target': 'TARGET_MOUNT_POINT',
'network_state': 'OUTPUT_NETWORK_STATE',
'network_config': 'OUTPUT_NETWORK_CONFIG'}
if strict:
missing = [k for k in mapping if k not in env]
if len(missing):
raise KeyError("missing environment vars: %s" % missing)
return {k: env.get(v) for k, v in mapping.items()}
class BadUsage(Exception):
pass
class ProcessExecutionError(IOError):
MESSAGE_TMPL = ('%(description)s\n'
'Command: %(cmd)s\n'
'Exit code: %(exit_code)s\n'
'Reason: %(reason)s\n'
'Stdout: %(stdout)r\n'
'Stderr: %(stderr)r')
def __init__(self, stdout=None, stderr=None,
exit_code=None, cmd=None,
description=None, reason=None):
if not cmd:
self.cmd = '-'
else:
self.cmd = cmd
if not description:
self.description = 'Unexpected error while running command.'
else:
self.description = description
if not isinstance(exit_code, int):
self.exit_code = '-'
else:
self.exit_code = exit_code
if not stderr:
self.stderr = ''
else:
self.stderr = stderr
if not stdout:
self.stdout = ''
else:
self.stdout = stdout
if reason:
self.reason = reason
else:
self.reason = '-'
message = self.MESSAGE_TMPL % {
'description': self.description,
'cmd': self.cmd,
'exit_code': self.exit_code,
'stdout': self.stdout,
'stderr': self.stderr,
'reason': self.reason,
}
IOError.__init__(self, message)
class LogTimer(object):
def __init__(self, logfunc, msg):
self.logfunc = logfunc
self.msg = msg
def __enter__(self):
self.start = time.time()
return self
def __exit__(self, etype, value, trace):
self.logfunc("%s took %0.3f seconds" %
(self.msg, time.time() - self.start))
def is_mounted(target, src=None, opts=None):
# return whether or not anything is mounted on target
# ('src' and 'opts' are accepted for call compatibility, but only the
# mount point is checked)
mounts = ""
with open("/proc/mounts", "r") as fp:
mounts = fp.read()
for line in mounts.splitlines():
if line.split()[1] == os.path.abspath(target):
return True
return False
def do_mount(src, target, opts=None):
# mount src at target with opts and return True
# if already mounted, return False
if opts is None:
opts = []
if isinstance(opts, str):
opts = [opts]
if is_mounted(target, src, opts):
return False
ensure_dir(target)
cmd = ['mount'] + opts + [src, target]
subp(cmd)
return True
def do_umount(mountpoint):
if not is_mounted(mountpoint):
return False
subp(['umount', mountpoint])
return True
def ensure_dir(path, mode=None):
try:
os.makedirs(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if mode is not None:
os.chmod(path, mode)
def write_file(filename, content, mode=0o644, omode="w"):
ensure_dir(os.path.dirname(filename))
with open(filename, omode) as fp:
fp.write(content)
os.chmod(filename, mode)
def load_file(path, mode="r"):
with open(path, mode) as fp:
return fp.read()
def disable_daemons_in_root(target):
contents = "\n".join(
['#!/bin/sh',
'# see invoke-rc.d for exit codes. 101 is "do not run"',
'while true; do',
' case "$1" in',
' -*) shift;;',
' makedev|x11-common) exit 0;;',
' *) exit 101;;',
' esac',
'done',
''])
fpath = os.path.join(target, "usr/sbin/policy-rc.d")
if os.path.isfile(fpath):
return False
write_file(fpath, mode=0o755, content=contents)
return True
def undisable_daemons_in_root(target):
try:
os.unlink(os.path.join(target, "usr/sbin/policy-rc.d"))
except OSError as e:
if e.errno != errno.ENOENT:
raise
return False
return True
class ChrootableTarget(object):
def __init__(self, target, allow_daemons=False, sys_resolvconf=True):
if target is None:
target = "/"
self.target = os.path.abspath(target)
self.mounts = ["/dev", "/proc", "/sys"]
self.umounts = []
self.disabled_daemons = False
self.allow_daemons = allow_daemons
self.sys_resolvconf = sys_resolvconf
self.rconf_d = None
def __enter__(self):
for p in self.mounts:
tpath = os.path.join(self.target, p[1:])
if do_mount(p, tpath, opts='--bind'):
self.umounts.append(tpath)
if not self.allow_daemons:
self.disabled_daemons = disable_daemons_in_root(self.target)
target_etc = os.path.join(self.target, "etc")
if self.target != "/" and os.path.isdir(target_etc):
# never muck with resolv.conf on /
rconf = os.path.join(target_etc, "resolv.conf")
rtd = None
try:
rtd = tempfile.mkdtemp(dir=os.path.dirname(rconf))
tmp = os.path.join(rtd, "resolv.conf")
os.rename(rconf, tmp)
self.rconf_d = rtd
shutil.copy("/etc/resolv.conf", rconf)
except:
if rtd:
shutil.rmtree(rtd)
self.rconf_d = None
raise
return self
def __exit__(self, etype, value, trace):
if self.disabled_daemons:
undisable_daemons_in_root(self.target)
# if /dev is to be unmounted, udevadm settle (LP: #1462139)
if os.path.join(self.target, "dev") in self.umounts:
subp(['udevadm', 'settle'])
for p in reversed(self.umounts):
do_umount(p)
rconf = os.path.join(self.target, "etc", "resolv.conf")
if self.sys_resolvconf and self.rconf_d:
os.rename(os.path.join(self.rconf_d, "resolv.conf"), rconf)
shutil.rmtree(self.rconf_d)
class RunInChroot(ChrootableTarget):
def __call__(self, args, **kwargs):
if self.target != "/":
chroot = ["chroot", self.target]
else:
chroot = []
return subp(chroot + args, **kwargs)
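# Typical usage, mirroring apt_update below (the target path is
# illustrative):
#   with RunInChroot('/target', allow_daemons=True) as in_chroot:
#       in_chroot(['apt-get', 'update'])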
def is_exe(fpath):
# Return path of program for execution if found in path
return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
def which(program, search=None, target=None):
if target is None or os.path.realpath(target) == "/":
target = "/"
if os.path.sep in program:
# if program had a '/' in it, then do not search PATH
# 'which' does consider cwd here. (cd / && which bin/ls) = bin/ls
# so effectively we set cwd to / (or target)
if is_exe(os.path.sep.join((target, program,))):
return program
if search is None:
paths = [p.strip('"') for p in
os.environ.get("PATH", "").split(os.pathsep)]
if target == "/":
search = paths
else:
search = [p for p in paths if p.startswith("/")]
# normalize path input
search = [os.path.abspath(p) for p in search]
for path in search:
if is_exe(os.path.sep.join((target, path, program,))):
return os.path.sep.join((path, program,))
return None
def get_paths(curtin_exe=None, lib=None, helpers=None):
# return a dictionary with paths for 'curtin_exe', 'helpers' and 'lib'
# that represent where the 'curtin' executable lives, where the 'curtin'
# module directory is (containing __init__.py) and where the 'helpers'
# directory is.
mydir = os.path.realpath(os.path.dirname(__file__))
tld = os.path.realpath(mydir + os.path.sep + "..")
if curtin_exe is None:
if os.path.isfile(os.path.join(tld, "bin", "curtin")):
curtin_exe = os.path.join(tld, "bin", "curtin")
if (curtin_exe is None and
(os.path.basename(sys.argv[0]).startswith("curtin") and
os.path.isfile(sys.argv[0]))):
curtin_exe = os.path.realpath(sys.argv[0])
if curtin_exe is None:
found = which('curtin')
if found:
curtin_exe = found
if (curtin_exe is None and os.path.exists(_INSTALLED_MAIN)):
curtin_exe = _INSTALLED_MAIN
cfile = "common" # a file in 'helpers'
if (helpers is None and
os.path.isfile(os.path.join(tld, "helpers", cfile))):
helpers = os.path.join(tld, "helpers")
if (helpers is None and
os.path.isfile(os.path.join(_INSTALLED_HELPERS_PATH, cfile))):
helpers = _INSTALLED_HELPERS_PATH
return({'curtin_exe': curtin_exe, 'lib': mydir, 'helpers': helpers})
def get_architecture(target=None):
chroot = []
if target is not None:
chroot = ['chroot', target]
out, _ = subp(chroot + ['dpkg', '--print-architecture'],
capture=True)
return out.strip()
def has_pkg_available(pkg, target=None):
chroot = []
if target is not None:
chroot = ['chroot', target]
out, _ = subp(chroot + ['apt-cache', 'pkgnames'], capture=True)
for item in out.splitlines():
if pkg == item.strip():
return True
return False
def has_pkg_installed(pkg, target=None):
chroot = []
if target is not None:
chroot = ['chroot', target]
try:
out, _ = subp(chroot + ['dpkg-query', '--show', '--showformat',
'${db:Status-Abbrev}', pkg],
capture=True)
return out.rstrip() == "ii"
except ProcessExecutionError:
return False
def find_newer(src, files):
mtime = os.stat(src).st_mtime
return [f for f in files if
os.path.exists(f) and os.stat(f).st_mtime > mtime]
def set_unexecutable(fname, strict=False):
"""set fname so it is not executable.
if strict, raise an exception if the file does not exist.
return the current mode, or None if no change is needed.
"""
if not os.path.exists(fname):
if strict:
raise ValueError('%s: file does not exist' % fname)
return None
cur = stat.S_IMODE(os.lstat(fname).st_mode)
target = cur & (~stat.S_IEXEC & ~stat.S_IXGRP & ~stat.S_IXOTH)
if cur == target:
return None
os.chmod(fname, target)
return cur
def apt_update(target=None, env=None, force=False, comment=None,
retries=None):
marker = "tmp/curtin.aptupdate"
if target is None:
target = "/"
if env is None:
env = os.environ.copy()
if retries is None:
# by default retry a failed apt-get update up to 3 times (sleeping
# 1, 2 and then 3 seconds between attempts) to allow for transient
# failures
retries = (1, 2, 3)
if comment is None:
comment = "no comment provided"
if comment.endswith("\n"):
comment = comment[:-1]
marker = os.path.join(target, marker)
# if marker exists, check if there are files that would make it obsolete
listfiles = [os.path.join(target, "etc/apt/sources.list")]
listfiles += glob.glob(
os.path.join(target, "etc/apt/sources.list.d/*.list"))
if os.path.exists(marker) and not force:
if len(find_newer(marker, listfiles)) == 0:
return
restore_perms = []
abs_tmpdir = tempfile.mkdtemp(dir=os.path.join(target, 'tmp'))
try:
abs_slist = abs_tmpdir + "/sources.list"
abs_slistd = abs_tmpdir + "/sources.list.d"
ch_tmpdir = "/tmp/" + os.path.basename(abs_tmpdir)
ch_slist = ch_tmpdir + "/sources.list"
ch_slistd = ch_tmpdir + "/sources.list.d"
# this file gets executed on apt-get update sometimes. (LP: #1527710)
motd_update = os.path.join(
target, "usr/lib/update-notifier/update-motd-updates-available")
pmode = set_unexecutable(motd_update)
if pmode is not None:
restore_perms.append((motd_update, pmode),)
# create tmpdir/sources.list with all lines other than deb-src
# avoid apt complaining by using existing and empty dir for sourceparts
os.mkdir(abs_slistd)
with open(abs_slist, "w") as sfp:
for sfile in listfiles:
with open(sfile, "r") as fp:
contents = fp.read()
for line in contents.splitlines():
line = line.lstrip()
if not line.startswith("deb-src"):
sfp.write(line + "\n")
update_cmd = [
'apt-get', '--quiet',
'--option=Acquire::Languages=none',
'--option=Dir::Etc::sourcelist=%s' % ch_slist,
'--option=Dir::Etc::sourceparts=%s' % ch_slistd,
'update']
# not using 'run_apt_command' here so we can pass 'retries' to subp
with RunInChroot(target, allow_daemons=True) as inchroot:
inchroot(update_cmd, env=env, retries=retries)
finally:
for fname, perms in restore_perms:
os.chmod(fname, perms)
if abs_tmpdir:
shutil.rmtree(abs_tmpdir)
with open(marker, "w") as fp:
fp.write(comment + "\n")
def run_apt_command(mode, args=None, aptopts=None, env=None, target=None,
execute=True, allow_daemons=False):
opts = ['--quiet', '--assume-yes',
'--option=Dpkg::options::=--force-unsafe-io',
'--option=Dpkg::Options::=--force-confold']
if args is None:
args = []
if aptopts is None:
aptopts = []
if env is None:
env = os.environ.copy()
env['DEBIAN_FRONTEND'] = 'noninteractive'
if which('eatmydata', target=target):
emd = ['eatmydata']
else:
emd = []
cmd = emd + ['apt-get'] + opts + aptopts + [mode] + args
if not execute:
return env, cmd
apt_update(target, env=env, comment=' '.join(cmd))
ric = RunInChroot(target, allow_daemons=allow_daemons)
with ric as inchroot:
return inchroot(cmd, env=env)
def system_upgrade(aptopts=None, target=None, env=None, allow_daemons=False):
LOG.debug("Upgrading system in %s", target)
for mode in ('dist-upgrade', 'autoremove'):
ret = run_apt_command(
mode, aptopts=aptopts, target=target,
env=env, allow_daemons=allow_daemons)
return ret
def install_packages(pkglist, aptopts=None, target=None, env=None,
allow_daemons=False):
if isinstance(pkglist, str):
pkglist = [pkglist]
return run_apt_command(
'install', args=pkglist,
aptopts=aptopts, target=target, env=env, allow_daemons=allow_daemons)
def is_uefi_bootable():
return os.path.exists('/sys/firmware/efi')
def run_hook_if_exists(target, hook):
"""
Look for "hook" in "target" and run it
"""
target_hook = os.path.join(target, 'curtin', hook)
if os.path.isfile(target_hook):
LOG.debug("running %s" % target_hook)
subp([target_hook])
return True
return False
def sanitize_source(source):
"""
Check the install source for type information
If no type information is present or it is an invalid
type, we default to the standard tgz format
"""
if type(source) is dict:
# already sanitized?
return source
supported = ['tgz', 'dd-tgz']
deftype = 'tgz'
for i in supported:
prefix = i + ":"
if source.startswith(prefix):
return {'type': i, 'uri': source[len(prefix):]}
LOG.debug("unknown type for url '%s', assuming type '%s'", source, deftype)
# default to tgz for unknown types
return {'type': deftype, 'uri': source}
def get_dd_images(sources):
"""
return all disk images in sources list
"""
src = []
if type(sources) is not dict:
return src
for i in sources:
if type(sources[i]) is not dict:
continue
if sources[i]['type'].startswith('dd-'):
src.append(sources[i]['uri'])
return src
def get_meminfo(meminfo="/proc/meminfo", raw=False):
mpliers = {'kB': 2**10, 'mB': 2 ** 20, 'B': 1, 'gB': 2 ** 30}
kmap = {'MemTotal:': 'total', 'MemFree:': 'free',
'MemAvailable:': 'available'}
ret = {}
with open(meminfo, "r") as fp:
for line in fp:
try:
key, value, unit = line.split()
except ValueError:
key, value = line.split()
unit = 'B'
if raw:
ret[key] = int(value) * mpliers[unit]
elif key in kmap:
ret[kmap[key]] = int(value) * mpliers[unit]
return ret
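# Example (illustrative line): a /proc/meminfo entry 'MemTotal: 2048000 kB'
# contributes {'total': 2048000 * 1024} to the returned dict; with raw=True
# the original 'MemTotal:' key is kept instead of the short name.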
def get_fs_use_info(path):
# return some filesystem usage info as tuple of (size_in_bytes, free_bytes)
statvfs = os.statvfs(path)
return (statvfs.f_frsize * statvfs.f_blocks,
statvfs.f_frsize * statvfs.f_bfree)
def human2bytes(size):
# convert human 'size' to integer
size_in = size
if isinstance(size, int):
return size
elif isinstance(size, float):
if int(size) != size:
raise ValueError("'%s': resulted in non-integer (%s)" %
(size_in, int(size)))
return size
elif not isinstance(size, str):
raise TypeError("cannot convert type %s ('%s')." % (type(size), size))
if size.endswith("B"):
size = size[:-1]
mpliers = {'B': 1, 'K': 2 ** 10, 'M': 2 ** 20, 'G': 2 ** 30, 'T': 2 ** 40}
num = size
mplier = 'B'
for m in mpliers:
if size.endswith(m):
mplier = m
num = size[0:-len(m)]
try:
num = float(num)
except ValueError:
raise ValueError("'%s' is not valid input." % size_in)
if num < 0:
raise ValueError("'%s': cannot be negative" % size_in)
val = num * mpliers[mplier]
if int(val) != val:
raise ValueError("'%s': resulted in non-integer (%s)" % (size_in, val))
return val
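# Conversion examples: human2bytes("1K") and human2bytes("1KB") both return
# 1024.0 (a float for string input), human2bytes("1.5K") returns 1536.0, and
# human2bytes(1024) returns the int unchanged. human2bytes("1.1K") raises
# ValueError because 1.1 * 1024 is not an integer.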
def import_module(import_str):
"""Import a module."""
__import__(import_str)
return sys.modules[import_str]
def try_import_module(import_str, default=None):
"""Try to import a module."""
try:
return import_module(import_str)
except ImportError:
return default
def is_file_not_found_exc(exc):
return (isinstance(exc, IOError) and exc.errno == errno.ENOENT)
def lsb_release():
fmap = {'Codename': 'codename', 'Description': 'description',
'Distributor ID': 'id', 'Release': 'release'}
global _LSB_RELEASE
if not _LSB_RELEASE:
data = {}
try:
out, err = subp(['lsb_release', '--all'], capture=True)
for line in out.splitlines():
fname, tok, val = line.partition(":")
if fname in fmap:
data[fmap[fname]] = val.strip()
missing = [k for k in fmap.values() if k not in data]
if len(missing):
LOG.warn("Missing fields in lsb_release --all output: %s",
','.join(missing))
except ProcessExecutionError as e:
LOG.warn("Unable to get lsb_release --all: %s", e)
data = {v: "UNAVAILABLE" for v in fmap.values()}
_LSB_RELEASE.update(data)
return _LSB_RELEASE
class MergedCmdAppend(argparse.Action):
"""This appends to a list in order of appearence both the option string
and the value"""
def __call__(self, parser, namespace, values, option_string=None):
if getattr(namespace, self.dest, None) is None:
setattr(namespace, self.dest, [])
getattr(namespace, self.dest).append((option_string, values,))
def json_dumps(data):
return json.dumps(data, indent=1, sort_keys=True,
separators=(',', ': ')).encode('utf-8')
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/block/__init__.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import errno
import os
import stat
import shlex
import tempfile
import itertools
from curtin import util
from curtin.log import LOG
def get_dev_name_entry(devname):
bname = devname.split('/dev/')[-1]
return (bname, "/dev/" + bname)
def is_valid_device(devname):
devent = get_dev_name_entry(devname)[1]
try:
return stat.S_ISBLK(os.stat(devent).st_mode)
except OSError as e:
if e.errno != errno.ENOENT:
raise
return False
def dev_short(devname):
if os.path.sep in devname:
return os.path.basename(devname)
return devname
def dev_path(devname):
if devname.startswith('/dev/'):
return devname
else:
return '/dev/' + devname
def sys_block_path(devname, add=None, strict=True):
toks = ['/sys/class/block', dev_short(devname)]
if add is not None:
toks.append(add)
path = os.sep.join(toks)
if strict and not os.path.exists(path):
err = OSError(
"devname '{}' did not have existing syspath '{}'".format(
devname, path))
err.errno = errno.ENOENT
raise err
return os.path.normpath(path)
def _lsblock_pairs_to_dict(lines):
ret = {}
for line in lines.splitlines():
toks = shlex.split(line)
cur = {}
for tok in toks:
k, v = tok.split("=", 1)
cur[k] = v
cur['device_path'] = get_dev_name_entry(cur['NAME'])[1]
ret[cur['NAME']] = cur
return ret
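# Example of the --pairs format parsed above (illustrative device): the line
#   NAME="vda1" SIZE="10737418240" TYPE="part"
# becomes {'vda1': {'NAME': 'vda1', 'SIZE': '10737418240', 'TYPE': 'part',
#                   'device_path': '/dev/vda1'}}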
def _lsblock(args=None):
# lsblk --help | sed -n '/Available/,/^$/p' |
# sed -e 1d -e '$d' -e 's,^[ ]\+,,' -e 's, .*,,' | sort
keys = ['ALIGNMENT', 'DISC-ALN', 'DISC-GRAN', 'DISC-MAX', 'DISC-ZERO',
'FSTYPE', 'GROUP', 'KNAME', 'LABEL', 'LOG-SEC', 'MAJ:MIN',
'MIN-IO', 'MODE', 'MODEL', 'MOUNTPOINT', 'NAME', 'OPT-IO', 'OWNER',
'PHY-SEC', 'RM', 'RO', 'ROTA', 'RQ-SIZE', 'SCHED', 'SIZE', 'STATE',
'TYPE', 'UUID']
if args is None:
args = []
args = [x.replace('!', '/') for x in args]
# in order to avoid a very odd error with '-o' and all output fields above
# we just drop one. doesn't really matter which one.
keys.remove('SCHED')
basecmd = ['lsblk', '--noheadings', '--bytes', '--pairs',
'--output=' + ','.join(keys)]
(out, _err) = util.subp(basecmd + list(args), capture=True)
out = out.replace('!', '/')
return _lsblock_pairs_to_dict(out)
def get_unused_blockdev_info():
# return a list of unused block devices. These are devices that
# do not have anything mounted on them.
# get a list of top level block devices, then iterate over it to get
# devices dependent on those. If the lsblk output for that specific
# device shows nothing mounted, then this is an unused block device
bdinfo = _lsblock(['--nodeps'])
unused = {}
for devname, data in bdinfo.items():
cur = _lsblock([data['device_path']])
mountpoints = [x for x in cur if cur[x].get('MOUNTPOINT')]
if len(mountpoints) == 0:
unused[devname] = data
return unused
def get_devices_for_mp(mountpoint):
# return a list of devices (full paths) used by the provided mountpoint
bdinfo = _lsblock()
found = set()
for devname, data in bdinfo.items():
if data['MOUNTPOINT'] == mountpoint:
found.add(data['device_path'])
if found:
return list(found)
# for some reason, on some systems, lsblk does not list mountpoint
# for devices that are mounted. This happens on /dev/vdc1 during a run
# using tools/launch.
mountpoint = [os.path.realpath(dev)
for (dev, mp, vfs, opts, freq, passno) in
get_proc_mounts() if mp == mountpoint]
return mountpoint
def get_installable_blockdevs(include_removable=False, min_size=1024**3):
good = []
unused = get_unused_blockdev_info()
for devname, data in unused.items():
if not include_removable and data.get('RM') == "1":
continue
if data.get('RO') != "0" or data.get('TYPE') != "disk":
continue
if min_size is not None and int(data.get('SIZE', '0')) < min_size:
continue
good.append(devname)
return good
def get_blockdev_for_partition(devpath):
# convert an entry in /dev/ to parent disk and partition number
# if devpath is a block device and not a partition, return (devpath, None)
# input of /dev/vdb or /dev/disk/by-label/foo
# rpath is hopefully a real-ish path in /dev (vda, sdb..)
rpath = os.path.realpath(devpath)
bname = os.path.basename(rpath)
syspath = "/sys/class/block/%s" % bname
if not os.path.exists(syspath):
syspath2 = "/sys/class/block/cciss!%s" % bname
if not os.path.exists(syspath2):
raise ValueError("%s had no syspath (%s)" % (devpath, syspath))
syspath = syspath2
ptpath = os.path.join(syspath, "partition")
if not os.path.exists(ptpath):
return (rpath, None)
ptnum = util.load_file(ptpath).rstrip()
# for a partition, real syspath is something like:
# /sys/devices/pci0000:00/0000:00:04.0/virtio1/block/vda/vda1
rsyspath = os.path.realpath(syspath)
disksyspath = os.path.dirname(rsyspath)
diskmajmin = util.load_file(os.path.join(disksyspath, "dev")).rstrip()
diskdevpath = os.path.realpath("/dev/block/%s" % diskmajmin)
# diskdevpath has something like 253:0
# and udev has put links in /dev/block/253:0 to the device name in /dev/
return (diskdevpath, ptnum)
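# Examples (illustrative devices): for '/dev/vda1' this typically returns
# ('/dev/vda', '1'), where the partition number is a string read from sysfs;
# for a whole disk like '/dev/vda' it returns ('/dev/vda', None).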
def get_pardevs_on_blockdevs(devs):
# return a dict of partitions with their info that are on provided devs
if devs is None:
devs = []
devs = [get_dev_name_entry(d)[1] for d in devs]
found = _lsblock(devs)
ret = {}
for short in found:
if found[short]['device_path'] not in devs:
ret[short] = found[short]
return ret
def stop_all_unused_multipath_devices():
"""
Stop all unused multipath devices.
"""
multipath = util.which('multipath')
# The multipath command is only unavailable when the multipath-tools
# package is not installed. Nothing needs to be done in this case
# because the system doesn't create multipath devices without this
# package installed, so we have nothing to stop.
if not multipath:
return
# Command multipath -F flushes all unused multipath device maps
cmd = [multipath, '-F']
try:
# unless multipath cleared *everything* it will exit with 1
util.subp(cmd, rcs=[0, 1])
except util.ProcessExecutionError as e:
LOG.warn("Failed to stop multipath devices: %s", e)
def rescan_block_devices():
# run 'blockdev --rereadpt' for all block devices not currently mounted
unused = get_unused_blockdev_info()
devices = []
for devname, data in unused.items():
if data.get('RM') == "1":
continue
if data.get('RO') != "0" or data.get('TYPE') != "disk":
continue
devices.append(data['device_path'])
if not devices:
LOG.debug("no devices found to rescan")
return
cmd = ['blockdev', '--rereadpt'] + devices
try:
util.subp(cmd, capture=True)
except util.ProcessExecutionError as e:
# FIXME: it's less than ideal to swallow this error, but until
# we fix LP: #1489521 we kind of need to.
LOG.warn("rescanning devices failed: %s", e)
util.subp(['udevadm', 'settle'])
return
def blkid(devs=None, cache=True):
if devs is None:
devs = []
# 14.04 blkid reads undocumented /dev/.blkid.tab
# man pages mention /run/blkid.tab and /etc/blkid.tab
if not cache:
cfiles = ("/run/blkid/blkid.tab", "/dev/.blkid.tab", "/etc/blkid.tab")
for cachefile in cfiles:
if os.path.exists(cachefile):
os.unlink(cachefile)
cmd = ['blkid', '-o', 'full']
# blkid output is : KEY=VALUE
# where KEY is TYPE, UUID, PARTUUID, LABEL
out, err = util.subp(cmd, capture=True)
data = {}
for line in out.splitlines():
curdev, curdata = line.split(":", 1)
data[curdev] = dict(tok.split('=', 1) for tok in shlex.split(curdata))
return data
def detect_multipath(target_mountpoint):
"""
Detect if the operating system has been installed to a multipath device.
"""
# The obvious way to detect multipath is to use multipath utility which is
# provided by the multipath-tools package. Unfortunately, multipath-tools
# package is not available in all ephemeral images hence we can't use it.
# Another reasonable way to detect multipath is to look for two (or more)
# devices with the same World Wide Name (WWN) which can be fetched using
# scsi_id utility. This way doesn't work as well because WWNs are not
# unique in some cases which leads to false positives which may prevent
# system from booting (see LP: #1463046 for details).
# Taking into account all the issues mentioned above, the current implementation
# detects multipath by looking for a filesystem with the same UUID
# as the target device. It relies on the fact that all alternative routes
# to the same disk observe identical partition information including UUID.
# There are some issues with this approach as well though. We won't detect
# a multipath disk if it doesn't have any filesystems. The good news is that
# the target disk will always have a filesystem because curtin creates them
# while installing the system.
rescan_block_devices()
binfo = blkid(cache=False)
LOG.debug("detect_multipath found blkid info: %s", binfo)
# get_devices_for_mp may return multiple devices by design. That case is
# not yet exercised, but it will occur when the installer creates
# separate disk partitions for / and /boot. We need to do UUID-based
# multipath detection against each of the target devices.
target_devs = get_devices_for_mp(target_mountpoint)
LOG.debug("target_devs: %s" % target_devs)
for devpath, data in binfo.items():
# We need to figure out UUID of the target device first
if devpath not in target_devs:
continue
# This entry contains information about one of target devices
target_uuid = data.get('UUID')
# UUID-based multipath detection won't work if target partition
# doesn't have UUID assigned
if not target_uuid:
LOG.warn("Target partition %s doesn't have UUID assigned",
devpath)
continue
LOG.debug("%s: %s" % (devpath, data.get('UUID', "")))
# Iterating over available devices to see if any other device
# has the same UUID as the target device. If such device exists
# we probably installed the system to the multipath device.
for other_devpath, other_data in binfo.items():
if ((other_data.get('UUID') == target_uuid) and
(other_devpath != devpath)):
return True
# No other devices have the same UUID as the target devices.
# We probably installed the system to the non-multipath device.
return False
def get_scsi_wwid(device, replace_whitespace=False):
"""
Issue a call to scsi_id utility to get WWID of the device.
"""
cmd = ['/lib/udev/scsi_id', '--whitelisted', '--device=%s' % device]
if replace_whitespace:
cmd.append('--replace-whitespace')
try:
(out, err) = util.subp(cmd, capture=True)
scsi_wwid = out.rstrip('\n')
return scsi_wwid
except util.ProcessExecutionError as e:
LOG.warn("Failed to get WWID: %s", e)
return None
def get_multipath_wwids():
"""
Get WWIDs of all multipath devices available in the system.
"""
multipath_devices = set()
multipath_wwids = set()
devuuids = [(d, i['UUID']) for d, i in blkid().items() if 'UUID' in i]
# Looking for two disks which contain filesystems with the same UUID.
for (dev1, uuid1), (dev2, uuid2) in itertools.combinations(devuuids, 2):
if uuid1 == uuid2:
multipath_devices.add(get_blockdev_for_partition(dev1)[0])
for device in multipath_devices:
wwid = get_scsi_wwid(device)
# Function get_scsi_wwid() may return None in case of errors or
# WWID field may be empty for some buggy disk. We don't want to
# propagate both of these value further to avoid generation of
# incorrect /etc/multipath/bindings file.
if wwid:
multipath_wwids.add(wwid)
return multipath_wwids
def get_root_device(dev, fpath="curtin"):
"""
Get root partition for specified device, based on presence of /curtin.
"""
partitions = get_pardevs_on_blockdevs(dev)
target = None
tmp_mount = tempfile.mkdtemp()
for i in partitions:
dev_path = partitions[i]['device_path']
mp = None
try:
util.do_mount(dev_path, tmp_mount)
mp = tmp_mount
curtin_dir = os.path.join(tmp_mount, fpath)
if os.path.isdir(curtin_dir):
target = dev_path
break
except Exception:
# mount failed or the /curtin probe failed; try the next partition
pass
finally:
if mp:
util.do_umount(mp)
os.rmdir(tmp_mount)
if target is None:
raise ValueError("Could not find root device")
return target
def get_volume_uuid(path):
"""
Get uuid of disk with given path. This address uniquely identifies
the device and remains consistent across reboots
"""
(out, _err) = util.subp(["blkid", "-o", "export", path], capture=True)
for line in out.splitlines():
if "UUID" in line:
return line.split('=')[-1]
return ''
def get_mountpoints():
"""
Returns a list of all mountpoints where filesystems are currently mounted.
"""
info = _lsblock()
proc_mounts = [mp for (dev, mp, vfs, opts, freq, passno) in
get_proc_mounts()]
lsblock_mounts = list(i.get("MOUNTPOINT") for name, i in info.items() if
i.get("MOUNTPOINT") is not None and
i.get("MOUNTPOINT") != "")
return list(set(proc_mounts + lsblock_mounts))
def get_proc_mounts():
"""
Returns a list of tuples for each entry in /proc/mounts
"""
mounts = []
with open("/proc/mounts", "r") as fp:
for line in fp:
try:
(dev, mp, vfs, opts, freq, passno) = \
line.strip().split(None, 5)
mounts.append((dev, mp, vfs, opts, freq, passno))
except ValueError:
continue
return mounts
def lookup_disk(serial):
"""
Search for a disk by its serial number using /dev/disk/by-id/
"""
# Get all volumes in /dev/disk/by-id/ containing the serial string. The
# string specified can be either in the short or long serial format
disks = list(filter(lambda x: serial in x, os.listdir("/dev/disk/by-id/")))
if not disks:
raise ValueError("no disk with serial '%s' found" % serial)
# Sort by length and take the shortest path name, as the longer path names
# will be the partitions on the disk. Then use os.path.realpath to
# determine the path to the block device in /dev/
disks.sort(key=lambda x: len(x))
path = os.path.realpath("/dev/disk/by-id/%s" % disks[0])
if not os.path.exists(path):
raise ValueError("path '%s' to block device for disk with serial '%s' \
does not exist" % (path, serial))
return path
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/block/mdadm.py
# Copyright (C) 2015 Canonical Ltd.
#
# Author: Ryan Harper
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
# This module wraps calls to the mdadm utility for examining Linux SoftRAID
# virtual devices. Functions prefixed with 'mdadm_' involve executing
# the 'mdadm' command in a subprocess. The remaining functions handle
# manipulation of the mdadm output.
import os
import re
import shlex
from subprocess import CalledProcessError
from curtin.block import (dev_short, dev_path, is_valid_device, sys_block_path)
from curtin import util
from curtin.log import LOG
NOSPARE_RAID_LEVELS = [
'linear', 'raid0', '0', 0,
]
SPARE_RAID_LEVELS = [
'raid1', 'stripe', 'mirror', '1', 1,
'raid4', '4', 4,
'raid5', '5', 5,
'raid6', '6', 6,
'raid10', '10', 10,
]
VALID_RAID_LEVELS = NOSPARE_RAID_LEVELS + SPARE_RAID_LEVELS
# https://www.kernel.org/doc/Documentation/md.txt
'''
clear
No devices, no size, no level
Writing is equivalent to STOP_ARRAY ioctl
inactive
May have some settings, but array is not active
all IO results in error
When written, doesn't tear down array, but just stops it
suspended (not supported yet)
All IO requests will block. The array can be reconfigured.
Writing this, if accepted, will block until array is quiescent
readonly
no resync can happen. no superblocks get written.
write requests fail
read-auto
like readonly, but behaves like 'clean' on a write request.
clean - no pending writes, but otherwise active.
When written to inactive array, starts without resync
If a write request arrives then
if metadata is known, mark 'dirty' and switch to 'active'.
if not known, block and switch to write-pending
If written to an active array that has pending writes, then fails.
active
fully active: IO and resync can be happening.
When written to inactive array, starts with resync
write-pending
clean, but writes are blocked waiting for 'active' to be written.
active-idle
like active, but no writes have been seen for a while (safe_mode_delay).
'''
ERROR_RAID_STATES = [
'clear',
'inactive',
'suspended',
]
READONLY_RAID_STATES = [
'readonly',
]
READWRITE_RAID_STATES = [
'read-auto',
'clean',
'active',
'active-idle',
'write-pending',
]
VALID_RAID_ARRAY_STATES = (
ERROR_RAID_STATES +
READONLY_RAID_STATES +
READWRITE_RAID_STATES
)
# need an on-import check of version and set the value for later reference
''' mdadm version < 3.3 doesn't include enough info when using --export
and we must use --detail and parse out information. This check
sets MDADM_USE_EXPORT to True if we can use --export for a key=value
list with enough info, False if the version is less than 3.3.
'''
MDADM_USE_EXPORT = util.lsb_release()['codename'] not in ['precise', 'trusty']
#
# mdadm executors
#
def mdadm_assemble(md_devname=None, devices=None, spares=None, scan=False):
# md_devname is a /dev/XXXX
# devices is a non-empty list of /dev/xxx
# spares, if non-empty, is a list of /dev/xxx appended to the command
if devices is None:
devices = []
if spares is None:
spares = []
cmd = ["mdadm", "--assemble"]
if scan:
cmd += ['--scan']
else:
valid_mdname(md_devname)
cmd += [md_devname, "--run"] + devices
if spares:
cmd += spares
util.subp(cmd, capture=True, rcs=[0, 1, 2])
util.subp(["udevadm", "settle"])
def mdadm_create(md_devname, raidlevel, devices, spares=None, md_name=""):
LOG.debug('mdadm_create: ' +
'md_devname=%s raidlevel=%s ' % (md_devname, raidlevel) +
' devices=%s spares=%s name=%s' % (devices, spares, md_name))
assert_valid_devpath(md_devname)
if raidlevel not in VALID_RAID_LEVELS:
raise ValueError('Invalid raidlevel: [{}]'.format(raidlevel))
min_devices = md_minimum_devices(raidlevel)
if len(devices) < min_devices:
err = 'Not enough devices for raidlevel: ' + str(raidlevel)
err += ' minimum devices needed: ' + str(min_devices)
raise ValueError(err)
if spares and raidlevel not in SPARE_RAID_LEVELS:
err = ('Raidlevel does not support spare devices: ' + str(raidlevel))
raise ValueError(err)
cmd = ["mdadm", "--create", md_devname, "--run",
"--level=%s" % raidlevel, "--raid-devices=%s" % len(devices)]
if md_name:
cmd.append("--name=%s" % md_name)
for device in devices:
# Zero out device superblock just in case device has been used for raid
# before, as this will cause many issues
util.subp(["mdadm", "--zero-superblock", device], capture=True)
cmd.append(device)
if spares:
cmd.append("--spare-devices=%s" % len(spares))
for device in spares:
util.subp(["mdadm", "--zero-superblock", device], capture=True)
cmd.append(device)
# Create the raid device
util.subp(["udevadm", "settle"])
util.subp(["udevadm", "control", "--stop-exec-queue"])
util.subp(cmd, capture=True)
util.subp(["udevadm", "control", "--start-exec-queue"])
util.subp(["udevadm", "settle",
"--exit-if-exists=%s" % md_devname])
def mdadm_examine(devpath, export=MDADM_USE_EXPORT):
''' execute mdadm --examine, and optionally
append --export.
Parse and return dict of key=val from output'''
assert_valid_devpath(devpath)
cmd = ["mdadm", "--examine"]
if export:
cmd.extend(["--export"])
cmd.extend([devpath])
try:
(out, _err) = util.subp(cmd, capture=True)
except CalledProcessError:
LOG.exception('Error: not a valid md device: ' + devpath)
return {}
if export:
data = __mdadm_export_to_dict(out)
else:
data = __upgrade_detail_dict(__mdadm_detail_to_dict(out))
return data
def mdadm_stop(devpath):
assert_valid_devpath(devpath)
LOG.info("mdadm stopping: %s" % devpath)
util.subp(["mdadm", "--stop", devpath], rcs=[0, 1], capture=True)
def mdadm_remove(devpath):
assert_valid_devpath(devpath)
LOG.info("mdadm removing: %s" % devpath)
util.subp(["mdadm", "--remove", devpath], rcs=[0, 1], capture=True)
def mdadm_query_detail(md_devname, export=MDADM_USE_EXPORT):
valid_mdname(md_devname)
cmd = ["mdadm", "--query", "--detail"]
if export:
cmd.extend(["--export"])
cmd.extend([md_devname])
(out, _err) = util.subp(cmd, capture=True)
if export:
data = __mdadm_export_to_dict(out)
else:
data = __upgrade_detail_dict(__mdadm_detail_to_dict(out))
return data
def mdadm_detail_scan():
(out, _err) = util.subp(["mdadm", "--detail", "--scan"], capture=True)
if not _err:
return out
# ------------------------------ #
def valid_mdname(md_devname):
assert_valid_devpath(md_devname)
if not is_valid_device(md_devname):
raise ValueError('Specified md device does not exist: ' + md_devname)
return True
def valid_devpath(devpath):
if devpath:
return devpath.startswith('/dev')
return False
def assert_valid_devpath(devpath):
if not valid_devpath(devpath):
raise ValueError("Invalid devpath: '%s'" % devpath)
def md_sysfs_attr(md_devname, attrname):
if not valid_mdname(md_devname):
raise ValueError('Invalid md devicename: [{}]'.format(md_devname))
attrdata = ''
# /sys/class/block//md
sysmd = sys_block_path(md_devname, "md")
# /sys/class/block//md/attrname
sysfs_attr_path = os.path.join(sysmd, attrname)
if os.path.isfile(sysfs_attr_path):
attrdata = util.load_file(sysfs_attr_path).strip()
return attrdata
def md_raidlevel_short(raidlevel):
if isinstance(raidlevel, int) or raidlevel in ['linear', 'stripe']:
return raidlevel
return int(raidlevel.replace('raid', ''))
def md_minimum_devices(raidlevel):
''' return the minimum number of devices for a given raid level '''
rl = md_raidlevel_short(raidlevel)
if rl in [0, 1, 'linear', 'stripe']:
return 2
if rl in [5]:
return 3
if rl in [6, 10]:
return 4
return -1
def __md_check_array_state(md_devname, mode='READWRITE'):
modes = {
'READWRITE': READWRITE_RAID_STATES,
'READONLY': READONLY_RAID_STATES,
'ERROR': ERROR_RAID_STATES,
}
if mode not in modes:
raise ValueError('Invalid Array State mode: ' + mode)
array_state = md_sysfs_attr(md_devname, 'array_state')
if array_state in modes[mode]:
return True
return False
def md_check_array_state_rw(md_devname):
return __md_check_array_state(md_devname, mode='READWRITE')
def md_check_array_state_ro(md_devname):
return __md_check_array_state(md_devname, mode='READONLY')
def md_check_array_state_error(md_devname):
return __md_check_array_state(md_devname, mode='ERROR')
def __mdadm_export_to_dict(output):
''' convert Key=Value text output into dictionary '''
return dict(tok.split('=', 1) for tok in shlex.split(output))
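# Example of the --export format handled here: the lines
#   MD_LEVEL=raid1
#   MD_DEVICES=2
# parse to {'MD_LEVEL': 'raid1', 'MD_DEVICES': '2'}; shlex.split also copes
# with quoted values containing spaces.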
def __mdadm_detail_to_dict(input):
''' Convert mdadm --detail output to dictionary
/dev/vde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 93a73e10:427f280b:b7076c02:204b8f7a
Name : wily-foobar:0 (local to host wily-foobar)
Creation Time : Sat Dec 12 16:06:05 2015
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 20955136 (9.99 GiB 10.73 GB)
Used Dev Size : 20955136 (9.99 GiB 10.73 GB)
Array Size : 10477568 (9.99 GiB 10.73 GB)
Data Offset : 16384 sectors
Super Offset : 8 sectors
Unused Space : before=16296 sectors, after=0 sectors
State : clean
Device UUID : 8fcd62e6:991acc6e:6cb71ee3:7c956919
Update Time : Sat Dec 12 16:09:09 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 65b57c2e - correct
Events : 17
Device Role : spare
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
'''
data = {}
device = re.findall('^(\/dev\/[a-zA-Z0-9-\._]+)', input)
if len(device) == 1:
data.update({'device': device[0]})
else:
raise ValueError('Failed to determine device in input')
# FIXME: probably could do a better regex to match the LHS which
# has one, two or three words
for f in re.findall(r'(\w+|\w+\ \w+|\w+\ \w+\ \w+)' +
r'\ \:\ ([a-zA-Z0-9\-\.,: \(\)=\']+)',
input_text, re.MULTILINE):
key = f[0].replace(' ', '_').lower()
val = f[1]
if key in data:
raise ValueError('Duplicate key in mdadm regex parsing: ' + key)
data.update({key: val})
return data
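# Worked example from the docstring above: the header line yields
# data['device'] = '/dev/vde', and a line such as
# 'Raid Level : raid1' becomes data['raid_level'] = 'raid1'
# (LHS lower-cased, spaces replaced with underscores).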
def md_device_key_role(devname):
if not devname:
raise ValueError('Missing parameter devname')
return 'MD_DEVICE_' + dev_short(devname) + '_ROLE'
def md_device_key_dev(devname):
if not devname:
raise ValueError('Missing parameter devname')
return 'MD_DEVICE_' + dev_short(devname) + '_DEV'
def __upgrade_detail_dict(detail):
''' This method attempts to convert mdadm --detail output into
a KEY=VALUE output the same as mdadm --detail --export from mdadm v3.3
'''
# if the input already has MD_UUID, it's already been converted
if 'MD_UUID' in detail:
return detail
md_detail = {
'MD_LEVEL': detail['raid_level'],
'MD_DEVICES': detail['raid_devices'],
'MD_METADATA': detail['version'],
'MD_NAME': detail['name'].split()[0],
}
# mdadm --examine output has 'Array UUID'
if 'array_uuid' in detail:
md_detail.update({'MD_UUID': detail['array_uuid']})
# mdadm --query --detail output has 'UUID'
elif 'uuid' in detail:
md_detail.update({'MD_UUID': detail['uuid']})
device = detail['device']
# MD_DEVICE_vdc1_DEV=/dev/vdc1
md_detail.update({md_device_key_dev(device): device})
if 'device_role' in detail:
role = detail['device_role']
if role != 'spare':
# device_role = Active device 1
role = role.split()[-1]
# MD_DEVICE_vdc1_ROLE=spare
md_detail.update({md_device_key_role(device): role})
return md_detail
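# Illustrative conversion (sample values assumed, and assuming
# dev_short('/dev/vdc1') -> 'vdc1'): an examine dict with
# raid_level='raid1', raid_devices='2', version='1.2',
# name='wily-foobar:0 (local to host wily-foobar)', device='/dev/vdc1'
# and device_role='Active device 1' becomes:
# {'MD_LEVEL': 'raid1', 'MD_DEVICES': '2', 'MD_METADATA': '1.2',
# 'MD_NAME': 'wily-foobar:0', 'MD_DEVICE_vdc1_DEV': '/dev/vdc1',
# 'MD_DEVICE_vdc1_ROLE': '1'}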
def md_read_run_mdadm_map():
''' Parse /run/mdadm/map into a dict keyed by md shortname.
Input lines look like:
md1 1.2 59beb40f:4c202f67:088e702b:efdf577a /dev/md1
md0 0.90 077e6a9e:edf92012:e2a6e712:b193f786 /dev/md0
and the return value maps each name to a tuple:
# md_shortname = (metaversion, md_uuid, md_devpath)
data = {
'md1': ('1.2', '59beb40f:4c202f67:088e702b:efdf577a', '/dev/md1'),
'md0': ('0.90', '077e6a9e:edf92012:e2a6e712:b193f786', '/dev/md0'),
}
'''
mdadm_map = {}
run_mdadm_map = '/run/mdadm/map'
if os.path.exists(run_mdadm_map):
with open(run_mdadm_map, 'r') as fp:
data = fp.read().strip()
for entry in data.split('\n'):
(key, meta, md_uuid, dev) = entry.split()
mdadm_map.update({key: (meta, md_uuid, dev)})
return mdadm_map
def md_get_spares_list(devpath):
sysfs_md = sys_block_path(devpath, "md")
spares = [dev_path(dev[4:])
for dev in os.listdir(sysfs_md)
if (dev.startswith('dev-') and
util.load_file(os.path.join(sysfs_md,
dev,
'state')).strip() == 'spare')]
return spares
def md_get_devices_list(devpath):
sysfs_md = sys_block_path(devpath, "md")
devices = [dev_path(dev[4:])
for dev in os.listdir(sysfs_md)
if (dev.startswith('dev-') and
util.load_file(os.path.join(sysfs_md,
dev,
'state')).strip() != 'spare')]
return devices
def md_check_array_uuid(md_devname, md_uuid):
valid_mdname(md_devname)
# confirm we have /dev/{mdname} by following the udev symlink
mduuid_path = ('/dev/disk/by-id/md-uuid-' + md_uuid)
mdlink_devname = dev_path(os.path.realpath(mduuid_path))
if md_devname != mdlink_devname:
err = ('Mismatch between devname and md-uuid symlink: ' +
'%s -> %s != %s' % (mduuid_path, mdlink_devname, md_devname))
raise ValueError(err)
return True
def md_get_uuid(md_devname):
valid_mdname(md_devname)
md_query = mdadm_query_detail(md_devname)
return md_query.get('MD_UUID', None)
def _compare_devlist(expected, found):
LOG.debug('comparing device lists: '
'expected: {} found: {}'.format(expected, found))
expected = set(expected)
found = set(found)
if expected != found:
missing = expected.difference(found)
extra = found.difference(expected)
raise ValueError("RAID array device list does not match."
" Missing: {} Extra: {}".format(missing, extra))
def md_check_raidlevel(raidlevel):
# Validate raidlevel against what curtin supports configuring
if raidlevel not in VALID_RAID_LEVELS:
err = ('Invalid raidlevel: ' + raidlevel +
' Must be one of: ' + str(VALID_RAID_LEVELS))
raise ValueError(err)
return True
def md_block_until_in_sync(md_devname):
'''
sync_completed
This shows the number of sectors that have been completed of
whatever the current sync_action is, followed by the number of
sectors in total that could need to be processed. The two
numbers are separated by a '/' thus effectively showing one
value, a fraction of the process that is complete.
A 'select' on this attribute will return when resync completes,
when it reaches the current sync_max (below) and possibly at
other times.
'''
# FIXME: use selectors to block on: /sys/class/block/mdX/md/sync_completed
pass
def md_check_array_state(md_devname):
# check array state
writable = md_check_array_state_rw(md_devname)
degraded = md_sysfs_attr(md_devname, 'degraded')
sync_action = md_sysfs_attr(md_devname, 'sync_action')
if not writable:
raise ValueError('Array not in writable state: ' + md_devname)
if degraded != "0":
raise ValueError('Array in degraded state: ' + md_devname)
if sync_action != "idle":
raise ValueError('Array syncing, not idle state: ' + md_devname)
return True
def md_check_uuid(md_devname):
md_uuid = md_get_uuid(md_devname)
if not md_uuid:
raise ValueError('Failed to get md UUID from device: ' + md_devname)
return md_check_array_uuid(md_devname, md_uuid)
def md_check_devices(md_devname, devices):
if not devices:
raise ValueError('Cannot verify raid array with empty device list')
# collect and compare raid devices based on md name versus
# expected device list.
#
# NB: In some cases, a device might report as a spare until
# md has finished syncing it into the array. Currently
# we fail the check since the specified raid device is not
# yet in its proper role. Callers can check mdadm_sync_action
# state to see if the array is currently recovering, which would
# explain the failure. Also mdadm_degraded will indicate if the
# raid is currently degraded or not, which would also explain the
# failure.
md_raid_devices = md_get_devices_list(md_devname)
LOG.debug('md_check_devices: md_raid_devs: ' + str(md_raid_devices))
_compare_devlist(devices, md_raid_devices)
def md_check_spares(md_devname, spares):
# collect and compare spare devices based on md name versus
# expected device list.
md_raid_spares = md_get_spares_list(md_devname)
_compare_devlist(spares, md_raid_spares)
def md_check_array_membership(md_devname, devices):
# validate that all devices are members of the correct array
md_uuid = md_get_uuid(md_devname)
for device in devices:
dev_examine = mdadm_examine(device, export=False)
if 'MD_UUID' not in dev_examine:
raise ValueError('Device is not part of an array: ' + device)
dev_uuid = dev_examine['MD_UUID']
if dev_uuid != md_uuid:
err = "Device {} is not part of {} array. ".format(device,
md_devname)
err += "MD_UUID mismatch: device:{} != array:{}".format(dev_uuid,
md_uuid)
raise ValueError(err)
def md_check(md_devname, raidlevel, devices=[], spares=[]):
''' Check passed in variables from storage configuration against
the system we're running upon.
'''
LOG.debug('RAID validation: ' +
'name={} raidlevel={} devices={} spares={}'.format(md_devname,
raidlevel,
devices,
spares))
assert_valid_devpath(md_devname)
md_check_array_state(md_devname)
md_check_raidlevel(raidlevel)
md_check_uuid(md_devname)
md_check_devices(md_devname, devices)
md_check_spares(md_devname, spares)
md_check_array_membership(md_devname, devices + spares)
LOG.debug('RAID array OK: ' + md_devname)
return True
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/block/mkfs.py
# Copyright (C) 2016 Canonical Ltd.
#
# Author: Wesley Wiedenmeier
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
# This module wraps calls to mkfs.<fstype> and determines the appropriate flags
# for each filesystem type
from curtin import util
from curtin import block
import string
import os
from uuid import uuid1
mkfs_commands = {
"btrfs": "mkfs.btrfs",
"ext2": "mkfs.ext2",
"ext3": "mkfs.ext3",
"ext4": "mkfs.ext4",
"fat": "mkfs.vfat",
"fat12": "mkfs.vfat",
"fat16": "mkfs.vfat",
"fat32": "mkfs.vfat",
"vfat": "mkfs.vfat",
"jfs": "jfs_mkfs",
"ntfs": "mkntfs",
"reiserfs": "mkfs.reiserfs",
"swap": "mkswap",
"xfs": "mkfs.xfs"
}
specific_to_family = {
"ext2": "ext",
"ext3": "ext",
"ext4": "ext",
"fat12": "fat",
"fat16": "fat",
"fat32": "fat",
"vfat": "fat",
}
label_length_limits = {
"btrfs": 256,
"ext": 16,
"fat": 11,
"jfs": 16, # see jfs_tune manpage
"ntfs": 32,
"reiserfs": 16,
"swap": 15, # not in manpages, found experimentally
"xfs": 12
}
family_flag_mappings = {
"label": {"btrfs": "--label",
"ext": "-L",
"fat": "-n",
"jfs": "-L",
"ntfs": "--label",
"reiserfs": "--label",
"swap": "--label",
"xfs": "-L"},
"uuid": {"btrfs": "--uuid",
"ext": "-U",
"reiserfs": "--uuid",
"swap": "--uuid"},
"force": {"btrfs": "--force",
"ext": "-F",
"ntfs": "--force",
"reiserfs": "-f",
"swap": "--force",
"xfs": "-f"},
"fatsize": {"fat": "-F"},
"quiet": {"ext": "-q",
"ntfs": "-q",
"reiserfs": "-q",
"xfs": "--quiet"}
}
release_flag_mapping_overrides = {
"precise": {
"force": {"btrfs": None},
"uuid": {"btrfs": None}},
"trusty": {
"uuid": {"btrfs": None}},
}
def valid_fstypes():
return list(mkfs_commands.keys())
def get_flag_mapping(flag_name, fs_family, param=None, strict=False):
ret = []
release = util.lsb_release()['codename']
overrides = release_flag_mapping_overrides.get(release, {})
if flag_name in overrides and fs_family in overrides[flag_name]:
flag_sym = overrides[flag_name][fs_family]
else:
flag_sym_families = family_flag_mappings.get(flag_name)
if flag_sym_families is None:
raise ValueError("unsupported flag '%s'" % flag_name)
flag_sym = flag_sym_families.get(fs_family)
if flag_sym is None:
if strict:
raise ValueError("flag '%s' not supported by fs family '%s'" %
flag_name, fs_family)
else:
ret = [flag_sym]
if param is not None:
ret.append(param)
return ret
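# Usage sketch (flag tables as defined above):
# get_flag_mapping('label', 'ext', param='rootfs') => ['-L', 'rootfs']
# get_flag_mapping('uuid', 'fat') => [] (fat has no uuid flag; with
# strict=True this raises ValueError instead)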
def mkfs(path, fstype, strict=False, label=None, uuid=None, force=False):
"""Make filesystem on block device with given path using given fstype and
appropriate flags for filesystem family.
Filesystem uuid and label can be passed in as kwargs. By default no
label or uuid will be used. If a filesystem label is too long curtin
will raise a ValueError if the strict flag is true or will truncate
it to the maximum possible length.
If a flag is not supported by a filesystem family mkfs will raise a
ValueError if the strict flag is true or silently ignore it otherwise.
Force can be specified to force the mkfs command to continue even if it
finds old data or filesystems on the partition.
"""
if path is None:
raise ValueError("invalid block dev path '%s'" % path)
if not os.path.exists(path):
raise ValueError("'%s': no such file or directory" % path)
fs_family = specific_to_family.get(fstype, fstype)
mkfs_cmd = mkfs_commands.get(fstype)
if not mkfs_cmd:
raise ValueError("unsupported fs type '%s'" % fstype)
if util.which(mkfs_cmd) is None:
raise ValueError("need '%s' but it could not be found" % mkfs_cmd)
cmd = [mkfs_cmd]
if force:
cmd.extend(get_flag_mapping("force", fs_family, strict=strict))
if label is not None:
limit = label_length_limits.get(fs_family)
if len(label) > limit:
if strict:
raise ValueError("length of fs label for '%s' exceeds max \
allowed for fstype '%s'. max is '%s'"
% (path, fstype, limit))
else:
label = label[:limit]
cmd.extend(get_flag_mapping("label", fs_family, param=label,
strict=strict))
# If uuid is not specified, generate one and try to use it
if uuid is None:
uuid = str(uuid1())
cmd.extend(get_flag_mapping("uuid", fs_family, param=uuid, strict=strict))
if fs_family == "fat":
fat_size = fstype.strip(string.ascii_letters)
if fat_size in ["12", "16", "32"]:
cmd.extend(get_flag_mapping("fatsize", fs_family, param=fat_size,
strict=strict))
cmd.append(path)
util.subp(cmd, capture=True)
# if fs_family does not support specifying uuid then use blkid to find it
# if blkid is unable to then just return None for uuid
if fs_family not in family_flag_mappings['uuid']:
try:
uuid = block.blkid()[path]['UUID']
except (util.ProcessExecutionError, KeyError):
uuid = None
# return uuid, may be none if it could not be specified and blkid could not
# find it
return uuid
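# Usage sketch (device path assumed):
# uuid = mkfs('/dev/vdb1', 'ext4', label='rootfs', force=True)
# runs approximately: mkfs.ext4 -F -L rootfs -U <generated-uuid> /dev/vdb1
# and returns the uuid in effect on the new filesystem.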
def mkfs_from_config(path, info, strict=False):
"""Make filesystem on block device with given path according to storage
config given"""
fstype = info.get('fstype')
if fstype is None:
raise ValueError("fstype must be specified")
# NOTE: Since old metadata on partitions that have not been wiped can cause
# some mkfs commands to refuse to work, it's best to use force=True
mkfs(path, fstype, strict=strict, force=True, uuid=info.get('uuid'),
label=info.get('label'))
curtin-0.1.0~bzr365/curtin/commands/__init__.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
def populate_one_subcmd(parser, options_dict, handler):
for ent in options_dict:
args = ent[0]
if not isinstance(args, (list, tuple)):
args = (args,)
parser.add_argument(*args, **ent[1])
parser.set_defaults(func=handler)
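# Example options_dict entry, mirroring the CMD_ARGUMENTS tuples used by
# the command modules below; each entry is (args, kwargs) handed to
# parser.add_argument():
# ((('-t', '--target'),
# {'help': 'target mount point', 'action': 'store'}),)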
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/apply_net.py
# Copyright (C) 2015 Canonical Ltd.
#
# Author: Ryan Harper
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import curtin.net as net
import curtin.util as util
from . import populate_one_subcmd
def apply_net(target, network_state=None, network_config=None):
if network_state is None and network_config is None:
msg = "Must provide at least config or state"
sys.stderr.write(msg + "\n")
raise Exception(msg)
if target is None:
msg = "Must provide target"
sys.stderr.write(msg + "\n")
raise Exception(msg)
if network_state:
ns = net.network_state.from_state_file(network_state)
elif network_config:
ns = net.parse_net_config(network_config)
net.render_network_state(target=target, network_state=ns)
def apply_net_main(args):
# curtin apply_net [--net-state=/config/netstate.yml] [--target=/]
# [--net-config=/config/maas_net.yml]
state = util.load_command_environment()
if args.target is not None:
state['target'] = args.target
if args.net_state is not None:
state['network_state'] = args.net_state
if args.net_config is not None:
state['network_config'] = args.net_config
if state['target'] is None:
sys.stderr.write("Unable to find target. "
"Use --target or set TARGET_MOUNT_POINT\n")
sys.exit(2)
if not state['network_config'] and not state['network_state']:
sys.stderr.write("Must provide at least config or state\n")
sys.exit(2)
apply_net(target=state['target'],
network_state=state['network_state'],
network_config=state['network_config'])
sys.exit(0)
CMD_ARGUMENTS = (
((('-s', '--net-state'),
{'help': ('file to read containing network state. '
'defaults to env["OUTPUT_NETWORK_STATE"]'),
'metavar': 'NETSTATE', 'action': 'store',
'default': os.environ.get('OUTPUT_NETWORK_STATE')}),
(('-t', '--target'),
{'help': ('target filesystem root to add swap file to. '
'default is env["TARGET_MOUNT_POINT"]'),
'metavar': 'TARGET', 'action': 'store',
'default': os.environ.get('TARGET_MOUNT_POINT')}),
(('-c', '--net-config'),
{'help': ('file to read containing curtin network config.'
'defaults to env["OUTPUT_NETWORK_CONFIG"]'),
'metavar': 'NETCONFIG', 'action': 'store',
'default': os.environ.get('OUTPUT_NETWORK_CONFIG')})))
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, apply_net_main)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/block_meta.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
from collections import OrderedDict
from curtin import (block, config, util)
from curtin.block import mdadm
from curtin.log import LOG
from curtin.block import mkfs
from . import populate_one_subcmd
from curtin.udev import compose_udev_equality
import glob
import os
import platform
import sys
import tempfile
import time
import re
SIMPLE = 'simple'
SIMPLE_BOOT = 'simple-boot'
CUSTOM = 'custom'
CMD_ARGUMENTS = (
((('-D', '--devices'),
{'help': 'which devices to operate on', 'action': 'append',
'metavar': 'DEVICE', 'default': None, }),
('--fstype', {'help': 'root partition filesystem type',
'choices': ['ext4', 'ext3'], 'default': 'ext4'}),
(('-t', '--target'),
{'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]',
'action': 'store', 'metavar': 'TARGET',
'default': os.environ.get('TARGET_MOUNT_POINT')}),
('--boot-fstype', {'help': 'boot partition filesystem type',
'choices': ['ext4', 'ext3'], 'default': None}),
('mode', {'help': 'meta-mode to use',
'choices': [CUSTOM, SIMPLE, SIMPLE_BOOT]}),
)
)
def block_meta(args):
# main entry point for the block-meta command.
state = util.load_command_environment()
cfg = config.load_command_config(args, state)
if args.mode == CUSTOM or cfg.get("storage") is not None:
meta_custom(args)
elif args.mode in (SIMPLE, SIMPLE_BOOT):
meta_simple(args)
else:
raise NotImplementedError("mode=%s is not implemented" % args.mode)
def logtime(msg, func, *args, **kwargs):
with util.LogTimer(LOG.debug, msg):
return func(*args, **kwargs)
def write_image_to_disk(source, dev):
"""
Write disk image to block device
"""
(devname, devnode) = block.get_dev_name_entry(dev)
util.subp(args=['sh', '-c',
('wget "$1" --progress=dot:mega -O - |'
'tar -SxOzf - | dd of="$2"'),
'--', source, devnode])
util.subp(['partprobe', devnode])
util.subp(['udevadm', 'settle'])
return block.get_root_device([devname, ])
def get_bootpt_cfg(cfg, enabled=False, fstype=None, root_fstype=None):
# 'cfg' looks like:
# enabled: boolean
# fstype: filesystem type (default to 'fstype')
# label: filesystem label (default to 'boot')
# parm enable can enable, but not disable
# parm fstype overrides cfg['fstype']
def_boot = (platform.machine() in ('aarch64',) and
not util.is_uefi_bootable())
ret = {'enabled': def_boot, 'fstype': None, 'label': 'boot'}
ret.update(cfg)
if enabled:
ret['enabled'] = True
if ret['enabled'] and not ret['fstype']:
if root_fstype:
ret['fstype'] = root_fstype
if fstype:
ret['fstype'] = fstype
return ret
def get_partition_format_type(cfg, machine=None, uefi_bootable=None):
if machine is None:
machine = platform.machine()
if uefi_bootable is None:
uefi_bootable = util.is_uefi_bootable()
cfgval = cfg.get('format', None)
if cfgval:
return cfgval
if uefi_bootable:
return 'uefi'
if machine in ['aarch64']:
return 'gpt'
elif machine.startswith('ppc64'):
return 'prep'
return "mbr"
def wipe_volume(path, wipe_type):
cmds = []
if wipe_type == "pvremove":
# We need to use --force --force in case it's already in a volgroup and
# pvremove doesn't want to remove it
cmds.append(["pvremove", "--force", "--force", "--yes", path])
cmds.append(["pvscan", "--cache"])
cmds.append(["vgscan", "--mknodes", "--cache"])
elif wipe_type == "zero":
cmds.append(["dd", "bs=512", "if=/dev/zero", "of=%s" % path])
elif wipe_type == "random":
cmds.append(["dd", "bs=512", "if=/dev/urandom", "of=%s" % path])
elif wipe_type == "superblock":
cmds.append(["sgdisk", "--zap-all", path])
else:
raise ValueError("wipe mode %s not supported" % wipe_type)
# Dd commands will likely exit with 1 when they run out of space. This
# is expected and not an issue. If pvremove is run and there is no label on
# the system, then it exits with 5. That is also okay, because we might be
# wiping something that is already blank
for cmd in cmds:
util.subp(cmd, rcs=[0, 1, 2, 5], capture=True)
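# Usage sketch (path assumed): wipe_volume('/dev/vdb1', 'superblock')
# runs 'sgdisk --zap-all /dev/vdb1', while wipe_type 'zero' would run
# 'dd bs=512 if=/dev/zero of=/dev/vdb1' until the device is full.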
def block_find_sysfs_path(devname):
# Look up any block device holders. Handle devices and partitions
# as devnames (vdb, md0, vdb7)
if not devname:
return []
sys_class_block = '/sys/class/block/'
basename = os.path.basename(devname)
# try without parent blockdevice, then prepend parent
paths = [
os.path.join(sys_class_block, basename),
os.path.join(sys_class_block,
re.split(r'\d+', basename)[0], basename),
]
# find path to devname directory in sysfs
devname_sysfs = None
for path in paths:
if os.path.exists(path):
devname_sysfs = path
if devname_sysfs is None:
err = ('No sysfs path to device:'
' {}'.format(devname))
LOG.error(err)
raise ValueError(err)
return devname_sysfs
def get_holders(devname):
devname_sysfs = block_find_sysfs_path(devname)
if devname_sysfs:
LOG.debug('Getting blockdev holders: {}'.format(devname_sysfs))
return os.listdir(os.path.join(devname_sysfs, 'holders'))
return []
def clear_holders(sys_block_path):
holders = os.listdir(os.path.join(sys_block_path, "holders"))
LOG.info("clear_holders running on '%s', with holders '%s'" %
(sys_block_path, holders))
for holder in holders:
# get path to holder in /sys/block, then clear it
try:
holder_realpath = os.path.realpath(
os.path.join(sys_block_path, "holders", holder))
clear_holders(holder_realpath)
except IOError as e:
# something might have already caused the holder to go away
if not util.is_file_not_found_exc(e):
raise
# detect what type of holder is using this volume and shut it down; need
# to find a more robust way of doing this detection
if "bcache" in sys_block_path:
# bcache device
part_devs = []
for part_dev in glob.glob(os.path.join(sys_block_path,
"slaves", "*", "dev")):
with open(part_dev, "r") as fp:
part_dev_id = fp.read().rstrip()
part_devs.append(
os.path.split(os.path.realpath(os.path.join("/dev/block",
part_dev_id)))[-1])
for cache_dev in glob.glob("/sys/fs/bcache/*/bdev*"):
for part_dev in part_devs:
if part_dev in os.path.realpath(cache_dev):
# This is our bcache device, stop it, wait for udev to
# settle
with open(os.path.join(os.path.split(cache_dev)[0],
"stop"), "w") as fp:
LOG.info("stopping: %s" % fp)
fp.write("1")
util.subp(["udevadm", "settle"])
break
for part_dev in part_devs:
wipe_volume(os.path.join("/dev", part_dev), "superblock")
if os.path.exists(os.path.join(sys_block_path, "bcache")):
# bcache device that isn't running, if it were, we would have found it
# when we looked for holders
try:
with open(os.path.join(sys_block_path, "bcache", "set", "stop"),
"w") as fp:
LOG.info("stopping: %s" % fp)
fp.write("1")
except IOError as e:
if not util.is_file_not_found_exc(e):
raise e
with open(os.path.join(sys_block_path, "bcache", "stop"),
"w") as fp:
LOG.info("stopping: %s" % fp)
fp.write("1")
util.subp(["udevadm", "settle"])
if os.path.exists(os.path.join(sys_block_path, "md")):
# md device
block_dev = os.path.join("/dev/", os.path.split(sys_block_path)[-1])
# if these fail it's okay, the array might not be assembled and that's
# fine
mdadm.mdadm_stop(block_dev)
mdadm.mdadm_remove(block_dev)
elif os.path.exists(os.path.join(sys_block_path, "dm")):
# Shut down any volgroups
with open(os.path.join(sys_block_path, "dm", "name"), "r") as fp:
name = fp.read().split('-')
util.subp(["lvremove", "--force", name[0].rstrip(), name[1].rstrip()],
rcs=[0, 5])
util.subp(["vgremove", name[0].rstrip()], rcs=[0, 5, 6])
def devsync(devpath):
util.subp(['partprobe', devpath], rcs=[0, 1])
util.subp(['udevadm', 'settle'])
for x in range(0, 10):
if os.path.exists(devpath):
return
else:
LOG.debug('Waiting on device path: {}'.format(devpath))
time.sleep(1)
raise OSError('Failed to find device at path: {}'.format(devpath))
def determine_partition_number(partition_id, storage_config):
vol = storage_config.get(partition_id)
partnumber = vol.get('number')
if vol.get('flag') == "logical":
if not partnumber:
partnumber = 5
for key, item in storage_config.items():
if item.get('type') == "partition" and \
item.get('device') == vol.get('device') and\
item.get('flag') == "logical":
if item.get('id') == vol.get('id'):
break
else:
partnumber += 1
else:
if not partnumber:
partnumber = 1
for key, item in storage_config.items():
if item.get('type') == "partition" and \
item.get('device') == vol.get('device'):
if item.get('id') == vol.get('id'):
break
else:
partnumber += 1
return partnumber
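# Illustrative numbering (config ids assumed): on an msdos disk, the
# first 'logical' partition without an explicit 'number' is assigned 5
# and later logicals on the same disk count up from there; primary
# partitions count up from 1 in storage_config order.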
def make_dname(volume, storage_config):
state = util.load_command_environment()
rules_dir = os.path.join(state['scratch'], "rules.d")
vol = storage_config.get(volume)
path = get_path_to_storage_volume(volume, storage_config)
ptuuid = None
dname = vol.get('name')
if vol.get('type') in ["partition", "disk"]:
(out, _err) = util.subp(["blkid", "-o", "export", path], capture=True,
rcs=[0, 2], retries=[1, 1, 1])
for line in out.splitlines():
if "PTUUID" in line or "PARTUUID" in line:
ptuuid = line.split('=')[-1]
break
# we may not always be able to find a unique identifier on devices with names
if not ptuuid and vol.get('type') in ["disk", "partition"]:
LOG.warning("Can't find a uuid for volume: {}. Skipping dname.".format(
dname))
return
rule = [
compose_udev_equality("SUBSYSTEM", "block"),
compose_udev_equality("ACTION", "add|change"),
]
if vol.get('type') == "disk":
rule.append(compose_udev_equality('ENV{DEVTYPE}', "disk"))
rule.append(compose_udev_equality('ENV{ID_PART_TABLE_UUID}', ptuuid))
elif vol.get('type') == "partition":
rule.append(compose_udev_equality('ENV{DEVTYPE}', "partition"))
dname = storage_config.get(vol.get('device')).get('name') + \
"-part%s" % determine_partition_number(volume, storage_config)
rule.append(compose_udev_equality('ENV{ID_PART_ENTRY_UUID}', ptuuid))
elif vol.get('type') == "raid":
md_data = mdadm.mdadm_query_detail(path)
md_uuid = md_data.get('MD_UUID')
rule.append(compose_udev_equality("ENV{MD_UUID}", md_uuid))
elif vol.get('type') == "bcache":
rule.append(compose_udev_equality("ENV{DEVNAME}", path))
elif vol.get('type') == "lvm_partition":
volgroup_name = storage_config.get(vol.get('volgroup')).get('name')
dname = "%s-%s" % (volgroup_name, dname)
rule.append(compose_udev_equality("ENV{DM_NAME}", dname))
rule.append("SYMLINK+=\"disk/by-dname/%s\"" % dname)
util.ensure_dir(rules_dir)
with open(os.path.join(rules_dir, volume), "w") as fp:
fp.write(', '.join(rule))
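# Sketch of a generated rule for a disk named 'maindisk' (name and
# ptuuid assumed, and assuming compose_udev_equality renders KEY=="val"):
# SUBSYSTEM=="block", ACTION=="add|change", ENV{DEVTYPE}=="disk",
# ENV{ID_PART_TABLE_UUID}=="<ptuuid>", SYMLINK+="disk/by-dname/maindisk"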
def get_path_to_storage_volume(volume, storage_config):
# Get path to block device for volume. Volume param should refer to id of
# volume in storage config
devsync_vol = None
vol = storage_config.get(volume)
if not vol:
raise ValueError("volume with id '%s' not found" % volume)
# Find path to block device
if vol.get('type') == "partition":
partnumber = determine_partition_number(vol.get('id'), storage_config)
disk_block_path = get_path_to_storage_volume(vol.get('device'),
storage_config)
volume_path = disk_block_path + str(partnumber)
devsync_vol = os.path.join(disk_block_path)
elif vol.get('type') == "disk":
# Get path to block device for disk. Device_id param should refer
# to id of device in storage config
if vol.get('serial'):
volume_path = block.lookup_disk(vol.get('serial'))
elif vol.get('path'):
volume_path = vol.get('path')
else:
raise ValueError("serial number or path to block dev must be \
specified to identify disk")
elif vol.get('type') == "lvm_partition":
# For lvm partitions, a directory in /dev/ should be present with the
# name of the volgroup the partition belongs to. We can simply append
# the id of the lvm partition to the path of that directory
volgroup = storage_config.get(vol.get('volgroup'))
if not volgroup:
raise ValueError("lvm volume group '%s' could not be found"
% vol.get('volgroup'))
volume_path = os.path.join("/dev/", volgroup.get('name'),
vol.get('name'))
elif vol.get('type') == "dm_crypt":
# For dm_crypted partitions, unencrypted block device is at
# /dev/mapper/<dm_name>
dm_name = vol.get('dm_name')
if not dm_name:
dm_name = vol.get('id')
volume_path = os.path.join("/dev", "mapper", dm_name)
elif vol.get('type') == "raid":
# For raid partitions, block device is at /dev/mdX
name = vol.get('name')
volume_path = os.path.join("/dev", name)
elif vol.get('type') == "bcache":
# For bcache setups, the only reliable way to determine the name of the
# block device is to look in all /sys/block/bcacheX/ dirs and see what
# block devs are in the slaves dir there. Then, those blockdevs can be
# checked against the kname of the devs in the config for the desired
# bcache device. This is not very elegant though
backing_device_kname = os.path.split(get_path_to_storage_volume(
vol.get('backing_device'), storage_config))[-1]
sys_path = list(filter(lambda x: backing_device_kname in x,
glob.glob("/sys/block/bcache*/slaves/*")))[0]
while "bcache" not in os.path.split(sys_path)[-1]:
sys_path = os.path.split(sys_path)[0]
volume_path = os.path.join("/dev", os.path.split(sys_path)[-1])
else:
raise NotImplementedError("cannot determine the path to storage \
volume '%s' with type '%s'" % (volume, vol.get('type')))
# sync devices
if not devsync_vol:
devsync_vol = volume_path
devsync(devsync_vol)
return volume_path
def disk_handler(info, storage_config):
ptable = info.get('ptable')
disk = get_path_to_storage_volume(info.get('id'), storage_config)
# Handle preserve flag
if info.get('preserve'):
if not ptable:
# Don't need to check state, return
return
# Check state of current ptable
try:
(out, _err) = util.subp(["blkid", "-o", "export", disk],
capture=True)
except util.ProcessExecutionError:
raise ValueError("disk '%s' has no readable partition table or \
cannot be accessed, but preserve is set to true, so cannot \
continue")
current_ptable = list(filter(lambda x: "PTTYPE" in x,
out.splitlines()))[0].split("=")[-1]
if current_ptable == "dos" and ptable != "msdos" or \
current_ptable == "gpt" and ptable != "gpt":
raise ValueError("disk '%s' does not have correct \
partition table, but preserve is set to true, so not \
creating table, so not creating table." % info.get('id'))
LOG.info("disk '%s' marked to be preserved, so keeping partition \
table")
return
# Wipe the disk
if info.get('wipe') and info.get('wipe') != "none":
# The disk has a label, clear all partitions
mdadm.mdadm_assemble(scan=True)
disk_kname = os.path.split(disk)[-1]
syspath_partitions = list(
os.path.split(prt)[0] for prt in
glob.glob("/sys/block/%s/*/partition" % disk_kname))
for partition in syspath_partitions:
clear_holders(partition)
with open(os.path.join(partition, "dev"), "r") as fp:
block_no = fp.read().rstrip()
partition_path = os.path.realpath(
os.path.join("/dev/block", block_no))
wipe_volume(partition_path, info.get('wipe'))
clear_holders("/sys/block/%s" % disk_kname)
wipe_volume(disk, info.get('wipe'))
# Create partition table on disk
if info.get('ptable'):
LOG.info("labeling device: '%s' with '%s' partition table", disk,
ptable)
if ptable == "gpt":
util.subp(["sgdisk", "--clear", disk])
elif ptable == "msdos":
util.subp(["parted", disk, "--script", "mklabel", "msdos"])
# Make the name if needed
if info.get('name'):
make_dname(info.get('id'), storage_config)
def getnumberoflogicaldisks(device, storage_config):
logicaldisks = 0
for key, item in storage_config.items():
if item.get('device') == device and item.get('flag') == "logical":
logicaldisks = logicaldisks + 1
return logicaldisks
def find_previous_partition(disk_id, part_id, storage_config):
last_partnum = None
for item_id, command in storage_config.items():
if item_id == part_id:
break
# skip anything not on this disk, not a 'partition' or 'extended'
if command['type'] != 'partition' or command['device'] != disk_id:
continue
if command.get('flag') == "extended":
continue
last_partnum = determine_partition_number(item_id, storage_config)
return last_partnum
def partition_handler(info, storage_config):
device = info.get('device')
size = info.get('size')
flag = info.get('flag')
disk_ptable = storage_config.get(device).get('ptable')
partition_type = None
if not device:
raise ValueError("device must be set for partition to be created")
if not size:
raise ValueError("size must be specified for partition to be created")
disk = get_path_to_storage_volume(device, storage_config)
partnumber = determine_partition_number(info.get('id'), storage_config)
disk_kname = os.path.split(
get_path_to_storage_volume(device, storage_config))[-1]
# consider the disk's logical sector size when calculating sectors
try:
prefix = "/sys/block/%s/queue/" % disk_kname
with open(prefix + "logical_block_size", "r") as f:
logical_block_size_bytes = int(f.readline())
except (IOError, OSError, ValueError):
# fall back to the common 512-byte logical sector size
logical_block_size_bytes = 512
if partnumber > 1:
if partnumber == 5 and disk_ptable == "msdos":
for key, item in storage_config.items():
if item.get('type') == "partition" and \
item.get('device') == device and \
item.get('flag') == "extended":
extended_part_no = determine_partition_number(
key, storage_config)
break
previous_partition = "/sys/block/%s/%s%s/" % \
(disk_kname, disk_kname, extended_part_no)
else:
pnum = find_previous_partition(device, info['id'], storage_config)
LOG.debug("previous partition number for '%s' found to be '%s'",
info.get('id'), pnum)
previous_partition = "/sys/block/%s/%s%s/" % \
(disk_kname, disk_kname, pnum)
with open(os.path.join(previous_partition, "size"), "r") as fp:
previous_size = int(fp.read())
with open(os.path.join(previous_partition, "start"), "r") as fp:
previous_start = int(fp.read())
# Align to 1M at the beginning of the disk and at logical partitions
alignment_offset = (1 << 20) // logical_block_size_bytes
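# Worked example: with 512-byte logical sectors the offset is
# (1 << 20) // 512 = 2048 sectors, so partition 1 starts at sector 2048
# (the conventional 1MiB boundary).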
if partnumber == 1:
# start of disk
offset_sectors = alignment_offset
else:
# further partitions
if disk_ptable == "gpt" or flag != "logical":
# msdos primary and any gpt part start after former partition end
offset_sectors = previous_start + previous_size
else:
# msdos extended/logical partitions
if flag == "logical":
if partnumber == 5:
# First logical partition
# start at extended partition start + alignment_offset
offset_sectors = previous_start + alignment_offset
else:
# Further logical partitions
# start at former logical partition end + alignment_offset
offset_sectors = (previous_start + previous_size +
alignment_offset)
length_bytes = util.human2bytes(size)
# start sector is part of the sectors that define the partitions size
# so length has to be "size in sectors - 1"
length_sectors = int(length_bytes / logical_block_size_bytes) - 1
# logical partitions can't share their start sector with the extended
# partition and logical partitions can't go head-to-head, so we have to
# realign and for that increase size as required
if info.get('flag') == "extended":
logdisks = getnumberoflogicaldisks(device, storage_config)
length_sectors = length_sectors + (logdisks * alignment_offset)
# Handle preserve flag
if info.get('preserve'):
return
elif storage_config.get(device).get('preserve'):
raise NotImplementedError("Partition '%s' is not marked to be \
preserved, but device '%s' is. At this time, preserving devices \
but not also the partitions on the devices is not supported, \
because of the possibility of damaging partitions intended to be \
preserved." % (info.get('id'), device))
# Set flag
# 'sgdisk --list-types'
sgdisk_flags = {"boot": 'ef00',
"lvm": '8e00',
"raid": 'fd00',
"bios_grub": 'ef02',
"prep": '4100',
"swap": '8200',
"home": '8302',
"linux": '8300'}
LOG.info("adding partition '%s' to disk '%s'" % (info.get('id'), device))
if disk_ptable == "msdos":
if flag in ["extended", "logical", "primary"]:
partition_type = flag
else:
partition_type = "primary"
cmd = ["parted", disk, "--script", "mkpart", partition_type,
"%ss" % offset_sectors, "%ss" % str(offset_sectors +
length_sectors)]
util.subp(cmd)
elif disk_ptable == "gpt":
if flag and flag in sgdisk_flags:
typecode = sgdisk_flags[flag]
else:
typecode = sgdisk_flags['linux']
cmd = ["sgdisk", "--new", "%s:%s:%s" % (partnumber, offset_sectors,
length_sectors + offset_sectors),
"--typecode=%s:%s" % (partnumber, typecode), disk]
util.subp(cmd)
else:
raise ValueError("parent partition has invalid partition table")
# Wipe the partition if told to do so
if info.get('wipe') and info.get('wipe') != "none":
wipe_volume(
get_path_to_storage_volume(info.get('id'), storage_config),
info.get('wipe'))
# Make the name if needed
if storage_config.get(device).get('name') and partition_type != 'extended':
make_dname(info.get('id'), storage_config)
def format_handler(info, storage_config):
volume = info.get('volume')
if not volume:
raise ValueError("volume must be specified for partition '%s'" %
info.get('id'))
# Get path to volume
volume_path = get_path_to_storage_volume(volume, storage_config)
# Handle preserve flag
if info.get('preserve'):
# Volume marked to be preserved, not formatting
return
# Make filesystem using block library
mkfs.mkfs_from_config(volume_path, info)
def mount_handler(info, storage_config):
state = util.load_command_environment()
path = info.get('path')
filesystem = storage_config.get(info.get('device'))
if not path and filesystem.get('fstype') != "swap":
raise ValueError("path to mountpoint must be specified")
volume = storage_config.get(filesystem.get('volume'))
# Get path to volume
volume_path = get_path_to_storage_volume(filesystem.get('volume'),
storage_config)
if filesystem.get('fstype') != "swap":
# Figure out what the mount point should be
path = path.lstrip("/")
mount_point = os.path.join(state['target'], path)
# Create mount point if does not exist
util.ensure_dir(mount_point)
# Mount volume
util.subp(['mount', volume_path, mount_point])
# Add volume to fstab
if state['fstab']:
with open(state['fstab'], "a") as fp:
if volume.get('type') in ["raid", "bcache",
"disk", "lvm_partition"]:
location = get_path_to_storage_volume(volume.get('id'),
storage_config)
elif volume.get('type') in ["partition", "dm_crypt"]:
location = "UUID=%s" % block.get_volume_uuid(volume_path)
else:
raise ValueError("cannot write fstab for volume type '%s'" %
volume.get("type"))
if filesystem.get('fstype') == "swap":
path = "none"
options = "sw"
else:
path = "/%s" % path
options = "defaults"
if filesystem.get('fstype') in ["fat", "fat12", "fat16", "fat32",
"fat64"]:
fstype = "vfat"
else:
fstype = filesystem.get('fstype')
fp.write("%s %s %s %s 0 0\n" % (location, path, fstype, options))
else:
LOG.info("fstab not in environment, so not writing")
def lvm_volgroup_handler(info, storage_config):
devices = info.get('devices')
device_paths = []
name = info.get('name')
if not devices:
raise ValueError("devices for volgroup '%s' must be specified" %
info.get('id'))
if not name:
raise ValueError("name for volgroups needs to be specified")
for device_id in devices:
device = storage_config.get(device_id)
if not device:
raise ValueError("device '%s' could not be found in storage config"
% device_id)
device_paths.append(get_path_to_storage_volume(device_id,
storage_config))
# Handle preserve flag
if info.get('preserve'):
# LVM will probably be offline, so start it
util.subp(["vgchange", "-a", "y"])
# Verify that volgroup exists and contains all specified devices
current_paths = []
(out, _err) = util.subp(["pvdisplay", "-C", "--separator", "=", "-o",
"vg_name,pv_name", "--noheadings"],
capture=True)
for line in out.splitlines():
if name in line:
current_paths.append(line.split("=")[-1])
if set(current_paths) != set(device_paths):
raise ValueError("volgroup '%s' marked to be preserved, but does \
not exist or does not contain the right physical \
volumes" % info.get('id'))
else:
# Create vgrcreate command and run
cmd = ["vgcreate", name]
cmd.extend(device_paths)
util.subp(cmd)
def lvm_partition_handler(info, storage_config):
volgroup = storage_config.get(info.get('volgroup')).get('name')
name = info.get('name')
if not volgroup:
raise ValueError("lvm volgroup for lvm partition must be specified")
if not name:
raise ValueError("lvm partition name must be specified")
# Handle preserve flag
if info.get('preserve'):
(out, _err) = util.subp(["lvdisplay", "-C", "--separator", "=", "-o",
"lv_name,vg_name", "--noheadings"],
capture=True)
found = False
for line in out.splitlines():
if name in line:
if volgroup == line.split("=")[-1]:
found = True
break
if not found:
raise ValueError("lvm partition '%s' marked to be preserved, but \
does not exist or does not match storage \
configuration" % info.get('id'))
elif storage_config.get(info.get('volgroup')).get('preserve'):
raise NotImplementedError("Lvm Partition '%s' is not marked to be \
preserved, but volgroup '%s' is. At this time, preserving \
volgroups but not also the lvm partitions on the volgroup is \
not supported, because of the possibility of damaging lvm \
partitions intended to be preserved." % (info.get('id'), volgroup))
else:
cmd = ["lvcreate", volgroup, "-n", name]
if info.get('size'):
cmd.extend(["-L", info.get('size')])
else:
cmd.extend(["-l", "100%FREE"])
util.subp(cmd)
if info.get('ptable'):
raise ValueError("Partition tables on top of lvm logical volumes is \
not supported")
make_dname(info.get('id'), storage_config)
def dm_crypt_handler(info, storage_config):
state = util.load_command_environment()
volume = info.get('volume')
key = info.get('key')
keysize = info.get('keysize')
cipher = info.get('cipher')
dm_name = info.get('dm_name')
if not volume:
raise ValueError("volume for cryptsetup to operate on must be \
specified")
if not key:
raise ValueError("encryption key must be specified")
if not dm_name:
dm_name = info.get('id')
volume_path = get_path_to_storage_volume(volume, storage_config)
# TODO: this is insecure, find better way to do this
tmp_keyfile = tempfile.mkstemp()[1]
fp = open(tmp_keyfile, "w")
fp.write(key)
fp.close()
cmd = ["cryptsetup"]
if cipher:
cmd.extend(["--cipher", cipher])
if keysize:
cmd.extend(["--key-size", keysize])
cmd.extend(["luksFormat", volume_path, tmp_keyfile])
util.subp(cmd)
cmd = ["cryptsetup", "open", "--type", "luks", volume_path, dm_name,
"--key-file", tmp_keyfile]
util.subp(cmd)
os.remove(tmp_keyfile)
# A crypttab will be created in the same directory as the fstab in the
# configuration. This will then be copied onto the system later
if state['fstab']:
crypt_tab_location = os.path.join(os.path.split(state['fstab'])[0],
"crypttab")
uuid = block.get_volume_uuid(volume_path)
with open(crypt_tab_location, "a") as fp:
fp.write("%s UUID=%s none luks\n" % (dm_name, uuid))
else:
LOG.info("fstab configuration is not present in environment, so \
cannot locate an appropriate directory to write crypttab in \
so not writing crypttab")
def raid_handler(info, storage_config):
state = util.load_command_environment()
devices = info.get('devices')
raidlevel = info.get('raidlevel')
spare_devices = info.get('spare_devices')
md_devname = block.dev_path(info.get('name'))
if not devices:
raise ValueError("devices for raid must be specified")
if raidlevel not in ['linear', 'raid0', 0, 'stripe', 'raid1', 1, 'mirror',
'raid4', 4, 'raid5', 5, 'raid6', 6, 'raid10', 10]:
raise ValueError("invalid raidlevel '%s'" % raidlevel)
if raidlevel in ['linear', 'raid0', 0, 'stripe']:
if spare_devices:
raise ValueError("spareunsupported in raidlevel '%s'" % raidlevel)
device_paths = list(get_path_to_storage_volume(dev, storage_config) for
dev in devices)
spare_device_paths = []
if spare_devices:
spare_device_paths = list(get_path_to_storage_volume(dev,
storage_config) for dev in spare_devices)
# Handle preserve flag
if info.get('preserve'):
# check if the array is already up, if not try to assemble
if not mdadm.md_check(md_devname, raidlevel,
device_paths, spare_device_paths):
LOG.info("assembling preserved raid for "
"{}".format(md_devname))
mdadm.mdadm_assemble(md_devname, device_paths, spare_device_paths)
# try again after attempting to assemble
if not mdadm.md_check(md_devname, raidlevel,
device_paths, spare_device_paths):
raise ValueError("Unable to confirm preserved raid array: "
" {}".format(md_devname))
# raid is all OK
return
mdadm.mdadm_create(md_devname, raidlevel,
device_paths, spare_device_paths,
info.get('mdname', ''))
# Make dname rule for this dev
make_dname(info.get('id'), storage_config)
# A mdadm.conf will be created in the same directory as the fstab in the
# configuration. This will then be copied onto the installed system later.
# The file must also be written onto the running system to enable it to run
# mdadm --assemble and continue installation
if state['fstab']:
mdadm_location = os.path.join(os.path.split(state['fstab'])[0],
"mdadm.conf")
mdadm_scan_data = mdadm.mdadm_detail_scan()
with open(mdadm_location, "w") as fp:
fp.write(mdadm_scan_data)
else:
LOG.info("fstab configuration is not present in the environment, so \
cannot locate an appropriate directory to write mdadm.conf in, \
so not writing mdadm.conf")
# If ptable is specified, call disk_handler on this mdadm device to create
# the table
if info.get('ptable'):
disk_handler(info, storage_config)
def bcache_handler(info, storage_config):
backing_device = get_path_to_storage_volume(info.get('backing_device'),
storage_config)
cache_device = get_path_to_storage_volume(info.get('cache_device'),
storage_config)
cache_mode = info.get('cache_mode', None)
if not backing_device or not cache_device:
raise ValueError("backing device and cache device for bcache"
" must be specified")
# The bcache module is not loaded when bcache is installed by apt-get, so
# we will load it now
util.subp(["modprobe", "bcache"])
if cache_device:
# /sys/class/block/XXX/YYY/
cache_device_sysfs = block_find_sysfs_path(cache_device)
if os.path.exists(os.path.join(cache_device_sysfs, "bcache")):
# read in cset uuid from cache device
(out, err) = util.subp(["bcache-super-show", cache_device],
capture=True)
LOG.debug('out=[{}]'.format(out))
[cset_uuid] = [line.split()[-1] for line in out.split("\n")
if line.startswith('cset.uuid')]
else:
# make the cache device, extracting cacheset uuid
(out, err) = util.subp(["make-bcache", "-C", cache_device],
capture=True)
LOG.debug('out=[{}]'.format(out))
[cset_uuid] = [line.split()[-1] for line in out.split("\n")
if line.startswith('Set UUID:')]
if backing_device:
backing_device_sysfs = block_find_sysfs_path(backing_device)
if not os.path.exists(os.path.join(backing_device_sysfs, "bcache")):
util.subp(["make-bcache", "-B", backing_device])
# Some versions of bcache-tools will register the bcache device as soon as
# we run make-bcache using udev rules, so wait for udev to settle, then try
# to locate the dev, on older versions we need to register it manually
# though
devpath = None
cur_id = info.get('id')
try:
util.subp(["udevadm", "settle"])
devpath = get_path_to_storage_volume(cur_id, storage_config)
except (OSError, IndexError):
# Register
for path in [backing_device, cache_device]:
fp = open("/sys/fs/bcache/register", "w")
fp.write(path)
fp.close()
devpath = get_path_to_storage_volume(cur_id, storage_config)
syspath = block.sys_block_path(devpath)
# if we specify both then we need to attach backing to cache
if cache_device and backing_device:
if cset_uuid:
LOG.info("Attaching backing device to cacheset: "
"{} -> {} cset.uuid: {}".format(backing_device,
cache_device,
cset_uuid))
attach = os.path.join(backing_device_sysfs,
"bcache",
"attach")
with open(attach, "w") as fp:
fp.write(cset_uuid)
else:
LOG.error("Invalid cset_uuid: '%s'" % cset_uuid)
raise Exception("Invalid cset_uuid: '%s'" % cset_uuid)
if cache_mode:
# find the actual bcache device name via sysfs using the
# backing device's holders directory.
holders = get_holders(backing_device)
if len(holders) != 1:
err = ('Invalid number of holding devices:'
' {}'.format(holders))
LOG.error(err)
raise ValueError(err)
[bcache_dev] = holders
LOG.info("Setting cache_mode on {} to {}".format(bcache_dev,
cache_mode))
cache_mode_file = os.path.join(syspath, "bcache/cache_mode")
with open(cache_mode_file, "w") as fp:
fp.write(cache_mode)
if info.get('name'):
# Make dname rule for this dev
make_dname(info.get('id'), storage_config)
if info.get('ptable'):
raise ValueError("Partition tables on top of lvm logical volumes is \
not supported")
def extract_storage_ordered_dict(config):
storage_config = config.get('storage', {})
if not storage_config:
raise ValueError("no 'storage' entry in config")
scfg = storage_config.get('config')
if not scfg:
raise ValueError("invalid storage config data")
# Since storage config will often have to be searched for a value by its
# id, and this can become very inefficient as storage_config grows, a dict
# will be generated with the id of each component of the storage_config as
# its index and the component of storage_config as its value
return OrderedDict((d["id"], d) for (i, d) in enumerate(scfg))
def meta_custom(args):
"""Does custom partitioning based on the layout provided in the config
file. Section with the name storage contains information on which
partitions on which disks to create. It also contains information about
overlays (raid, lvm, bcache) which need to be setup.
"""
command_handlers = {
'disk': disk_handler,
'partition': partition_handler,
'format': format_handler,
'mount': mount_handler,
'lvm_volgroup': lvm_volgroup_handler,
'lvm_partition': lvm_partition_handler,
'dm_crypt': dm_crypt_handler,
'raid': raid_handler,
'bcache': bcache_handler
}
state = util.load_command_environment()
cfg = config.load_command_config(args, state)
storage_config_dict = extract_storage_ordered_dict(cfg)
for item_id, command in storage_config_dict.items():
handler = command_handlers.get(command['type'])
if not handler:
raise ValueError("unknown command type '%s'" % command['type'])
try:
handler(command, storage_config_dict)
except Exception as error:
LOG.error("An error occured handling '%s': %s - %s" %
(item_id, type(error).__name__, error))
raise
return 0
def meta_simple(args):
"""Creates a root partition. If args.mode == SIMPLE_BOOT, it will also
create a separate /boot partition.
"""
state = util.load_command_environment()
cfg = config.load_command_config(args, state)
if args.target is not None:
state['target'] = args.target
if state['target'] is None:
sys.stderr.write("Unable to find target. "
"Use --target or set TARGET_MOUNT_POINT\n")
sys.exit(2)
devices = args.devices
if devices is None:
devices = cfg.get('block-meta', {}).get('devices', [])
bootpt = get_bootpt_cfg(
cfg.get('block-meta', {}).get('boot-partition', {}),
enabled=args.mode == SIMPLE_BOOT, fstype=args.boot_fstype,
root_fstype=args.fstype)
ptfmt = get_partition_format_type(cfg.get('block-meta', {}))
# Remove duplicates but maintain ordering.
devices = list(OrderedDict.fromkeys(devices))
# Multipath devices might be automatically assembled if multipath-tools
# package is available in the installation environment. We need to stop
# all multipath devices to exclusively use one of paths as a target disk.
block.stop_all_unused_multipath_devices()
if len(devices) == 0:
devices = block.get_installable_blockdevs()
LOG.warn("'%s' mode, no devices given. unused list: %s",
args.mode, devices)
# Check if the list of installable block devices is still empty after
# checking for block devices and filtering out the removable ones. In
# this case we may have a system which has its harddrives reported by
# lsblk incorrectly. In this case we search for installable
# blockdevices that are removable as a last resort before raising an
# exception.
if len(devices) == 0:
devices = block.get_installable_blockdevs(include_removable=True)
if len(devices) == 0:
# Fail gracefully if no devices are found, still.
raise Exception("No valid target devices found that curtin "
"can install on.")
else:
LOG.warn("No non-removable, installable devices found. List "
"populated with removable devices allowed: %s",
devices)
if len(devices) > 1:
if args.devices is not None:
LOG.warn("'%s' mode but multiple devices given. "
"using first found", args.mode)
available = [f for f in devices
if block.is_valid_device(f)]
target = sorted(available)[0]
LOG.warn("mode is '%s'. multiple devices given. using '%s' "
"(first available)", args.mode, target)
else:
target = devices[0]
if not block.is_valid_device(target):
raise Exception("target device '%s' is not a valid device" % target)
(devname, devnode) = block.get_dev_name_entry(target)
LOG.info("installing in '%s' mode to '%s'", args.mode, devname)
sources = cfg.get('sources', {})
dd_images = util.get_dd_images(sources)
if len(dd_images):
# we have at least one dd-able image
# we will only take the first one
rootdev = write_image_to_disk(dd_images[0], devname)
util.subp(['mount', rootdev, state['target']])
return 0
# helper partition will forcibly set up partition there
ptcmd = ['partition', '--format=' + ptfmt]
if bootpt['enabled']:
ptcmd.append('--boot')
ptcmd.append(devnode)
if bootpt['enabled'] and ptfmt in ("uefi", "prep"):
raise ValueError("format=%s with boot partition not supported" % ptfmt)
bootdev_ptnum = None
rootdev_ptnum = None
bootdev = None
if bootpt['enabled']:
bootdev_ptnum = 1
rootdev_ptnum = 2
else:
if ptfmt == "prep":
rootdev_ptnum = 2
else:
rootdev_ptnum = 1
logtime("creating partition with: %s" % ' '.join(ptcmd),
util.subp, ptcmd)
ptpre = ""
if not os.path.exists("%s%s" % (devnode, rootdev_ptnum)):
# perhaps the device names its partitions like /dev/<devnode>p<number>
if os.path.exists("%sp%s" % (devnode, rootdev_ptnum)):
ptpre = "p"
else:
LOG.warn("root device %s%s did not exist, expecting failure",
devnode, rootdev_ptnum)
if bootdev_ptnum:
bootdev = "%s%s%s" % (devnode, ptpre, bootdev_ptnum)
if ptfmt == "uefi":
# assumed / required from the partitioner pt_uefi
uefi_ptnum = "15"
uefi_label = "uefi-boot"
uefi_dev = "%s%s%s" % (devnode, ptpre, uefi_ptnum)
rootdev = "%s%s%s" % (devnode, ptpre, rootdev_ptnum)
LOG.debug("rootdev=%s bootdev=%s fmt=%s bootpt=%s",
rootdev, bootdev, ptfmt, bootpt)
# mkfs for root partition first and mount
cmd = ['mkfs.%s' % args.fstype, '-q', '-L', 'cloudimg-rootfs', rootdev]
logtime(' '.join(cmd), util.subp, cmd)
util.subp(['mount', rootdev, state['target']])
if bootpt['enabled']:
# create 'boot' directory in state['target']
boot_dir = os.path.join(state['target'], 'boot')
util.subp(['mkdir', boot_dir])
# mkfs for boot partition and mount
cmd = ['mkfs.%s' % bootpt['fstype'],
'-q', '-L', bootpt['label'], bootdev]
logtime(' '.join(cmd), util.subp, cmd)
util.subp(['mount', bootdev, boot_dir])
if ptfmt == "uefi":
uefi_dir = os.path.join(state['target'], 'boot', 'efi')
util.ensure_dir(uefi_dir)
util.subp(['mount', uefi_dev, uefi_dir])
if state['fstab']:
with open(state['fstab'], "w") as fp:
if bootpt['enabled']:
fp.write("LABEL=%s /boot %s defaults 0 0\n" %
(bootpt['label'], bootpt['fstype']))
if ptfmt == "uefi":
# label created in helpers/partition for uefi
fp.write("LABEL=%s /boot/efi vfat defaults 0 0\n" %
uefi_label)
fp.write("LABEL=%s / %s defaults 0 0\n" %
('cloudimg-rootfs', args.fstype))
else:
LOG.info("fstab not in environment, so not writing")
return 0
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, block_meta)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/curthooks.py
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import copy
import glob
import os
import platform
import re
import sys
import shutil
import textwrap
from curtin import config
from curtin import block
from curtin import futil
from curtin.log import LOG
from curtin import swap
from curtin import util
from curtin import net
from . import populate_one_subcmd
CMD_ARGUMENTS = (
((('-t', '--target'),
{'help': 'operate on target. default is env[TARGET_MOUNT_POINT]',
'action': 'store', 'metavar': 'TARGET', 'default': None}),
(('-c', '--config'),
{'help': 'operate on config. default is env[CONFIG]',
'action': 'store', 'metavar': 'CONFIG', 'default': None}),
)
)
KERNEL_MAPPING = {
'precise': {
'3.2.0': '',
'3.5.0': '-lts-quantal',
'3.8.0': '-lts-raring',
'3.11.0': '-lts-saucy',
'3.13.0': '-lts-trusty',
},
'trusty': {
'3.13.0': '',
'3.16.0': '-lts-utopic',
'3.19.0': '-lts-vivid',
'4.2.0': '-lts-wily',
'4.4.0': '-lts-xenial',
},
'xenial': {
'4.3.0': '', # development release has 4.3, release will have 4.4
'4.4.0': '',
}
}
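# Illustrative example (not in the original source): install_kernel() below
# combines the running kernel's flavor with a mapping suffix to choose a
# package. On a trusty host running 3.19.0-25-generic:
#
#   version, abi, flavor = "3.19.0-25-generic".split('-', 2)
#   # version="3.19.0", abi="25", flavor="generic"
#   map_suffix = KERNEL_MAPPING['trusty']['3.19.0']  # '-lts-vivid'
#   package = "linux-{flavor}{map_suffix}".format(
#       flavor=flavor, map_suffix=map_suffix)  # 'linux-generic-lts-vivid'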
def write_files(cfg, target):
# this takes 'write_files' entry in config and writes files in the target
# config entry example:
# f1:
# path: /file1
# content: !!binary |
# f0VMRgIBAQAAAAAAAAAAAAIAPgABAAAAwARAAAAAAABAAAAAAAAAAJAVAAAAAAA
# f2: {path: /file2, content: "foobar", permissions: '0666'}
if 'write_files' not in cfg:
return
for (key, info) in cfg.get('write_files').items():
if not info.get('path'):
LOG.warn("Warning, write_files[%s] had no 'path' entry", key)
continue
futil.write_finfo(path=target + os.path.sep + info['path'],
content=info.get('content', ''),
owner=info.get('owner', "-1:-1"),
perms=info.get('permissions',
info.get('perms', "0644")))
def apt_config(cfg, target):
# cfg['apt_proxy']
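    # Illustrative example (proxy URL hypothetical): a config of
    #
    #   apt_proxy: http://squid.example.com:3128
    #
    # results in /etc/apt/apt.conf.d/90curtin-aptproxy containing:
    #
    #   Acquire::HTTP::Proxy "http://squid.example.com:3128";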
proxy_cfg_path = os.path.sep.join(
[target, '/etc/apt/apt.conf.d/90curtin-aptproxy'])
if cfg.get('apt_proxy'):
util.write_file(
proxy_cfg_path,
content='Acquire::HTTP::Proxy "%s";\n' % cfg['apt_proxy'])
else:
if os.path.isfile(proxy_cfg_path):
os.unlink(proxy_cfg_path)
# cfg['apt_mirrors']
# apt_mirrors:
# ubuntu_archive: http://local.archive/ubuntu
# ubuntu_security: http://local.archive/ubuntu
sources_list = os.path.sep.join([target, '/etc/apt/sources.list'])
if (isinstance(cfg.get('apt_mirrors'), dict) and
os.path.isfile(sources_list)):
repls = [
('ubuntu_archive', r'http://\S*[.]*archive.ubuntu.com/\S*'),
('ubuntu_security', r'http://security.ubuntu.com/\S*'),
]
content = None
for name, regex in repls:
mirror = cfg['apt_mirrors'].get(name)
if not mirror:
continue
if content is None:
with open(sources_list) as fp:
content = fp.read()
util.write_file(sources_list + ".dist", content)
content = re.sub(regex, mirror + " ", content)
if content is not None:
util.write_file(sources_list, content)
def disable_overlayroot(cfg, target):
    # cloud images ship with overlayroot enabled; installed systems need it
    # disabled
disable = cfg.get('disable_overlayroot', True)
local_conf = os.path.sep.join([target, 'etc/overlayroot.local.conf'])
if disable and os.path.exists(local_conf):
LOG.debug("renaming %s to %s", local_conf, local_conf + ".old")
shutil.move(local_conf, local_conf + ".old")
def clean_cloud_init(target):
flist = glob.glob(
os.path.sep.join([target, "/etc/cloud/cloud.cfg.d/*dpkg*"]))
LOG.debug("cleaning cloud-init config from: %s" % flist)
for dpkg_cfg in flist:
os.unlink(dpkg_cfg)
def install_kernel(cfg, target):
kernel_cfg = cfg.get('kernel', {'package': None,
'fallback-package': None,
'mapping': {}})
    if kernel_cfg is not None:
        kernel_package = kernel_cfg.get('package')
        kernel_fallback = kernel_cfg.get('fallback-package')
    else:
        # guard against an explicit 'kernel: null' in config, which would
        # otherwise crash on kernel_cfg.get('mapping') below
        kernel_cfg = {}
        kernel_package = None
        kernel_fallback = None
mapping = copy.deepcopy(KERNEL_MAPPING)
config.merge_config(mapping, kernel_cfg.get('mapping', {}))
with util.RunInChroot(target) as in_chroot:
if kernel_package:
util.install_packages([kernel_package], target=target)
return
        # uname[2] is the kernel release (e.g. 3.16.0-7-generic)
# version gets X.Y.Z, flavor gets anything after second '-'.
kernel = os.uname()[2]
codename, err = in_chroot(['lsb_release', '--codename', '--short'],
capture=True)
codename = codename.strip()
version, abi, flavor = kernel.split('-', 2)
try:
map_suffix = mapping[codename][version]
except KeyError:
LOG.warn("Couldn't detect kernel package to install for %s."
% kernel)
if kernel_fallback is not None:
util.install_packages([kernel_fallback])
return
package = "linux-{flavor}{map_suffix}".format(
flavor=flavor, map_suffix=map_suffix)
util.apt_update(target)
out, err = in_chroot(['apt-cache', 'search', package], capture=True)
if (len(out.strip()) > 0 and
not util.has_pkg_installed(package, target)):
util.install_packages([package], target=target)
else:
LOG.warn("Tried to install kernel %s but package not found."
% package)
if kernel_fallback is not None:
util.install_packages([kernel_fallback], target=target)
def apply_debconf_selections(cfg, target):
# debconf_selections:
# set1: |
# cloud-init cloud-init/datasources multiselect MAAS
# set2: pkg pkg/value string bar
selsets = cfg.get('debconf_selections')
if not selsets:
LOG.debug("debconf_selections was not set in config")
return
# for each entry in selections, chroot and apply them.
# keep a running total of packages we've seen.
pkgs_cfgd = set()
for key, content in selsets.items():
LOG.debug("setting for %s, %s" % (key, content))
util.subp(['chroot', target, 'debconf-set-selections'],
data=content.encode())
for line in content.splitlines():
if line.startswith("#"):
continue
pkg = re.sub(r"[:\s].*", "", line)
pkgs_cfgd.add(pkg)
pkgs_installed = get_installed_packages(target)
LOG.debug("pkgs_cfgd: %s" % pkgs_cfgd)
LOG.debug("pkgs_installed: %s" % pkgs_installed)
need_reconfig = pkgs_cfgd.intersection(pkgs_installed)
if len(need_reconfig) == 0:
LOG.debug("no need for reconfig")
return
    # For packages that are already installed, preseeding only populates
    # the debconf database; on a subsequent dpkg-reconfigure the existing
    # filesystem configuration would still be preferred. So we have to
    # "know" how to unconfigure certain packages before reconfiguring them.
unhandled = []
to_config = []
for pkg in need_reconfig:
if pkg in CONFIG_CLEANERS:
LOG.debug("unconfiguring %s" % pkg)
CONFIG_CLEANERS[pkg](target)
to_config.append(pkg)
else:
unhandled.append(pkg)
if len(unhandled):
LOG.warn("The following packages were installed and preseeded, "
"but cannot be unconfigured: %s", unhandled)
util.subp(['chroot', target, 'dpkg-reconfigure',
'--frontend=noninteractive'] +
list(to_config), data=None)
def get_installed_packages(target=None):
cmd = []
if target is not None:
cmd = ['chroot', target]
cmd.extend(['dpkg-query', '--list'])
(out, _err) = util.subp(cmd, capture=True)
if isinstance(out, bytes):
out = out.decode()
pkgs_inst = set()
for line in out.splitlines():
try:
(state, pkg, other) = line.split(None, 2)
except ValueError:
continue
if state.startswith("hi") or state.startswith("ii"):
pkgs_inst.add(re.sub(":.*", "", pkg))
return pkgs_inst
def setup_grub(cfg, target):
# target is the path to the mounted filesystem
# FIXME: these methods need moving to curtin.block
# and using them from there rather than commands.block_meta
from curtin.commands.block_meta import (extract_storage_ordered_dict,
get_path_to_storage_volume)
grubcfg = cfg.get('grub', {})
# copy legacy top level name
if 'grub_install_devices' in cfg and 'install_devices' not in grubcfg:
grubcfg['install_devices'] = cfg['grub_install_devices']
# if there is storage config, look for devices tagged with 'grub_device'
storage_cfg_odict = None
try:
storage_cfg_odict = extract_storage_ordered_dict(cfg)
    except ValueError:
        pass
if storage_cfg_odict:
storage_grub_devices = []
for item_id, item in storage_cfg_odict.items():
if not item.get('grub_device'):
continue
LOG.debug("checking: %s", item)
storage_grub_devices.append(
get_path_to_storage_volume(item_id, storage_cfg_odict))
if len(storage_grub_devices) > 0:
grubcfg['install_devices'] = storage_grub_devices
LOG.debug("install_devices: %s", grubcfg.get('install_devices'))
if 'install_devices' in grubcfg:
instdevs = grubcfg.get('install_devices')
if isinstance(instdevs, str):
instdevs = [instdevs]
if instdevs is None:
LOG.debug("grub installation disabled by config")
else:
        # If no install_devices were found then we try to do the right
        # thing. That right thing is basically installing to all block
        # devices that back a mounted filesystem. On powerpc, though, it
        # means finding PReP partitions.
devs = block.get_devices_for_mp(target)
blockdevs = set()
for maybepart in devs:
try:
(blockdev, part) = block.get_blockdev_for_partition(maybepart)
blockdevs.add(blockdev)
            except ValueError:
# if there is no syspath for this device such as a lvm
# or raid device, then a ValueError is raised here.
LOG.debug("failed to find block device for %s", maybepart)
if platform.machine().startswith("ppc64"):
            # assume we want partitions of type 4100 (PReP). The snippet
            # here prints the device path of each such partition (device
            # name plus partition number).
shnip = textwrap.dedent("""
export LANG=C;
for d in "$@"; do
sgdisk "$d" --print |
awk "\$6 == prep { print d \$1 }" "d=$d" prep=4100
done
""")
try:
out, err = util.subp(
['sh', '-c', shnip, '--'] + list(blockdevs),
capture=True)
instdevs = str(out).splitlines()
if not instdevs:
LOG.warn("No power grub target partitions found!")
instdevs = None
except util.ProcessExecutionError as e:
LOG.warn("Failed to find power grub partitions: %s", e)
instdevs = None
else:
instdevs = list(blockdevs)
# UEFI requires grub-efi-{arch}. If a signed version of that package
# exists then it will be installed.
if util.is_uefi_bootable():
arch = util.get_architecture()
pkgs = ['grub-efi-%s' % arch]
# Architecture might support a signed UEFI loader
uefi_pkg_signed = 'grub-efi-%s-signed' % arch
if util.has_pkg_available(uefi_pkg_signed):
pkgs.append(uefi_pkg_signed)
# AMD64 has shim-signed for SecureBoot support
if arch == "amd64":
pkgs.append("shim-signed")
# Install the UEFI packages needed for the architecture
util.install_packages(pkgs, target=target)
env = os.environ.copy()
replace_default = grubcfg.get('replace_linux_default', True)
if str(replace_default).lower() in ("0", "false"):
env['REPLACE_GRUB_LINUX_DEFAULT'] = "0"
else:
env['REPLACE_GRUB_LINUX_DEFAULT'] = "1"
if instdevs:
instdevs = [block.get_dev_name_entry(i)[1] for i in instdevs]
else:
instdevs = ["none"]
LOG.debug("installing grub to %s [replace_default=%s]",
instdevs, replace_default)
with util.ChrootableTarget(target):
args = ['install-grub']
if util.is_uefi_bootable():
args.append("--uefi")
if grubcfg.get('update_nvram', False):
LOG.debug("GRUB UEFI enabling NVRAM updates")
args.append("--update-nvram")
else:
LOG.debug("NOT enabling UEFI nvram updates")
LOG.debug("Target system may not boot")
args.append(target)
util.subp(args + instdevs, env=env)
def update_initramfs(target, all_kernels=False):
cmd = ['update-initramfs', '-u']
if all_kernels:
cmd.extend(['-k', 'all'])
with util.RunInChroot(target) as in_chroot:
in_chroot(cmd)
def copy_fstab(fstab, target):
if not fstab:
LOG.warn("fstab variable not in state, not copying fstab")
return
shutil.copy(fstab, os.path.sep.join([target, 'etc/fstab']))
def copy_crypttab(crypttab, target):
if not crypttab:
LOG.warn("crypttab config must be specified, not copying")
return
shutil.copy(crypttab, os.path.sep.join([target, 'etc/crypttab']))
def copy_mdadm_conf(mdadm_conf, target):
if not mdadm_conf:
LOG.warn("mdadm config must be specified, not copying")
return
shutil.copy(mdadm_conf, os.path.sep.join([target,
'etc/mdadm/mdadm.conf']))
def apply_networking(target, state):
netstate = state.get('network_state')
netconf = state.get('network_config')
interfaces = state.get('interfaces')
def is_valid_src(infile):
with open(infile, 'r') as fp:
content = fp.read()
if len(content.split('\n')) > 1:
return True
return False
ns = None
if is_valid_src(netstate):
LOG.debug("applying network_state")
ns = net.network_state.from_state_file(netstate)
elif is_valid_src(netconf):
LOG.debug("applying network_config")
ns = net.parse_net_config(netconf)
if ns is not None:
net.render_network_state(target=target, network_state=ns)
else:
LOG.debug("copying interfaces")
copy_interfaces(interfaces, target)
def copy_interfaces(interfaces, target):
if not interfaces:
LOG.warn("no interfaces file to copy!")
return
eni = os.path.sep.join([target, 'etc/network/interfaces'])
shutil.copy(interfaces, eni)
def copy_dname_rules(rules_d, target):
if not rules_d:
LOG.warn("no udev rules directory to copy")
return
for rule in os.listdir(rules_d):
target_file = os.path.join(
target, "etc/udev/rules.d", "%s.rules" % rule)
shutil.copy(os.path.join(rules_d, rule), target_file)
def restore_dist_interfaces(cfg, target):
# cloud images have a link of /etc/network/interfaces into /run
eni = os.path.sep.join([target, 'etc/network/interfaces'])
if not cfg.get('restore_dist_interfaces', True):
return
rp = os.path.realpath(eni)
if (os.path.exists(eni + ".dist") and
(rp.startswith("/run") or rp.startswith(target + "/run"))):
LOG.debug("restoring dist interfaces, existing link pointed to /run")
shutil.move(eni, eni + ".old")
shutil.move(eni + ".dist", eni)
def add_swap(cfg, target, fstab):
# add swap file per cfg to filesystem root at target. update fstab.
#
# swap:
# filename: 'swap.img',
# size: None # (or 1G)
# maxsize: 2G
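    # Illustrative example (values hypothetical): a config of
    #
    #   swap:
    #     filename: swap.img
    #     size: 1G
    #
    # ends up calling swap.setup_swapfile(target=target, fstab=fstab,
    # swapfile='swap.img', size=1073741824, maxsize=None), assuming
    # human2bytes parses '1G' as 1024**3.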
if 'swap' in cfg and not cfg.get('swap'):
LOG.debug("disabling 'add_swap' due to config")
return
swapcfg = cfg.get('swap', {})
fname = swapcfg.get('filename', None)
size = swapcfg.get('size', None)
maxsize = swapcfg.get('maxsize', None)
if size:
size = util.human2bytes(str(size))
if maxsize:
maxsize = util.human2bytes(str(maxsize))
swap.setup_swapfile(target=target, fstab=fstab, swapfile=fname, size=size,
maxsize=maxsize)
def detect_and_handle_multipath(cfg, target):
DEFAULT_MULTIPATH_PACKAGES = ['multipath-tools-boot']
mpcfg = cfg.get('multipath', {})
mpmode = mpcfg.get('mode', 'auto')
mppkgs = mpcfg.get('packages', DEFAULT_MULTIPATH_PACKAGES)
mpbindings = mpcfg.get('overwrite_bindings', True)
if isinstance(mppkgs, str):
mppkgs = [mppkgs]
if mpmode == 'disabled':
return
if mpmode == 'auto' and not block.detect_multipath(target):
return
LOG.info("Detected multipath devices. Installing support via %s", mppkgs)
util.install_packages(mppkgs, target=target)
multipath_cfg_path = os.path.sep.join([target, '/etc/multipath.conf'])
multipath_bind_path = os.path.sep.join([target, '/etc/multipath/bindings'])
# We don't want to overwrite multipath.conf file provided by the image.
if not os.path.isfile(multipath_cfg_path):
        # Without the user_friendly_names option enabled, the system fails
        # to boot if any of the disks has spaces in its name. The
        # multipath-tools package has a bug open for this (LP: 1432062),
        # but it has not been fixed yet.
multipath_cfg_content = '\n'.join(
['# This file was created by curtin while installing the system.',
'defaults {',
' user_friendly_names yes',
'}',
''])
util.write_file(multipath_cfg_path, content=multipath_cfg_content)
if mpbindings or not os.path.isfile(multipath_bind_path):
# we do assume that get_devices_for_mp()[0] is /
target_dev = block.get_devices_for_mp(target)[0]
wwid = block.get_scsi_wwid(target_dev)
blockdev, partno = block.get_blockdev_for_partition(target_dev)
mpname = "mpath0"
grub_dev = "/dev/mapper/" + mpname
if partno is not None:
grub_dev += "-part%s" % partno
LOG.debug("configuring multipath install for root=%s wwid=%s",
grub_dev, wwid)
multipath_bind_content = '\n'.join(
['# This file was created by curtin while installing the system.',
"%s %s" % (mpname, wwid),
'# End of content generated by curtin.',
'# Everything below is maintained by multipath subsystem.',
''])
util.write_file(multipath_bind_path, content=multipath_bind_content)
grub_cfg = os.path.sep.join(
[target, '/etc/default/grub.d/50-curtin-multipath.cfg'])
msg = '\n'.join([
'# Written by curtin for multipath device wwid "%s"' % wwid,
'GRUB_DEVICE=%s' % grub_dev,
'GRUB_DISABLE_LINUX_UUID=true',
''])
util.write_file(grub_cfg, content=msg)
# FIXME: this assumes grub. need more generic way to update root=
util.ensure_dir(os.path.sep.join([target, os.path.dirname(grub_dev)]))
with util.RunInChroot(target) as in_chroot:
in_chroot(['update-grub'])
else:
LOG.warn("Not sure how this will boot")
# Initrams needs to be updated to include /etc/multipath.cfg
# and /etc/multipath/bindings files.
update_initramfs(target, all_kernels=True)
def install_missing_packages(cfg, target):
''' describe which operation types will require specific packages
'custom_config_key': {
'pkg1': ['op_name_1', 'op_name_2', ...]
}
'''
custom_configs = {
'storage': {
'lvm2': ['lvm_volgroup', 'lvm_partition'],
'mdadm': ['raid'],
'bcache-tools': ['bcache']},
'network': {
'vlan': ['vlan'],
'ifenslave': ['bond'],
'bridge-utils': ['bridge']},
}
format_configs = {
'xfsprogs': ['xfs'],
'e2fsprogs': ['ext2', 'ext3', 'ext4'],
'btrfs-tools': ['btrfs'],
}
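    # Illustrative example (storage entries hypothetical): given
    #
    #   storage:
    #     config:
    #       - {id: md0, type: raid, ...}
    #       - {id: md0-fs, type: format, fstype: ext4, ...}
    #
    # the loops below queue 'mdadm' (operation type 'raid' matches) and
    # 'e2fsprogs' (fstype 'ext4' matches) unless already installed.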
needed_packages = []
installed_packages = get_installed_packages(target)
for cust_cfg, pkg_reqs in custom_configs.items():
if cust_cfg not in cfg:
continue
all_types = set(
operation['type']
for operation in cfg[cust_cfg]['config']
)
for pkg, types in pkg_reqs.items():
if set(types).intersection(all_types) and \
pkg not in installed_packages:
needed_packages.append(pkg)
format_types = set(
[operation['fstype']
for operation in cfg[cust_cfg]['config']
if operation['type'] == 'format'])
for pkg, fstypes in format_configs.items():
if set(fstypes).intersection(format_types) and \
pkg not in installed_packages:
needed_packages.append(pkg)
if needed_packages:
util.install_packages(needed_packages, target=target)
def system_upgrade(cfg, target):
"""run system-upgrade (apt-get dist-upgrade) or other in target.
config:
system_upgrade:
enabled: False
"""
mycfg = {'system_upgrade': {'enabled': False}}
config.merge_config(mycfg, cfg)
mycfg = mycfg.get('system_upgrade')
if not isinstance(mycfg, dict):
LOG.debug("system_upgrade disabled by config. entry not a dict.")
return
if not config.value_as_boolean(mycfg.get('enabled', True)):
LOG.debug("system_upgrade disabled by config.")
return
util.system_upgrade(target=target)
def curthooks(args):
state = util.load_command_environment()
if args.target is not None:
target = args.target
else:
target = state['target']
if target is None:
sys.stderr.write("Unable to find target. "
"Use --target or set TARGET_MOUNT_POINT\n")
sys.exit(2)
    # if a curtin-hooks hook exists in target,
    # we do not run the builtin curthooks
if util.run_hook_if_exists(target, 'curtin-hooks'):
sys.exit(0)
cfg = config.load_command_config(args, state)
write_files(cfg, target)
apt_config(cfg, target)
disable_overlayroot(cfg, target)
install_kernel(cfg, target)
apply_debconf_selections(cfg, target)
restore_dist_interfaces(cfg, target)
add_swap(cfg, target, state.get('fstab'))
apply_networking(target, state)
copy_fstab(state.get('fstab'), target)
detect_and_handle_multipath(cfg, target)
install_missing_packages(cfg, target)
system_upgrade(cfg, target)
    # If a crypttab file was created by block_meta then it needs to be copied
# onto the target system, and update_initramfs() needs to be run, so that
# the cryptsetup hooks are properly configured on the installed system and
# it will be able to open encrypted volumes at boot.
crypttab_location = os.path.join(os.path.split(state['fstab'])[0],
"crypttab")
if os.path.exists(crypttab_location):
copy_crypttab(crypttab_location, target)
update_initramfs(target)
    # If an mdadm.conf file was created by block_meta then it needs to be copied
# onto the target system
mdadm_location = os.path.join(os.path.split(state['fstab'])[0],
"mdadm.conf")
if os.path.exists(mdadm_location):
copy_mdadm_conf(mdadm_location, target)
# as per https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/964052
# reconfigure mdadm
util.subp(['chroot', target, 'dpkg-reconfigure',
'--frontend=noninteractive', 'mdadm'], data=None)
# If udev dname rules were created, copy them to target
udev_rules_d = os.path.join(state['scratch'], "rules.d")
if os.path.isdir(udev_rules_d):
copy_dname_rules(udev_rules_d, target)
# As a rule, ARMv7 systems don't use grub. This may change some
# day, but for now, assume no. They do require the initramfs
# to be updated, and this also triggers boot loader setup via
# flash-kernel.
machine = platform.machine()
    if (machine.startswith('armv7') or
            (machine.startswith('aarch64') and not util.is_uefi_bootable())):
update_initramfs(target)
else:
setup_grub(cfg, target)
sys.exit(0)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, curthooks)
CONFIG_CLEANERS = {
'cloud-init': clean_cloud_init,
}
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/extract.py 0000644 0000000 0000000 00000007771 12673006714 017467 0 ustar 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import curtin.config
from curtin.log import LOG
import curtin.util
from . import populate_one_subcmd
CMD_ARGUMENTS = (
((('-t', '--target'),
{'help': ('target directory to extract to (root) '
'[default TARGET_MOUNT_POINT]'),
'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT')}),
(('sources',),
{'help': 'the sources to install [default read from CONFIG]',
'nargs': '*'}),
)
)
def tar_xattr_opts(cmd=None):
# if tar cmd supports xattrs, return the required flags to extract them.
if cmd is None:
cmd = ['tar']
if isinstance(cmd, str):
cmd = [cmd]
(out, _err) = curtin.util.subp(cmd + ['--help'], capture=True)
if "xattr" in out:
return ['--xattrs', '--xattrs-include=*']
return []
def extract_root_tgz_url(source, target):
# extract a -root.tar.gz url in the 'target' directory
#
# Uses smtar to avoid specifying the compression type
curtin.util.subp(args=['sh', '-cf',
('wget "$1" --progress=dot:mega -O - |'
'smtar -C "$2" ' + ' '.join(tar_xattr_opts()) +
' ' + '-Sxpf - --numeric-owner'),
'--', source, target])
def extract_root_tgz_file(source, target):
curtin.util.subp(args=['tar', '-C', target] +
tar_xattr_opts() + ['-Sxpzf', source, '--numeric-owner'])
def copy_to_target(source, target):
if source.startswith("cp://"):
source = source[5:]
source = os.path.abspath(source)
curtin.util.subp(args=['sh', '-c',
('mkdir -p "$2" && cd "$2" && '
'rsync -aXHAS --one-file-system "$1/" .'),
'--', source, target])
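# Illustrative note (uris hypothetical): extract() below dispatches on the
# 'uri' of each non-dd source entry, e.g.
#
#   sources:
#     00_primary: {type: 'tgz', uri: 'http://example.com/root.tar.gz'}
#
# http(s):// uris are streamed through wget into smtar, cp:// uris are
# rsynced, and local files or file:// uris are untarred in place.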
def extract(args):
if not args.target:
raise ValueError("Target must be defined or set in environment")
cfgfile = os.environ.get('CONFIG')
cfg = {}
sources = args.sources
target = args.target
if not sources:
if cfgfile:
cfg = curtin.config.load_config(cfgfile)
if not cfg.get('sources'):
raise ValueError("'sources' must be on cmdline or in config")
sources = cfg.get('sources')
if isinstance(sources, dict):
sources = [sources[k] for k in sorted(sources.keys())]
LOG.debug("Installing sources: %s to target at %s" % (sources, target))
for source in sources:
if source['type'].startswith('dd-'):
continue
if source['uri'].startswith("cp://"):
copy_to_target(source['uri'], target)
elif os.path.isfile(source['uri']):
extract_root_tgz_file(source['uri'], target)
elif source['uri'].startswith("file://"):
extract_root_tgz_file(
source['uri'][len("file://"):],
target)
elif (source['uri'].startswith("http://") or
source['uri'].startswith("https://")):
extract_root_tgz_url(source['uri'], target)
else:
raise TypeError(
"do not know how to extract '%s'" %
source['uri'])
sys.exit(0)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, extract)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/hook.py 0000644 0000000 0000000 00000002663 12673006714 016750 0 ustar 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import curtin.config
from curtin.log import LOG
import curtin.util
from . import populate_one_subcmd
CMD_ARGUMENTS = (
((('target',),
{'help': 'finalize the provided directory [default TARGET_MOUNT_POINT]',
'action': 'store', 'default': os.environ.get('TARGET_MOUNT_POINT'),
'nargs': '?'}),
)
)
def hook(args):
if not args.target:
raise ValueError("Target must be provided or set in environment")
LOG.debug("Finalizing %s" % args.target)
curtin.util.run_hook_if_exists(args.target, "finalize")
sys.exit(0)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, hook)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/in_target.py 0000644 0000000 0000000 00000005253 12673006714 017762 0 ustar 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import os
import pty
import sys
from curtin import util
from . import populate_one_subcmd
CMD_ARGUMENTS = (
((('-a', '--allow-daemons'),
{'help': 'do not disable daemons via invoke-rc.d',
'action': 'store_true', 'default': False, }),
(('-i', '--interactive'),
      {'help': 'run the command interactively (attached to a pty)',
'action': 'store_true', 'default': False}),
(('--capture',),
{'help': 'capture/swallow output of command',
'action': 'store_true', 'default': False}),
(('-t', '--target'),
{'help': 'chroot to target. default is env[TARGET_MOUNT_POINT]',
'action': 'store', 'metavar': 'TARGET',
'default': os.environ.get('TARGET_MOUNT_POINT')}),
('command_args',
{'help': 'run a command chrooted in the target', 'nargs': '*'}),
)
)
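# Illustrative usage (paths hypothetical, not from the source):
#
#   curtin in-target --target=/mnt/target -- apt-get update
#
# runs ['chroot', '/mnt/target', 'apt-get', 'update'], wrapped in a
# ChrootableTarget unless daemons are allowed on a '/' target.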
def run_command(cmd, interactive, capture=False):
exit = 0
if interactive:
pty.spawn(cmd)
else:
try:
util.subp(cmd, capture=capture)
except util.ProcessExecutionError as e:
exit = e.exit_code
return exit
def in_target_main(args):
if args.target is not None:
target = args.target
else:
state = util.load_command_environment()
target = state['target']
    if target is None:
sys.stderr.write("Unable to find target. "
"Use --target or set TARGET_MOUNT_POINT\n")
sys.exit(2)
if os.path.abspath(target) == "/":
cmd = args.command_args
else:
cmd = ['chroot', target] + args.command_args
if target == "/" and args.allow_daemons:
ret = run_command(cmd, args.interactive, capture=args.capture)
else:
with util.ChrootableTarget(target, allow_daemons=args.allow_daemons):
ret = run_command(cmd, args.interactive)
sys.exit(ret)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, in_target_main)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/install.py 0000644 0000000 0000000 00000035174 12673006714 017461 0 ustar 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import argparse
import json
import os
import re
import shlex
import shutil
import subprocess
import sys
import tempfile
from curtin import block
from curtin import config
from curtin import util
from curtin.log import LOG
from curtin.reporter.legacy import load_reporter
from curtin.reporter import events
from . import populate_one_subcmd
INSTALL_LOG = "/var/log/curtin/install.log"
STAGE_DESCRIPTIONS = {
'early': 'preparing for installation',
'partitioning': 'configuring storage',
'network': 'configuring network',
'extract': 'writing install sources to disk',
'curthooks': 'configuring installed system',
'hook': 'finalizing installation',
'late': 'executing late commands',
}
CONFIG_BUILTIN = {
'sources': {},
'stages': ['early', 'partitioning', 'network', 'extract', 'curthooks',
'hook', 'late'],
'extract_commands': {'builtin': ['curtin', 'extract']},
'hook_commands': {'builtin': ['curtin', 'hook']},
'partitioning_commands': {
'builtin': ['curtin', 'block-meta', 'simple']},
'curthooks_commands': {'builtin': ['curtin', 'curthooks']},
'late_commands': {'builtin': []},
'network_commands': {'builtin': ['curtin', 'net-meta', 'auto']},
'apply_net_commands': {'builtin': []},
'install': {'log_file': INSTALL_LOG},
}
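# Illustrative example (commands hypothetical): each stage name maps to a
# '<stage>_commands' dict in config; Stage.run() below executes the entries
# sorted by key. For instance:
#
#   early_commands:
#     00_ping: ["ping", "-c1", "gateway.example.com"]
#     10_banner: "echo starting install"
#
# '00_ping' runs first, as a list (no shell); '10_banner' is a string and
# therefore runs through the shell.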
def clear_install_log(logfile):
"""Clear the installation log, so no previous installation is present."""
util.ensure_dir(os.path.dirname(logfile))
try:
open(logfile, 'w').close()
    except (IOError, OSError):
        pass
def writeline(fname, output):
"""Write a line to a file."""
if not output.endswith('\n'):
output += '\n'
try:
with open(fname, 'a') as fp:
fp.write(output)
except IOError:
pass
class WorkingDir(object):
def __init__(self, config):
top_d = tempfile.mkdtemp()
state_d = os.path.join(top_d, 'state')
target_d = os.path.join(top_d, 'target')
scratch_d = os.path.join(top_d, 'scratch')
for p in (state_d, target_d, scratch_d):
os.mkdir(p)
netconf_f = os.path.join(state_d, 'network_config')
netstate_f = os.path.join(state_d, 'network_state')
interfaces_f = os.path.join(state_d, 'interfaces')
config_f = os.path.join(state_d, 'config')
fstab_f = os.path.join(state_d, 'fstab')
with open(config_f, "w") as fp:
json.dump(config, fp)
# just touch these files to make sure they exist
for f in (interfaces_f, config_f, fstab_f, netconf_f, netstate_f):
with open(f, "ab") as fp:
pass
self.scratch = scratch_d
self.target = target_d
self.top = top_d
self.interfaces = interfaces_f
self.netconf = netconf_f
self.netstate = netstate_f
self.fstab = fstab_f
self.config = config
self.config_file = config_f
def env(self):
return ({'WORKING_DIR': self.scratch, 'OUTPUT_FSTAB': self.fstab,
'OUTPUT_INTERFACES': self.interfaces,
'OUTPUT_NETWORK_CONFIG': self.netconf,
'OUTPUT_NETWORK_STATE': self.netstate,
'TARGET_MOUNT_POINT': self.target,
'CONFIG': self.config_file})
class Stage(object):
def __init__(self, name, commands, env, reportstack=None, logfile=None):
self.name = name
self.commands = commands
self.env = env
if logfile is None:
logfile = INSTALL_LOG
self.install_log = self._open_install_log(logfile)
if hasattr(sys.stdout, 'buffer'):
self.write_stdout = self._write_stdout3
else:
self.write_stdout = self._write_stdout2
if reportstack is None:
reportstack = events.ReportEventStack(
name="stage-%s" % name, description="basic stage %s" % name,
reporting_enabled=False)
self.reportstack = reportstack
def _open_install_log(self, logfile):
"""Open the install log."""
if not logfile:
return None
try:
return open(logfile, 'ab')
except IOError:
return None
def _write_stdout3(self, data):
sys.stdout.buffer.write(data) # pylint: disable=no-member
sys.stdout.flush()
def _write_stdout2(self, data):
sys.stdout.write(data)
sys.stdout.flush()
def write(self, data):
"""Write data to stdout and to the install_log."""
self.write_stdout(data)
if self.install_log is not None:
self.install_log.write(data)
self.install_log.flush()
def run(self):
for cmdname in sorted(self.commands.keys()):
cmd = self.commands[cmdname]
if not cmd:
continue
cur_res = events.ReportEventStack(
name=cmdname, description="running '%s'" % cmdname,
parent=self.reportstack)
env = self.env.copy()
env['CURTIN_REPORTSTACK'] = cur_res.fullname
shell = not isinstance(cmd, list)
with util.LogTimer(LOG.debug, cmdname):
with cur_res:
try:
sp = subprocess.Popen(
cmd, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
env=env, shell=shell)
except OSError as e:
LOG.warn("%s command failed", cmdname)
raise util.ProcessExecutionError(cmd=cmd, reason=e)
output = b""
while True:
data = sp.stdout.read(1)
if not data and sp.poll() is not None:
break
self.write(data)
output += data
rc = sp.returncode
if rc != 0:
LOG.warn("%s command failed", cmdname)
raise util.ProcessExecutionError(
stdout=output, stderr="",
exit_code=rc, cmd=cmd)
def apply_power_state(pstate):
"""
power_state:
delay: 5
mode: poweroff
message: Bye Bye
"""
cmd = load_power_state(pstate)
if not cmd:
return
LOG.info("powering off with %s", cmd)
fid = os.fork()
if fid == 0:
try:
util.subp(cmd)
os._exit(0)
        except Exception:
LOG.warn("%s returned non-zero" % cmd)
os._exit(1)
return
def load_power_state(pstate):
"""Returns a command to reboot the system if power_state should."""
if pstate is None:
return None
if not isinstance(pstate, dict):
raise TypeError("power_state is not a dict.")
opt_map = {'halt': '-H', 'poweroff': '-P', 'reboot': '-r'}
mode = pstate.get("mode")
if mode not in opt_map:
raise TypeError("power_state[mode] required, must be one of: %s." %
','.join(opt_map.keys()))
delay = pstate.get("delay", "5")
if delay == "now":
delay = "0"
elif re.match(r"\+[0-9]+", str(delay)):
delay = "%sm" % delay[1:]
else:
delay = str(delay)
args = ["shutdown", opt_map[mode], "now"]
if pstate.get("message"):
args.append(pstate.get("message"))
shcmd = ('sleep "$1" && shift; '
'[ -f /run/block-curtin-poweroff ] && exit 0; '
'exec "$@"')
return (['sh', '-c', shcmd, 'curtin-poweroff', delay] + args)
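# Illustrative worked example (derived from load_power_state above):
#
#   load_power_state({'mode': 'reboot', 'delay': '+5'})
#
# returns ['sh', '-c', shcmd, 'curtin-poweroff', '5m',
#          'shutdown', '-r', 'now'], i.e. sleep five minutes, abort if
# /run/block-curtin-poweroff exists, then exec shutdown.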
def apply_kexec(kexec, target):
"""
load kexec kernel from target dir, similar to /etc/init.d/kexec-load
kexec:
mode: on
"""
grubcfg = "boot/grub/grub.cfg"
target_grubcfg = os.path.join(target, grubcfg)
if kexec is None or kexec.get("mode") != "on":
return False
if not isinstance(kexec, dict):
raise TypeError("kexec is not a dict.")
if not util.which('kexec'):
util.install_packages('kexec-tools')
if not os.path.isfile(target_grubcfg):
raise ValueError("%s does not exist in target" % grubcfg)
with open(target_grubcfg, "r") as fp:
default = 0
menu_lines = []
# get the default grub boot entry number and menu entry line numbers
for line_num, line in enumerate(fp, 1):
if re.search(r"\bset default=\"[0-9]+\"\b", " %s " % line):
default = int(re.sub(r"[^0-9]", '', line))
if re.search(r"\bmenuentry\b", " %s " % line):
menu_lines.append(line_num)
if not menu_lines:
LOG.error("grub config file does not have a menuentry\n")
return False
# get the begin and end line numbers for default menuentry section,
# using end of file if it's the last menuentry section
begin = menu_lines[default]
if begin != menu_lines[-1]:
end = menu_lines[default + 1] - 1
else:
end = line_num
fp.seek(0)
lines = fp.readlines()
kernel = append = initrd = ""
for i in range(begin, end):
if 'linux' in lines[i].split():
split_line = shlex.split(lines[i])
kernel = os.path.join(target, split_line[1])
append = "--append=" + ' '.join(split_line[2:])
if 'initrd' in lines[i].split():
split_line = shlex.split(lines[i])
initrd = "--initrd=" + os.path.join(target, split_line[1])
if not kernel:
LOG.error("grub config file does not have a kernel\n")
return False
LOG.debug("kexec -l %s %s %s" % (kernel, append, initrd))
util.subp(args=['kexec', '-l', kernel, append, initrd])
return True
def cmd_install(args):
cfg = CONFIG_BUILTIN.copy()
config.merge_config(cfg, args.config)
for source in args.source:
src = util.sanitize_source(source)
cfg['sources']["%02d_cmdline" % len(cfg['sources'])] = src
LOG.debug("merged config: %s" % cfg)
if not len(cfg.get('sources', [])):
raise util.BadUsage("no sources provided to install")
for i in cfg['sources']:
# we default to tgz for old style sources config
cfg['sources'][i] = util.sanitize_source(cfg['sources'][i])
if cfg.get('http_proxy'):
os.environ['http_proxy'] = cfg['http_proxy']
instcfg = cfg.get('install', {})
logfile = instcfg.get('log_file')
post_files = instcfg.get('post_files', [logfile])
# Generate curtin configuration dump and add to write_files unless
# installation config disables dump
yaml_dump_file = instcfg.get('save_install_config',
'/root/curtin-install-cfg.yaml')
if yaml_dump_file:
write_files = cfg.get('write_files', {})
write_files['curtin_install_cfg'] = {
'path': yaml_dump_file,
'permissions': '0400',
'owner': 'root:root',
'content': config.dump_config(cfg)
}
cfg['write_files'] = write_files
# Load reporter
clear_install_log(logfile)
post_files = cfg.get('post_files', [logfile])
legacy_reporter = load_reporter(cfg)
legacy_reporter.files = post_files
args.reportstack.post_files = post_files
try:
dd_images = util.get_dd_images(cfg.get('sources', {}))
if len(dd_images) > 1:
raise ValueError("You may not use more then one disk image")
workingd = WorkingDir(cfg)
LOG.debug(workingd.env())
env = os.environ.copy()
env.update(workingd.env())
for name in cfg.get('stages'):
desc = STAGE_DESCRIPTIONS.get(name, "stage %s" % name)
reportstack = events.ReportEventStack(
"stage-%s" % name, description=desc,
parent=args.reportstack)
env['CURTIN_REPORTSTACK'] = reportstack.fullname
with reportstack:
commands_name = '%s_commands' % name
with util.LogTimer(LOG.debug, 'stage_%s' % name):
stage = Stage(name, cfg.get(commands_name, {}), env,
reportstack=reportstack, logfile=logfile)
stage.run()
if apply_kexec(cfg.get('kexec'), workingd.target):
cfg['power_state'] = {'mode': 'reboot', 'delay': 'now',
'message': "'rebooting with kexec'"}
writeline(logfile, "Installation finished.")
legacy_reporter.report_success()
except Exception as e:
exp_msg = "Installation failed with exception: %s" % e
writeline(logfile, exp_msg)
LOG.error(exp_msg)
legacy_reporter.report_failure(exp_msg)
raise e
finally:
for d in ('sys', 'dev', 'proc'):
util.do_umount(os.path.join(workingd.target, d))
mounted = block.get_mountpoints()
mounted.sort(key=lambda x: -1 * x.count("/"))
for d in filter(lambda x: workingd.target in x, mounted):
util.do_umount(d)
util.do_umount(workingd.target)
shutil.rmtree(workingd.top)
apply_power_state(cfg.get('power_state'))
sys.exit(0)
# we explicitly accept config on install for backwards compatibility
CMD_ARGUMENTS = (
((('-c', '--config'),
{'help': 'read configuration from cfg', 'action': util.MergedCmdAppend,
'metavar': 'FILE', 'type': argparse.FileType("rb"),
'dest': 'cfgopts', 'default': []}),
('--set', {'action': util.MergedCmdAppend,
'help': ('define a config variable. key can be a "/" '
'delimited path ("early_commands/cmd1=a"). if '
'key starts with "json:" then val is loaded as '
'json (json:stages="[\'early\']")'),
'metavar': 'key=val', 'dest': 'cfgopts'}),
('source', {'help': 'what to install', 'nargs': '*'}),
)
)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, cmd_install)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/main.py 0000644 0000000 0000000 00000016577 12673006714 016745 0 ustar 0000000 0000000 #!/usr/bin/python
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import argparse
import os
import sys
import traceback
from .. import log
from .. import util
from ..deps import install_deps
SUB_COMMAND_MODULES = ['apply_net', 'block-meta', 'curthooks', 'extract',
'hook', 'in-target', 'install', 'mkfs', 'net-meta',
'pack', 'swap', 'system-install', 'system-upgrade']
def add_subcmd(subparser, subcmd):
modname = subcmd.replace("-", "_")
subcmd_full = "curtin.commands.%s" % modname
__import__(subcmd_full)
try:
popfunc = getattr(sys.modules[subcmd_full], 'POPULATE_SUBCMD')
except AttributeError:
raise AttributeError("No 'POPULATE_SUBCMD' in %s" % subcmd_full)
popfunc(subparser.add_parser(subcmd))
class NoHelpParser(argparse.ArgumentParser):
# ArgumentParser with forced 'add_help=False'
def __init__(self, *args, **kwargs):
kwargs.update({'add_help': False})
super(NoHelpParser, self).__init__(*args, **kwargs)
def error(self, message):
# without overriding this, argparse exits with bad usage
raise ValueError("failed parsing arguments: %s" % message)
def get_main_parser(stacktrace=False, verbosity=0,
parser_class=argparse.ArgumentParser):
parser = parser_class(prog='curtin')
parser.add_argument('--showtrace', action='store_true', default=stacktrace)
parser.add_argument('-v', '--verbose', action='count', default=verbosity,
dest='verbosity')
parser.add_argument('--log-file', default=sys.stderr,
type=argparse.FileType('w'))
parser.add_argument('-c', '--config', action=util.MergedCmdAppend,
help='read configuration from cfg',
metavar='FILE', type=argparse.FileType("rb"),
dest='main_cfgopts', default=[])
parser.add_argument('--install-deps', action='store_true',
help='install dependencies as necessary',
default=False)
parser.add_argument('--set', action=util.MergedCmdAppend,
help=('define a config variable. key can be a "/" '
'delimited path ("early_commands/cmd1=a"). if '
'key starts with "json:" then val is loaded as '
'json (json:stages="[\'early\']")'),
metavar='key=val', dest='main_cfgopts')
parser.set_defaults(config={})
parser.set_defaults(reportstack=None)
return parser
def maybe_install_deps(args, stacktrace=True, verbosity=0):
parser = get_main_parser(stacktrace=stacktrace, verbosity=verbosity,
parser_class=NoHelpParser)
subps = parser.add_subparsers(dest="subcmd", parser_class=NoHelpParser)
for subcmd in SUB_COMMAND_MODULES:
subps.add_parser(subcmd)
install_only_args = [
['-v', '--install-deps'],
['-vv', '--install-deps'],
['--install-deps', '-v'],
['--install-deps', '-vv'],
['--install-deps'],
]
install_only = args in install_only_args
if install_only:
verbosity = 1
else:
try:
ns, unknown = parser.parse_known_args(args)
verbosity = ns.verbosity
if not ns.install_deps:
return
except ValueError:
# bad usage will be reported by the real reporter
return
ret = install_deps(verbosity=verbosity)
if ret != 0 or install_only:
sys.exit(ret)
return
def main(args=None):
if args is None:
args = sys.argv
stacktrace = (os.environ.get('CURTIN_STACKTRACE', "0").lower()
not in ("0", "false", ""))
try:
verbosity = int(os.environ.get('CURTIN_VERBOSITY', "0"))
except ValueError:
verbosity = 1
maybe_install_deps(sys.argv[1:], stacktrace=stacktrace,
verbosity=verbosity)
# Above here, only standard library modules can be assumed.
from .. import config
from ..reporter import (events, update_configuration)
parser = get_main_parser(stacktrace=stacktrace, verbosity=verbosity)
subps = parser.add_subparsers(dest="subcmd")
for subcmd in SUB_COMMAND_MODULES:
add_subcmd(subps, subcmd)
args = parser.parse_args(sys.argv[1:])
# merge config flags into a single config dictionary
cfg_opts = args.main_cfgopts
if hasattr(args, 'cfgopts'):
cfg_opts += getattr(args, 'cfgopts')
cfg = {}
if cfg_opts:
for (flag, val) in cfg_opts:
if flag in ('-c', '--config'):
config.merge_config_fp(cfg, val)
val.close()
            elif flag == '--set':
config.merge_cmdarg(cfg, val)
else:
cfg = config.load_command_config(args, util.load_command_environment())
args.config = cfg
# if user gave cmdline arguments, then set environ so subsequent
# curtin calls get those as default
showtrace = args.showtrace
if 'showtrace' in cfg:
showtrace = str(cfg['showtrace']).lower() not in ("0", "false")
os.environ['CURTIN_STACKTRACE'] = str(int(showtrace))
verbosity = args.verbosity
if 'verbosity' in cfg:
verbosity = int(cfg['verbosity'])
os.environ['CURTIN_VERBOSITY'] = str(verbosity)
if not getattr(args, 'func', None):
# http://bugs.python.org/issue16308
parser.print_help()
sys.exit(1)
log.basicConfig(stream=args.log_file, verbosity=verbosity)
paths = util.get_paths()
if paths['helpers'] is None or paths['curtin_exe'] is None:
raise OSError("Unable to find helpers or 'curtin' exe to add to path")
path = os.environ['PATH'].split(':')
for cand in (paths['helpers'], os.path.dirname(paths['curtin_exe'])):
if cand not in [os.path.abspath(d) for d in path]:
path.insert(0, cand)
os.environ['PATH'] = ':'.join(path)
# set up the reportstack
update_configuration(cfg.get('reporting', {}))
stack_prefix = (os.environ.get("CURTIN_REPORTSTACK", "") +
"/cmd-%s" % args.subcmd)
if stack_prefix.startswith("/"):
stack_prefix = stack_prefix[1:]
os.environ["CURTIN_REPORTSTACK"] = stack_prefix
args.reportstack = events.ReportEventStack(
name=stack_prefix, description="curtin command %s" % args.subcmd,
reporting_enabled=True)
try:
with args.reportstack:
ret = args.func(args)
sys.exit(ret)
except Exception as e:
if showtrace:
traceback.print_exc()
sys.stderr.write("%s\n" % e)
sys.exit(3)
if __name__ == '__main__':
sys.exit(main())
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/mkfs.py 0000644 0000000 0000000 00000004303 12673006714 016741 0 ustar 0000000 0000000 # Copyright (C) 2015 Canonical Ltd.
#
# Author: Wesley Wiedenmeier
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
from . import populate_one_subcmd
from curtin.block.mkfs import mkfs as run_mkfs
from curtin.block.mkfs import valid_fstypes
import sys
CMD_ARGUMENTS = (
(('devices',
{'help': 'create filesystem on the target volume(s) or storage config \
item(s)',
'metavar': 'DEVICE', 'action': 'store', 'nargs': '+'}),
(('-f', '--fstype'),
{'help': 'filesystem type to use. default is ext4',
'choices': sorted(valid_fstypes()),
'default': 'ext4', 'action': 'store'}),
(('-l', '--label'),
{'help': 'label to use for filesystem', 'action': 'store'}),
(('-u', '--uuid'),
{'help': 'uuid to use for filesystem', 'action': 'store'}),
(('-s', '--strict'),
{'help': 'exit if mkfs cannot do exactly what is specified',
'action': 'store_true', 'default': False}),
(('-F', '--force'),
{'help': 'continue if some data already exists on device',
'action': 'store_true', 'default': False})
)
)
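# Illustrative usage (device path hypothetical):
#
#   curtin mkfs --fstype=ext4 --label=rootfs /dev/vdb1
#
# creates an ext4 filesystem labeled 'rootfs' on /dev/vdb1 and prints the
# resulting uuid.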
def mkfs(args):
for device in args.devices:
uuid = run_mkfs(device, args.fstype, strict=args.strict,
uuid=args.uuid, label=args.label,
force=args.force)
print("Created '%s' filesystem in '%s' with uuid '%s' and label '%s'" %
(args.fstype, device, uuid, args.label))
sys.exit(0)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, mkfs)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/net_meta.py 0000644 0000000 0000000 00000013012 12673006714 017572 0 ustar 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import argparse
import os
import sys
from curtin import net
import curtin.util as util
import curtin.config as config
from . import populate_one_subcmd
DEVNAME_ALIASES = ['connected', 'configured', 'netboot']
def network_device(value):
if value in DEVNAME_ALIASES:
return value
if (value.startswith('eth') or
(value.startswith('en') and len(value) == 3)):
return value
raise argparse.ArgumentTypeError("%s does not look like a netdev name")
def resolve_alias(alias):
if alias == "connected":
alldevs = net.get_devicelist()
return [d for d in alldevs if
net.is_physical(d) and net.is_up(d)]
elif alias == "configured":
alldevs = net.get_devicelist()
return [d for d in alldevs if
net.is_physical(d) and net.is_up(d) and net.is_connected(d)]
elif alias == "netboot":
# should read /proc/cmdline here for BOOTIF
raise NotImplementedError("netboot alias not implemented")
else:
raise ValueError("'%s' is not an alias: %s", alias, DEVNAME_ALIASES)
def interfaces_basic_dhcp(devices):
content = '\n'.join(
[("# This file describes the network interfaces available on "
"your system"),
"# and how to activate them. For more information see interfaces(5).",
"",
"# The loopback network interface",
"auto lo",
"iface lo inet loopback",
])
for d in devices:
content += '\n'.join(("", "", "auto %s" % d,
"iface %s inet dhcp" % d,))
content += "\n"
return content
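# Illustrative output (derived from interfaces_basic_dhcp above): for
# devices=['eth0'] the generated content ends with:
#
#   auto eth0
#   iface eth0 inet dhcp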
def interfaces_custom(args):
state = util.load_command_environment()
cfg = config.load_command_config(args, state)
network_config = cfg.get('network', [])
if not network_config:
raise Exception("network configuration is required by mode '%s' "
"but not provided in the config file" % 'custom')
return config.dump_config({'network': network_config})
def net_meta(args):
# curtin net-meta --devices connected dhcp
# curtin net-meta --devices configured dhcp
# curtin net-meta --devices netboot dhcp
# curtin net-meta --devices connected custom
# if network-config hook exists in target,
# we do not run the builtin
if util.run_hook_if_exists(args.target, 'network-config'):
sys.exit(0)
state = util.load_command_environment()
cfg = config.load_command_config(args, state)
if cfg.get("network") is not None:
args.mode = "custom"
eni = "etc/network/interfaces"
if args.mode == "auto":
if not args.devices:
args.devices = ["connected"]
t_eni = None
if args.target:
t_eni = os.path.sep.join((args.target, eni,))
if not os.path.isfile(t_eni):
t_eni = None
if t_eni:
args.mode = "copy"
else:
args.mode = "dhcp"
devices = []
if args.devices:
for dev in args.devices:
if dev in DEVNAME_ALIASES:
devices += resolve_alias(dev)
else:
devices.append(dev)
if args.mode == "copy":
if not args.target:
raise argparse.ArgumentTypeError("mode 'copy' requires --target")
t_eni = os.path.sep.join((args.target, "etc/network/interfaces",))
with open(t_eni, "r") as fp:
content = fp.read()
elif args.mode == "dhcp":
content = interfaces_basic_dhcp(devices)
elif args.mode == 'custom':
content = interfaces_custom(args)
# if we have a config, write it out to OUTPUT_NETWORK_CONFIG
output_network_config = os.environ.get("OUTPUT_NETWORK_CONFIG", "")
if output_network_config:
with open(output_network_config, "w") as fp:
fp.write(content)
if args.output == "-":
sys.stdout.write(content)
else:
with open(args.output, "w") as fp:
fp.write(content)
sys.exit(0)
CMD_ARGUMENTS = (
((('-D', '--devices'),
{'help': 'which devices to operate on', 'action': 'append',
'metavar': 'DEVICE', 'type': network_device}),
(('-o', '--output'),
{'help': 'file to write to. defaults to env["OUTPUT_INTERFACES"] or "-"',
'metavar': 'IFILE', 'action': 'store',
'default': os.environ.get('OUTPUT_INTERFACES', "-")}),
(('-t', '--target'),
{'help': 'operate on target. default is env[TARGET_MOUNT_POINT]',
'action': 'store', 'metavar': 'TARGET',
'default': os.environ.get('TARGET_MOUNT_POINT')}),
('mode', {'help': 'meta-mode to use',
'choices': ['dhcp', 'copy', 'auto', 'custom']})
)
)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, net_meta)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/pack.py 0000644 0000000 0000000 00000003612 12673006714 016721 0 ustar 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import sys
from curtin import pack
from . import populate_one_subcmd
CMD_ARGUMENTS = (
((('-o', '--output'),
{'help': 'where to write the archive to', 'action': 'store',
'metavar': 'FILE', 'default': "-", }),
(('-a', '--add'),
{'help': 'include FILE_PATH in archive at ARCHIVE_PATH',
'action': 'append', 'metavar': 'ARCHIVE_PATH:FILE_PATH',
'default': []}),
('command_args',
{'help': 'command to run after extracting', 'nargs': '*'}),
)
)
def pack_main(args):
if args.output == "-":
fdout = sys.stdout
else:
fdout = open(args.output, "w")
delim = ":"
addl = []
for tok in args.add:
if delim not in tok:
raise ValueError("'--add' argument '%s' did not have a '%s'",
(tok, delim))
(archpath, filepath) = tok.split(":", 1)
addl.append((archpath, filepath),)
pack.pack(fdout, command=args.command_args, copy_files=addl)
if args.output != "-":
fdout.close()
sys.exit(0)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, pack_main)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/swap.py 0000644 0000000 0000000 00000005477 12673006714 016770 0 ustar 0000000 0000000 # Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import curtin.swap as swap
import curtin.util as util
from . import populate_one_subcmd
def swap_main(args):
# curtin swap [--size=4G] [--target=/] [--fstab=/etc/fstab] [swap]
state = util.load_command_environment()
if args.target is not None:
state['target'] = args.target
if args.fstab is not None:
state['fstab'] = args.fstab
if state['target'] is None:
sys.stderr.write("Unable to find target. "
"Use --target or set TARGET_MOUNT_POINT\n")
sys.exit(2)
size = args.size
if size is not None and size.lower() == "auto":
size = None
if size is not None:
try:
size = util.human2bytes(size)
except ValueError as e:
sys.stderr.write("%s\n" % e)
sys.exit(2)
if args.maxsize is not None:
args.maxsize = util.human2bytes(args.maxsize)
swap.setup_swapfile(target=state['target'], fstab=state['fstab'],
swapfile=args.swapfile, size=size,
maxsize=args.maxsize)
    sys.exit(0)
CMD_ARGUMENTS = (
((('-f', '--fstab'),
{'help': 'file to write to. defaults to env["OUTPUT_FSTAB"]',
'metavar': 'FSTAB', 'action': 'store',
'default': os.environ.get('OUTPUT_FSTAB')}),
(('-t', '--target'),
{'help': ('target filesystem root to add swap file to. '
'default is env[TARGET_MOUNT_POINT]'),
'action': 'store', 'metavar': 'TARGET',
'default': os.environ.get('TARGET_MOUNT_POINT')}),
(('-s', '--size'),
{'help': 'size of swap file (eg: 1G, 1500M, 1024K, 100000. def: "auto")',
'default': None, 'action': 'store'}),
(('-M', '--maxsize'),
{'help': 'maximum size of swap file (assuming "auto")',
'default': None, 'action': 'store'}),
('swapfile', {'help': 'path to swap file under target',
'default': 'swap.img', 'nargs': '?'}),
)
)
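# A hypothetical invocation (paths and sizes are illustrative only):
#   TARGET_MOUNT_POINT=/target OUTPUT_FSTAB=/tmp/fstab \
#       curtin swap --size=2G swap.img
# creates /target/swap.img and records it in /tmp/fstab; omitting --size
# (or passing --size=auto) lets setup_swapfile pick a size, presumably
# capped by --maxsize.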
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, swap_main)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/system_install.py 0000644 0000000 0000000 00000003754 12673006714 021064 0 ustar 0000000 0000000 # Copyright (C) 2015 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import curtin.util as util
from . import populate_one_subcmd
from curtin.log import LOG
def system_install_pkgs_main(args):
# curtin system-install [--target=/] [pkg, [pkg...]]
if args.target is None:
args.target = "/"
exit_code = 0
try:
util.install_packages(
pkglist=args.packages, target=args.target,
allow_daemons=args.allow_daemons)
except util.ProcessExecutionError as e:
LOG.warn("system install failed for %s: %s" % (args.packages, e))
exit_code = e.exit_code
sys.exit(exit_code)
CMD_ARGUMENTS = (
((('--allow-daemons',),
      {'help': ('do not disable running of daemons during install.'),
'action': 'store_true', 'default': False}),
(('-t', '--target'),
      {'help': ('target root to install packages into. '
                'default is env[TARGET_MOUNT_POINT]'),
'action': 'store', 'metavar': 'TARGET',
'default': os.environ.get('TARGET_MOUNT_POINT')}),
('packages',
{'help': 'the list of packages to install',
'metavar': 'PACKAGES', 'action': 'store', 'nargs': '+'}),
)
)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, system_install_pkgs_main)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/commands/system_upgrade.py 0000644 0000000 0000000 00000003437 12673006714 021043 0 ustar 0000000 0000000 # Copyright (C) 2015 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import curtin.util as util
from . import populate_one_subcmd
from curtin.log import LOG
def system_upgrade_main(args):
# curtin system-upgrade [--target=/]
if args.target is None:
args.target = "/"
exit_code = 0
try:
util.system_upgrade(target=args.target,
allow_daemons=args.allow_daemons)
except util.ProcessExecutionError as e:
LOG.warn("system upgrade failed: %s" % e)
exit_code = e.exit_code
sys.exit(exit_code)
CMD_ARGUMENTS = (
((('--allow-daemons',),
{'help': ('do not disable running of daemons during upgrade.'),
'action': 'store_true', 'default': False}),
(('-t', '--target'),
{'help': ('target root to upgrade. '
'default is env[TARGET_MOUNT_POINT]'),
'action': 'store', 'metavar': 'TARGET',
'default': os.environ.get('TARGET_MOUNT_POINT')}),
)
)
def POPULATE_SUBCMD(parser):
populate_one_subcmd(parser, CMD_ARGUMENTS, system_upgrade_main)
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/deps/__init__.py 0000644 0000000 0000000 00000011176 12673006714 016700 0 ustar 0000000 0000000 # Copyright (C) 2015 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
from curtin.util import (which, install_packages, lsb_release,
ProcessExecutionError)
REQUIRED_IMPORTS = [
# import string to execute, python2 package, python3 package
('import yaml', 'python-yaml', 'python3-yaml'),
]
REQUIRED_EXECUTABLES = [
# executable in PATH, package
('file', 'file'),
('lvcreate', 'lvm2'),
('mdadm', 'mdadm'),
('mkfs.vfat', 'dosfstools'),
('mkfs.btrfs', 'btrfs-tools'),
('mkfs.ext4', 'e2fsprogs'),
('mkfs.xfs', 'xfsprogs'),
('partprobe', 'parted'),
('sgdisk', 'gdisk'),
('udevadm', 'udev'),
('make-bcache', 'bcache-tools'),
]
if lsb_release()['codename'] == "precise":
REQUIRED_IMPORTS.append(
('import oauth.oauth', 'python-oauth', None),)
else:
REQUIRED_IMPORTS.append(
('import oauthlib.oauth1', 'python-oauthlib', 'python3-oauthlib'),)
class MissingDeps(Exception):
def __init__(self, message, deps):
self.message = message
if isinstance(deps, str) or deps is None:
deps = [deps]
self.deps = [d for d in deps if d is not None]
self.fatal = None in deps
def __str__(self):
if self.fatal:
if not len(self.deps):
return self.message + " Unresolvable."
return (self.message +
" Unresolvable. Partially resolvable with packages: %s" %
' '.join(self.deps))
else:
return self.message + " Install packages: %s" % ' '.join(self.deps)
def check_import(imports, py2pkgs, py3pkgs, message=None):
import_group = imports
if isinstance(import_group, str):
import_group = [import_group]
for istr in import_group:
try:
exec(istr)
return
except ImportError:
pass
if not message:
if isinstance(imports, str):
message = "Failed '%s'." % imports
else:
message = "Unable to do any of %s." % import_group
if sys.version_info[0] == 2:
pkgs = py2pkgs
else:
pkgs = py3pkgs
raise MissingDeps(message, pkgs)
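# For illustration (hypothetical outcome): on a python3 host missing PyYAML,
#   check_import('import yaml', 'python-yaml', 'python3-yaml')
# raises MissingDeps("Failed 'import yaml'.", 'python3-yaml'), whose str()
# reads "Failed 'import yaml'. Install packages: python3-yaml".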
def check_executable(cmdname, pkg):
if not which(cmdname):
raise MissingDeps("Missing program '%s'." % cmdname, pkg)
def check_executables(executables=None):
if executables is None:
executables = REQUIRED_EXECUTABLES
mdeps = []
for exe, pkg in executables:
try:
check_executable(exe, pkg)
except MissingDeps as e:
mdeps.append(e)
return mdeps
def check_imports(imports=None):
if imports is None:
imports = REQUIRED_IMPORTS
mdeps = []
for import_str, py2pkg, py3pkg in imports:
try:
check_import(import_str, py2pkg, py3pkg)
except MissingDeps as e:
mdeps.append(e)
return mdeps
def find_missing_deps():
return check_executables() + check_imports()
def install_deps(verbosity=False, dry_run=False, allow_daemons=True):
errors = find_missing_deps()
if len(errors) == 0:
if verbosity:
sys.stderr.write("No missing dependencies\n")
return 0
missing_pkgs = []
for e in errors:
missing_pkgs += e.deps
deps_string = ' '.join(sorted(missing_pkgs))
if dry_run:
sys.stderr.write("Missing dependencies: %s\n" % deps_string)
return 0
if os.geteuid() != 0:
sys.stderr.write("Missing dependencies: %s\n" % deps_string)
sys.stderr.write("Package installation is not possible as non-root.\n")
return 2
if verbosity:
sys.stderr.write("Installing %s\n" % deps_string)
ret = 0
try:
install_packages(missing_pkgs, allow_daemons=allow_daemons,
aptopts=["--no-install-recommends"])
except ProcessExecutionError as e:
sys.stderr.write("%s\n" % e)
ret = e.exit_code
return ret
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/deps/check.py 0000644 0000000 0000000 00000004273 12673006714 016216 0 ustar 0000000 0000000 # Copyright (C) 2015 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
"""
The intent of this module is that it can be called and exit with
success or failure, indicating whether the dependencies are present.
python -m curtin.deps.check [-v]
"""
import argparse
import sys
from . import find_missing_deps
def debug(level, msg_level, msg):
if level >= msg_level:
        if not msg.endswith("\n"):
msg += "\n"
sys.stderr.write(msg)
def main():
parser = argparse.ArgumentParser(
prog='curtin-check-deps',
description='check dependencies for curtin.')
parser.add_argument('-v', '--verbose', action='count', default=0,
dest='verbosity')
args, extra = parser.parse_known_args(sys.argv[1:])
errors = find_missing_deps()
if len(errors) == 0:
# exit 0 means all dependencies are available.
debug(args.verbosity, 1, "No missing dependencies")
sys.exit(0)
missing_pkgs = []
fatal = []
for e in errors:
if e.fatal:
fatal.append(e)
debug(args.verbosity, 2, str(e))
missing_pkgs += e.deps
if len(fatal):
for e in fatal:
debug(args.verbosity, 1, str(e))
sys.exit(1)
debug(args.verbosity, 1,
"Fix with:\n apt-get -qy install %s\n" %
' '.join(sorted(missing_pkgs)))
    # we exit higher with fewer deps needed.
    # exiting 99 means just 1 dep is needed.
    sys.exit(100 - len(missing_pkgs))
if __name__ == '__main__':
main()
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/deps/install.py 0000644 0000000 0000000 00000003107 12673006714 016602 0 ustar 0000000 0000000 # Copyright (C) 2015 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
"""
The intent of this module is that it can be called to install deps
python -m curtin.deps.install [-v]
"""
import argparse
import sys
from . import install_deps
def main():
parser = argparse.ArgumentParser(
prog='curtin-install-deps',
description='install dependencies for curtin.')
parser.add_argument('-v', '--verbose', action='count', default=0,
dest='verbosity')
parser.add_argument('--dry-run', action='store_true', default=False)
    parser.add_argument('--no-allow-daemons', action='store_false',
                        dest='allow_daemons', default=True)
    args = parser.parse_args(sys.argv[1:])
    ret = install_deps(verbosity=args.verbosity, dry_run=args.dry_run,
                       allow_daemons=args.allow_daemons)
sys.exit(ret)
if __name__ == '__main__':
main()
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/net/__init__.py 0000644 0000000 0000000 00000035532 12673006714 016535 0 ustar 0000000 0000000 # Copyright (C) 2013-2014 Canonical Ltd.
#
# Author: Scott Moser
# Author: Blake Rouse
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
import errno
import glob
import os
import re
from curtin.log import LOG
from curtin.udev import generate_udev_rule
import curtin.util as util
import curtin.config as config
from . import network_state
SYS_CLASS_NET = "/sys/class/net/"
NET_CONFIG_OPTIONS = [
"address", "netmask", "broadcast", "network", "metric", "gateway",
"pointtopoint", "media", "mtu", "hostname", "leasehours", "leasetime",
"vendor", "client", "bootfile", "server", "hwaddr", "provider", "frame",
"netnum", "endpoint", "local", "ttl",
]
NET_CONFIG_COMMANDS = [
"pre-up", "up", "post-up", "down", "pre-down", "post-down",
]
NET_CONFIG_BRIDGE_OPTIONS = [
"bridge_ageing", "bridge_bridgeprio", "bridge_fd", "bridge_gcinit",
"bridge_hello", "bridge_maxage", "bridge_maxwait", "bridge_stp",
]
def sys_dev_path(devname, path=""):
return SYS_CLASS_NET + devname + "/" + path
def read_sys_net(devname, path, translate=None, enoent=None, keyerror=None):
try:
contents = ""
with open(sys_dev_path(devname, path), "r") as fp:
contents = fp.read().strip()
if translate is None:
return contents
try:
            return translate[contents]
except KeyError:
LOG.debug("found unexpected value '%s' in '%s/%s'", contents,
devname, path)
if keyerror is not None:
return keyerror
raise
except OSError as e:
if e.errno == errno.ENOENT and enoent is not None:
return enoent
raise
def is_up(devname):
# The linux kernel says to consider devices in 'unknown'
# operstate as up for the purposes of network configuration. See
# Documentation/networking/operstates.txt in the kernel source.
translate = {'up': True, 'unknown': True, 'down': False}
return read_sys_net(devname, "operstate", enoent=False, keyerror=False,
translate=translate)
def is_wireless(devname):
return os.path.exists(sys_dev_path(devname, "wireless"))
def is_connected(devname):
    # is_connected isn't really as simple as that: an 'iflink' value of 2
    # means 'physically connected' and 3 means 'not connected', but a wlan
    # interface will always show 3.
try:
iflink = read_sys_net(devname, "iflink", enoent=False)
if iflink == "2":
return True
if not is_wireless(devname):
return False
LOG.debug("'%s' is wireless, basing 'connected' on carrier", devname)
return read_sys_net(devname, "carrier", enoent=False, keyerror=False,
translate={'0': False, '1': True})
except IOError as e:
if e.errno == errno.EINVAL:
return False
raise
def is_physical(devname):
return os.path.exists(sys_dev_path(devname, "device"))
def is_present(devname):
return os.path.exists(sys_dev_path(devname))
def get_devicelist():
return os.listdir(SYS_CLASS_NET)
class ParserError(Exception):
"""Raised when parser has issue parsing the interfaces file."""
def parse_deb_config_data(ifaces, contents, src_dir, src_path):
"""Parses the file contents, placing result into ifaces.
    '_source_path' is added to every dictionary entry to define which file
    the configuration information came from.
    :param ifaces: interface dictionary
    :param contents: contents of interfaces file
    :param src_dir: directory the interfaces file was located in
    :param src_path: file path the `contents` was read from
"""
currif = None
for line in contents.splitlines():
line = line.strip()
if line.startswith('#'):
continue
split = line.split(' ')
option = split[0]
if option == "source-directory":
parsed_src_dir = split[1]
if not parsed_src_dir.startswith("/"):
parsed_src_dir = os.path.join(src_dir, parsed_src_dir)
for expanded_path in glob.glob(parsed_src_dir):
dir_contents = os.listdir(expanded_path)
dir_contents = [
os.path.join(expanded_path, path)
for path in dir_contents
if (os.path.isfile(os.path.join(expanded_path, path)) and
re.match("^[a-zA-Z0-9_-]+$", path) is not None)
]
for entry in dir_contents:
with open(entry, "r") as fp:
src_data = fp.read().strip()
abs_entry = os.path.abspath(entry)
parse_deb_config_data(
ifaces, src_data,
os.path.dirname(abs_entry), abs_entry)
elif option == "source":
new_src_path = split[1]
if not new_src_path.startswith("/"):
new_src_path = os.path.join(src_dir, new_src_path)
for expanded_path in glob.glob(new_src_path):
with open(expanded_path, "r") as fp:
src_data = fp.read().strip()
abs_path = os.path.abspath(expanded_path)
parse_deb_config_data(
ifaces, src_data,
os.path.dirname(abs_path), abs_path)
elif option == "auto":
for iface in split[1:]:
if iface not in ifaces:
ifaces[iface] = {
# Include the source path this interface was found in.
"_source_path": src_path
}
ifaces[iface]['auto'] = True
elif option == "iface":
iface, family, method = split[1:4]
if iface not in ifaces:
ifaces[iface] = {
# Include the source path this interface was found in.
"_source_path": src_path
}
elif 'family' in ifaces[iface]:
raise ParserError(
"Interface %s can only be defined once. "
"Re-defined in '%s'." % (iface, src_path))
ifaces[iface]['family'] = family
ifaces[iface]['method'] = method
currif = iface
elif option == "hwaddress":
ifaces[currif]['hwaddress'] = split[1]
elif option in NET_CONFIG_OPTIONS:
ifaces[currif][option] = split[1]
elif option in NET_CONFIG_COMMANDS:
if option not in ifaces[currif]:
ifaces[currif][option] = []
ifaces[currif][option].append(' '.join(split[1:]))
elif option.startswith('dns-'):
if 'dns' not in ifaces[currif]:
ifaces[currif]['dns'] = {}
if option == 'dns-search':
ifaces[currif]['dns']['search'] = []
for domain in split[1:]:
ifaces[currif]['dns']['search'].append(domain)
elif option == 'dns-nameservers':
ifaces[currif]['dns']['nameservers'] = []
for server in split[1:]:
ifaces[currif]['dns']['nameservers'].append(server)
elif option.startswith('bridge_'):
if 'bridge' not in ifaces[currif]:
ifaces[currif]['bridge'] = {}
if option in NET_CONFIG_BRIDGE_OPTIONS:
bridge_option = option.replace('bridge_', '', 1)
ifaces[currif]['bridge'][bridge_option] = split[1]
elif option == "bridge_ports":
ifaces[currif]['bridge']['ports'] = []
for iface in split[1:]:
ifaces[currif]['bridge']['ports'].append(iface)
elif option == "bridge_hw" and split[1].lower() == "mac":
ifaces[currif]['bridge']['mac'] = split[2]
elif option == "bridge_pathcost":
if 'pathcost' not in ifaces[currif]['bridge']:
ifaces[currif]['bridge']['pathcost'] = {}
ifaces[currif]['bridge']['pathcost'][split[1]] = split[2]
elif option == "bridge_portprio":
if 'portprio' not in ifaces[currif]['bridge']:
ifaces[currif]['bridge']['portprio'] = {}
ifaces[currif]['bridge']['portprio'][split[1]] = split[2]
elif option.startswith('bond-'):
if 'bond' not in ifaces[currif]:
ifaces[currif]['bond'] = {}
bond_option = option.replace('bond-', '', 1)
ifaces[currif]['bond'][bond_option] = split[1]
for iface in ifaces.keys():
if 'auto' not in ifaces[iface]:
ifaces[iface]['auto'] = False
def parse_deb_config(path):
"""Parses a debian network configuration file."""
ifaces = {}
with open(path, "r") as fp:
contents = fp.read().strip()
abs_path = os.path.abspath(path)
parse_deb_config_data(
ifaces, contents,
os.path.dirname(abs_path), abs_path)
return ifaces
def parse_net_config_data(net_config):
"""Parses the config, returns NetworkState dictionary
:param net_config: curtin network config dict
"""
state = None
if 'version' in net_config and 'config' in net_config:
ns = network_state.NetworkState(version=net_config.get('version'),
config=net_config.get('config'))
ns.parse_config()
state = ns.network_state
return state
def parse_net_config(path):
"""Parses a curtin network configuration file and
return network state"""
ns = None
net_config = config.load_config(path)
if 'network' in net_config:
ns = parse_net_config_data(net_config.get('network'))
return ns
def render_persistent_net(network_state):
''' Given state, emit udev rules to map
mac to ifname
'''
content = ""
interfaces = network_state.get('interfaces')
for iface in interfaces.values():
# for physical interfaces write out a persist net udev rule
if iface['type'] == 'physical' and \
'name' in iface and 'mac_address' in iface:
content += generate_udev_rule(iface['name'],
iface['mac_address'])
return content
# TODO: switch valid_map based on mode inet/inet6
def iface_add_subnet(iface, subnet):
content = ""
valid_map = [
'address',
'netmask',
'broadcast',
'metric',
'gateway',
'pointopoint',
'mtu',
'scope',
'dns_search',
'dns_nameservers',
]
for key, value in subnet.items():
if value and key in valid_map:
            if isinstance(value, list):
value = " ".join(value)
if '_' in key:
key = key.replace('_', '-')
content += " {} {}\n".format(key, value)
return content
# TODO: switch to valid_map for attrs
def iface_add_attrs(iface):
content = ""
ignore_map = [
'type',
'name',
'inet',
'mode',
'index',
'subnets',
]
if iface['type'] not in ['bond', 'bridge']:
ignore_map.append('mac_address')
for key, value in iface.items():
if value and key not in ignore_map:
            if isinstance(value, list):
value = " ".join(value)
content += " {} {}\n".format(key, value)
return content
def render_route(route):
content = "up route add"
mapping = {
'network': '-net',
'netmask': 'netmask',
'gateway': 'gw',
'metric': 'metric',
}
for k in ['network', 'netmask', 'gateway', 'metric']:
if k in route:
content += " %s %s" % (mapping[k], route[k])
content += '\n'
return content
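# A worked example of the mapping above:
#   render_route({'network': '10.0.0.0', 'netmask': '255.0.0.0',
#                 'gateway': '10.0.0.1'})
# returns "up route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.0.0.1\n".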
def render_interfaces(network_state):
''' Given state, emit etc/network/interfaces content '''
content = ""
interfaces = network_state.get('interfaces')
    # Apply a sort order to ensure that we write out the physical
    # interfaces first; this is critical for bonding.
order = {
'physical': 0,
'bond': 1,
'bridge': 2,
'vlan': 3,
}
content += "auto lo\niface lo inet loopback\n"
for dnskey, value in network_state.get('dns', {}).items():
if len(value):
content += " dns-{} {}\n".format(dnskey, " ".join(value))
for iface in sorted(interfaces.values(),
key=lambda k: (order[k['type']], k['name'])):
content += "auto {name}\n".format(**iface)
subnets = iface.get('subnets', {})
if subnets:
            for index, subnet in enumerate(subnets):
iface['index'] = index
iface['mode'] = subnet['type']
if iface['mode'].endswith('6'):
iface['inet'] += '6'
elif iface['mode'] == 'static' and ":" in subnet['address']:
iface['inet'] += '6'
if iface['mode'].startswith('dhcp'):
iface['mode'] = 'dhcp'
if index == 0:
content += "iface {name} {inet} {mode}\n".format(**iface)
else:
content += "auto {name}:{index}\n".format(**iface)
content += \
"iface {name}:{index} {inet} {mode}\n".format(**iface)
content += iface_add_subnet(iface, subnet)
content += iface_add_attrs(iface)
content += "\n"
else:
content += "iface {name} {inet} {mode}\n".format(**iface)
content += iface_add_attrs(iface)
content += "\n"
for route in network_state.get('routes'):
content += render_route(route)
# global replacements until v2 format
content = content.replace('mac_address', 'hwaddress')
return content
def render_network_state(target, network_state):
eni = 'etc/network/interfaces'
netrules = 'etc/udev/rules.d/70-persistent-net.rules'
eni = os.path.sep.join((target, eni,))
util.ensure_dir(os.path.dirname(eni))
with open(eni, 'w+') as f:
f.write(render_interfaces(network_state))
netrules = os.path.sep.join((target, netrules,))
util.ensure_dir(os.path.dirname(netrules))
with open(netrules, 'w+') as f:
f.write(render_persistent_net(network_state))
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/net/network_state.py 0000644 0000000 0000000 00000031535 12673006714 017666 0 ustar 0000000 0000000 # Copyright (C) 2013-2014 Canonical Ltd.
#
# Author: Ryan Harper
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
from curtin.log import LOG
import curtin.config as curtin_config
NETWORK_STATE_VERSION = 1
NETWORK_STATE_REQUIRED_KEYS = {
1: ['version', 'config', 'network_state'],
}
def from_state_file(state_file):
network_state = None
state = curtin_config.load_config(state_file)
network_state = NetworkState()
network_state.load(state)
return network_state
class NetworkState:
def __init__(self, version=NETWORK_STATE_VERSION, config=None):
self.version = version
self.config = config
self.network_state = {
'interfaces': {},
'routes': [],
'dns': {
'nameservers': [],
'search': [],
}
}
self.command_handlers = self.get_command_handlers()
def get_command_handlers(self):
METHOD_PREFIX = 'handle_'
methods = filter(lambda x: callable(getattr(self, x)) and
x.startswith(METHOD_PREFIX), dir(self))
handlers = {}
for m in methods:
key = m.replace(METHOD_PREFIX, '')
handlers[key] = getattr(self, m)
return handlers
def dump(self):
state = {
'version': self.version,
'config': self.config,
'network_state': self.network_state,
}
return curtin_config.dump_config(state)
def load(self, state):
if 'version' not in state:
LOG.error('Invalid state, missing version field')
raise Exception('Invalid state, missing version field')
required_keys = NETWORK_STATE_REQUIRED_KEYS[state['version']]
if not self.valid_command(state, required_keys):
msg = 'Invalid state, missing keys: {}'.format(required_keys)
LOG.error(msg)
raise Exception(msg)
# v1 - direct attr mapping, except version
for key in [k for k in required_keys if k not in ['version']]:
setattr(self, key, state[key])
self.command_handlers = self.get_command_handlers()
def dump_network_state(self):
return curtin_config.dump_config(self.network_state)
def parse_config(self):
# rebuild network state
for command in self.config:
handler = self.command_handlers.get(command['type'])
handler(command)
def valid_command(self, command, required_keys):
if not required_keys:
return False
found_keys = [key for key in command.keys() if key in required_keys]
return len(found_keys) == len(required_keys)
def handle_physical(self, command):
'''
command = {
'type': 'physical',
'mac_address': 'c0:d6:9f:2c:e8:80',
'name': 'eth0',
'subnets': [
{'type': 'dhcp4'}
]
}
'''
required_keys = [
'name',
]
if not self.valid_command(command, required_keys):
LOG.warn('Skipping Invalid command: {}'.format(command))
LOG.debug(self.dump_network_state())
return
interfaces = self.network_state.get('interfaces')
iface = interfaces.get(command['name'], {})
for param, val in command.get('params', {}).items():
iface.update({param: val})
iface.update({
'name': command.get('name'),
'type': command.get('type'),
'mac_address': command.get('mac_address'),
'inet': 'inet',
'mode': 'manual',
'mtu': command.get('mtu'),
'address': None,
'gateway': None,
'subnets': command.get('subnets'),
})
self.network_state['interfaces'].update({command.get('name'): iface})
self.dump_network_state()
def handle_vlan(self, command):
'''
auto eth0.222
iface eth0.222 inet static
address 10.10.10.1
netmask 255.255.255.0
vlan-raw-device eth0
'''
required_keys = [
'name',
'vlan_link',
'vlan_id',
]
if not self.valid_command(command, required_keys):
            LOG.warn('Skipping Invalid command: {}'.format(command))
            LOG.debug(self.dump_network_state())
return
interfaces = self.network_state.get('interfaces')
self.handle_physical(command)
iface = interfaces.get(command.get('name'), {})
iface['vlan-raw-device'] = command.get('vlan_link')
iface['vlan_id'] = command.get('vlan_id')
interfaces.update({iface['name']: iface})
def handle_bond(self, command):
'''
#/etc/network/interfaces
auto eth0
iface eth0 inet manual
bond-master bond0
bond-mode 802.3ad
auto eth1
iface eth1 inet manual
bond-master bond0
bond-mode 802.3ad
auto bond0
iface bond0 inet static
address 192.168.0.10
gateway 192.168.0.1
netmask 255.255.255.0
bond-slaves none
bond-mode 802.3ad
bond-miimon 100
bond-downdelay 200
bond-updelay 200
bond-lacp-rate 4
'''
required_keys = [
'name',
'bond_interfaces',
'params',
]
if not self.valid_command(command, required_keys):
            LOG.warn('Skipping Invalid command: {}'.format(command))
            LOG.debug(self.dump_network_state())
return
self.handle_physical(command)
interfaces = self.network_state.get('interfaces')
iface = interfaces.get(command.get('name'), {})
for param, val in command.get('params').items():
iface.update({param: val})
iface.update({'bond-slaves': 'none'})
self.network_state['interfaces'].update({iface['name']: iface})
# handle bond slaves
for ifname in command.get('bond_interfaces'):
if ifname not in interfaces:
cmd = {
'name': ifname,
'type': 'bond',
}
# inject placeholder
self.handle_physical(cmd)
interfaces = self.network_state.get('interfaces')
bond_if = interfaces.get(ifname)
bond_if['bond-master'] = command.get('name')
# copy in bond config into slave
for param, val in command.get('params').items():
bond_if.update({param: val})
self.network_state['interfaces'].update({ifname: bond_if})
def handle_bridge(self, command):
'''
auto br0
iface br0 inet static
address 10.10.10.1
netmask 255.255.255.0
bridge_ports eth0 eth1
bridge_stp off
bridge_fd 0
bridge_maxwait 0
bridge_params = [
"bridge_ports",
"bridge_ageing",
"bridge_bridgeprio",
"bridge_fd",
"bridge_gcint",
"bridge_hello",
"bridge_hw",
"bridge_maxage",
"bridge_maxwait",
"bridge_pathcost",
"bridge_portprio",
"bridge_stp",
"bridge_waitport",
]
'''
required_keys = [
'name',
'bridge_interfaces',
'params',
]
if not self.valid_command(command, required_keys):
            LOG.warn('Skipping Invalid command: {}'.format(command))
            LOG.debug(self.dump_network_state())
return
# find one of the bridge port ifaces to get mac_addr
# handle bridge_slaves
interfaces = self.network_state.get('interfaces')
for ifname in command.get('bridge_interfaces'):
if ifname in interfaces:
continue
cmd = {
'name': ifname,
}
# inject placeholder
self.handle_physical(cmd)
interfaces = self.network_state.get('interfaces')
self.handle_physical(command)
iface = interfaces.get(command.get('name'), {})
iface['bridge_ports'] = command['bridge_interfaces']
for param, val in command.get('params').items():
iface.update({param: val})
interfaces.update({iface['name']: iface})
def handle_nameserver(self, command):
required_keys = [
'address',
]
if not self.valid_command(command, required_keys):
            LOG.warn('Skipping Invalid command: {}'.format(command))
            LOG.debug(self.dump_network_state())
return
dns = self.network_state.get('dns')
if 'address' in command:
addrs = command['address']
            if not isinstance(addrs, list):
addrs = [addrs]
for addr in addrs:
dns['nameservers'].append(addr)
if 'search' in command:
paths = command['search']
if not isinstance(paths, list):
paths = [paths]
for path in paths:
dns['search'].append(path)
def handle_route(self, command):
required_keys = [
'destination',
]
if not self.valid_command(command, required_keys):
            LOG.warn('Skipping Invalid command: {}'.format(command))
            LOG.debug(self.dump_network_state())
return
routes = self.network_state.get('routes')
network, cidr = command['destination'].split("/")
netmask = cidr2mask(int(cidr))
route = {
'network': network,
'netmask': netmask,
'gateway': command.get('gateway'),
'metric': command.get('metric'),
}
routes.append(route)
def cidr2mask(cidr):
mask = [0, 0, 0, 0]
    for i in range(cidr):
idx = int(i / 8)
mask[idx] = mask[idx] + (1 << (7 - i % 8))
return ".".join([str(x) for x in mask])
if __name__ == '__main__':
import sys
import random
from curtin import net
def load_config(nc):
version = nc.get('version')
config = nc.get('config')
return (version, config)
def test_parse(network_config):
(version, config) = load_config(network_config)
ns1 = NetworkState(version=version, config=config)
ns1.parse_config()
random.shuffle(config)
ns2 = NetworkState(version=version, config=config)
ns2.parse_config()
print("----NS1-----")
print(ns1.dump_network_state())
print()
print("----NS2-----")
print(ns2.dump_network_state())
print("NS1 == NS2 ?=> {}".format(
ns1.network_state == ns2.network_state))
eni = net.render_interfaces(ns2.network_state)
print(eni)
udev_rules = net.render_persistent_net(ns2.network_state)
print(udev_rules)
def test_dump_and_load(network_config):
print("Loading network_config into NetworkState")
(version, config) = load_config(network_config)
ns1 = NetworkState(version=version, config=config)
ns1.parse_config()
print("Dumping state to file")
ns1_dump = ns1.dump()
ns1_state = "/tmp/ns1.state"
with open(ns1_state, "w+") as f:
f.write(ns1_dump)
print("Loading state from file")
ns2 = from_state_file(ns1_state)
print("NS1 == NS2 ?=> {}".format(
ns1.network_state == ns2.network_state))
def test_output(network_config):
(version, config) = load_config(network_config)
ns1 = NetworkState(version=version, config=config)
ns1.parse_config()
random.shuffle(config)
ns2 = NetworkState(version=version, config=config)
ns2.parse_config()
print("NS1 == NS2 ?=> {}".format(
ns1.network_state == ns2.network_state))
eni_1 = net.render_interfaces(ns1.network_state)
eni_2 = net.render_interfaces(ns2.network_state)
print(eni_1)
print(eni_2)
print("eni_1 == eni_2 ?=> {}".format(
eni_1 == eni_2))
y = curtin_config.load_config(sys.argv[1])
network_config = y.get('network')
test_parse(network_config)
test_dump_and_load(network_config)
test_output(network_config)
curtin-0.1.0~bzr365/curtin/reporter/__init__.py 0000644 0000000 0000000 00000003430 12673006714 017601 0 ustar 0000000 0000000 # Copyright (C) 2014 Canonical Ltd.
#
# Author: Newell Jensen
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
"""Reporter Abstract Base Class."""
from .registry import DictRegistry
from .handlers import available_handlers
DEFAULT_CONFIG = {
'logging': {'type': 'log'},
}
def update_configuration(config):
"""Update the instanciated_handler_registry.
:param config:
The dictionary containing changes to apply. If a key is given
with a False-ish value, the registered handler matching that name
will be unregistered.
"""
for handler_name, handler_config in config.items():
if not handler_config:
instantiated_handler_registry.unregister_item(
handler_name, force=True)
continue
handler_config = handler_config.copy()
cls = available_handlers.registered_items[handler_config.pop('type')]
instantiated_handler_registry.unregister_item(handler_name)
instance = cls(**handler_config)
instantiated_handler_registry.register_item(handler_name, instance)
instantiated_handler_registry = DictRegistry()
update_configuration(DEFAULT_CONFIG)
curtin-0.1.0~bzr365/curtin/reporter/events.py 0000644 0000000 0000000 00000020524 12673006714 017351 0 ustar 0000000 0000000 # Copyright (C) 2015 Canonical Ltd.
#
# Author: Scott Moser
#
# Curtin is free software: you can redistribute it and/or modify it under
# the terms of the GNU Affero General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Curtin is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
# FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for
# more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Curtin. If not, see <http://www.gnu.org/licenses/>.
"""
curtin reporting framework
The reporting framework is intended to allow all parts of curtin to
report events in a structured manner.
"""
import base64
import os.path
import time
from . import instantiated_handler_registry
FINISH_EVENT_TYPE = 'finish'
START_EVENT_TYPE = 'start'
RESULT_EVENT_TYPE = 'result'
DEFAULT_EVENT_ORIGIN = 'curtin'
class _nameset(set):
def __getattr__(self, name):
if name in self:
return name
raise AttributeError("%s not a valid value" % name)
status = _nameset(("SUCCESS", "WARN", "FAIL"))
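# e.g. status.SUCCESS == 'SUCCESS', while an unknown name such as
# status.PENDING raises AttributeError (see _nameset.__getattr__ above).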
class ReportingEvent(object):
"""Encapsulation of event formatting."""
    def __init__(self, event_type, name, description,
                 origin=DEFAULT_EVENT_ORIGIN, timestamp=None):
        self.event_type = event_type
        self.name = name
        self.description = description
        self.origin = origin
        # a default of time.time() in the signature would be evaluated only
        # once, at import time, stamping every event with module-load time.
        self.timestamp = timestamp if timestamp is not None else time.time()
def as_string(self):
"""The event represented as a string."""
return '{0}: {1}: {2}'.format(
self.event_type, self.name, self.description)
def as_dict(self):
"""The event represented as a dictionary."""
return {'name': self.name, 'description': self.description,
'event_type': self.event_type, 'origin': self.origin,
'timestamp': self.timestamp}
class FinishReportingEvent(ReportingEvent):
def __init__(self, name, description, result=status.SUCCESS,
post_files=None):
super(FinishReportingEvent, self).__init__(
FINISH_EVENT_TYPE, name, description)
self.result = result
if post_files is None:
post_files = []
self.post_files = post_files
if result not in status:
raise ValueError("Invalid result: %s" % result)
def as_string(self):
return '{0}: {1}: {2}: {3}'.format(
self.event_type, self.name, self.result, self.description)
def as_dict(self):
"""The event represented as json friendly."""
data = super(FinishReportingEvent, self).as_dict()
data['result'] = self.result
if self.post_files:
data['files'] = _collect_file_info(self.post_files)
return data
def report_event(event):
"""Report an event to all registered event handlers.
This should generally be called via one of the other functions in
the reporting module.
    :param event:
        The event to report; an instance of :py:class:`ReportingEvent`
        (or a subclass such as :py:class:`FinishReportingEvent`).
"""
for _, handler in instantiated_handler_registry.registered_items.items():
handler.publish_event(event)
def report_finish_event(event_name, event_description,
result=status.SUCCESS, post_files=None):
"""Report a "finish" event.
See :py:func:`.report_event` for parameter details.
"""
event = FinishReportingEvent(event_name, event_description, result,
post_files=post_files)
return report_event(event)
def report_start_event(event_name, event_description):
"""Report a "start" event.
:param event_name:
The name of the event; this should be a topic which events would
share (e.g. it will be the same for start and finish events).
:param event_description:
A human-readable description of the event that has occurred.
"""
event = ReportingEvent(START_EVENT_TYPE, event_name, event_description)
return report_event(event)
class ReportEventStack(object):
"""Context Manager for using :py:func:`report_event`
This enables calling :py:func:`report_start_event` and
:py:func:`report_finish_event` through a context manager.
:param name:
the name of the event
:param description:
the event's description, passed on to :py:func:`report_start_event`
:param message:
        the description to use for the finish event; defaults to the
        value of ``description``.
:param parent:
:type parent: :py:class:ReportEventStack or None
The parent of this event. The parent is populated with
        results of all its children. The name used in reporting
        is <parent.name>/<name>.
:param reporting_enabled:
Indicates if reporting events should be generated.
If not provided, defaults to the parent's value, or True if no parent
is provided.
:param result_on_exception:
The result value to set if an exception is caught. default
value is FAIL.
"""
def __init__(self, name, description, message=None, parent=None,
reporting_enabled=None, result_on_exception=status.FAIL,
post_files=None):
self.parent = parent
self.name = name
self.description = description
self.message = message
self.result_on_exception = result_on_exception
self.result = status.SUCCESS
if post_files is None:
post_files = []
self.post_files = post_files
        # use the parent's reporting value if not provided
if reporting_enabled is None:
if parent:
reporting_enabled = parent.reporting_enabled
else:
reporting_enabled = True
self.reporting_enabled = reporting_enabled
if parent:
self.fullname = '/'.join((parent.fullname, name,))
else:
self.fullname = self.name
self.children = {}
def __repr__(self):
return ("ReportEventStack(%s, %s, reporting_enabled=%s)" %
(self.name, self.description, self.reporting_enabled))
def __enter__(self):
self.result = status.SUCCESS
if self.reporting_enabled:
report_start_event(self.fullname, self.description)
if self.parent:
self.parent.children[self.name] = (None, None)
return self
def _childrens_finish_info(self):
for cand_result in (status.FAIL, status.WARN):
for name, (value, msg) in self.children.items():
if value == cand_result:
return (value, self.message)
return (self.result, self.message)
@property
def result(self):
return self._result
@result.setter
def result(self, value):
if value not in status:
raise ValueError("'%s' not a valid result" % value)
self._result = value
@property
def message(self):
if self._message is not None:
return self._message
return self.description
@message.setter
def message(self, value):
self._message = value
def _finish_info(self, exc):
# return tuple of description, and value
# explicitly handle sys.exit(0) as not an error
        if exc and not (isinstance(exc, SystemExit) and exc.code == 0):
return (self.result_on_exception, self.message)
return self._childrens_finish_info()
def __exit__(self, exc_type, exc_value, traceback):
(result, msg) = self._finish_info(exc_value)
if self.parent:
self.parent.children[self.name] = (result, msg)
if self.reporting_enabled:
report_finish_event(self.fullname, msg, result,
post_files=self.post_files)
def _collect_file_info(files):
if not files:
return None
ret = []
for fname in files:
if not os.path.isfile(fname):
content = None
else:
with open(fname, "rb") as fp:
content = base64.b64encode(fp.read()).decode()
ret.append({'path': fname, 'content': content,
'encoding': 'base64'})
return ret
# vi: ts=4 expandtab syntax=python
curtin-0.1.0~bzr365/curtin/reporter/handlers.py 0000644 0000000 0000000 00000004732 12673006714 017650 0 ustar 0000000 0000000 # vi: ts=4 expandtab
import abc
from .registry import DictRegistry
from .. import url_helper
from .. import log as logging
LOG = logging.getLogger(__name__)
class ReportingHandler(object):
"""Base class for report handlers.
Implement :meth:`~publish_event` for controlling what
the handler does with an event.
"""
@abc.abstractmethod
def publish_event(self, event):
"""Publish an event to the ``INFO`` log level."""
class LogHandler(ReportingHandler):
"""Publishes events to the cloud-init log at the ``INFO`` log level."""
def __init__(self, level="DEBUG"):
super(LogHandler, self).__init__()
        if not isinstance(level, int):
            input_level = level
            try:
                level = getattr(logging, level.upper())
            except AttributeError:
                LOG.warn("invalid level '%s', using WARN", input_level)
                level = logging.WARN
self.level = level
def publish_event(self, event):
"""Publish an event to the ``INFO`` log level."""
logger = logging.getLogger(
'.'.join(['cloudinit', 'reporting', event.event_type, event.name]))
logger.log(self.level, event.as_string())
class PrintHandler(ReportingHandler):
"""Print the event as a string."""
def publish_event(self, event):
print(event.as_string())
class WebHookHandler(ReportingHandler):
def __init__(self, endpoint, consumer_key=None, token_key=None,
token_secret=None, consumer_secret=None, timeout=None,
retries=None):
super(WebHookHandler, self).__init__()
self.oauth_helper = url_helper.OauthUrlHelper(
consumer_key=consumer_key, token_key=token_key,
token_secret=token_secret, consumer_secret=consumer_secret)
self.endpoint = endpoint
self.timeout = timeout
self.retries = retries
self.headers = {'Content-Type': 'application/json'}
def publish_event(self, event):
try:
return self.oauth_helper.geturl(
url=self.endpoint, data=event.as_dict(),
headers=self.headers, retries=self.retries)
except Exception as e:
LOG.warn("failed posting event: %s [%s]" % (event.as_string(), e))
available_handlers = DictRegistry()
available_handlers.register_item('log', LogHandler)
available_handlers.register_item('print', PrintHandler)
available_handlers.register_item('webhook', WebHookHandler)
curtin-0.1.0~bzr365/curtin/reporter/legacy/ 0000755 0000000 0000000 00000000000 12673006714 016734 5 ustar 0000000 0000000 curtin-0.1.0~bzr365/curtin/reporter/registry.py 0000644 0000000 0000000 00000002012 12673006714 017705 0 ustar 0000000 0000000 # Copyright 2015 Canonical Ltd.
# This file is part of cloud-init. See LICENCE file for license information.
#
# vi: ts=4 expandtab
import copy
class DictRegistry(object):
"""A simple registry for a mapping of objects."""
def __init__(self):
self.reset()
def reset(self):
self._items = {}
def register_item(self, key, item):
"""Add item to the registry."""
if key in self._items:
raise ValueError(
'Item already registered with key {0}'.format(key))
self._items[key] = item
def unregister_item(self, key, force=True):
"""Remove item from the registry."""
if key in self._items:
del self._items[key]
elif not force:
raise KeyError("%s: key not present to unregister" % key)
@property
def registered_items(self):
"""All the items that have been registered.
This cannot be used to modify the contents of the registry.
"""
return copy.copy(self._items)
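# A minimal usage sketch:
#   registry = DictRegistry()
#   registry.register_item('log', object())
#   registry.registered_items   # {'log': <object ...>} -- a shallow copy
#   registry.unregister_item('log')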
curtin-0.1.0~bzr365/curtin/reporter/legacy/__init__.py 0000644 0000000 0000000 00000002537 12673006714 021054 0 ustar 0000000 0000000 from curtin.util import (
try_import_module,
)
from abc import (
ABCMeta,
abstractmethod,
)
from curtin.log import LOG
class BaseReporter:
"""Skeleton for a report."""
__metaclass__ = ABCMeta
@abstractmethod
def report_success(self):
"""Report installation success."""
@abstractmethod
def report_failure(self, failure):
"""Report installation failure."""
class EmptyReporter(BaseReporter):
def report_success(self):
"""Empty."""
def report_failure(self, failure):
"""Empty."""
class LoadReporterException(Exception):
"""Raise exception if desired reporter not loaded."""
pass
def load_reporter(config):
"""Loads and returns reporter instance stored in config file."""
reporter = config.get('reporter')
if reporter is None:
LOG.info("'reporter' not found in config file.")
return EmptyReporter()
name, options = reporter.popitem()
module = try_import_module('curtin.reporter.legacy.%s' % name)
if module is None:
        LOG.error(
            "Module for %s reporter could not be loaded." % name)
return EmptyReporter()
try:
return module.load_factory(options)
except LoadReporterException:
LOG.error(
"Failed loading %s reporter with %s" % (name, options))
return EmptyReporter()
curtin-0.1.0~bzr365/curtin/reporter/legacy/maas.py 0000644 0000000 0000000 00000007761 12673006714 020242 0 ustar 0000000 0000000 from curtin import url_helper
from . import (BaseReporter, LoadReporterException)
import mimetypes
import os.path
import random
import string
import sys
class MAASReporter(BaseReporter):
def __init__(self, config):
"""Load config dictionary and initialize object."""
self.url = config['url']
self.urlhelper = url_helper.OauthUrlHelper(
consumer_key=config.get('consumer_key'),
token_key=config.get('token_key'),
token_secret=config.get('token_secret'),
consumer_secret='',
skew_data_file="/run/oauth_skew.json")
self.files = []
self.retries = config.get('retries', [1, 1, 2, 4, 8, 16, 32])
def report_success(self):
"""Report installation success."""
status = "OK"
message = "Installation succeeded."
self.report(status, message, files=self.files)
def report_failure(self, message):
"""Report installation failure."""
status = "FAILED"
self.report(status, message, files=self.files)
def encode_multipart_data(self, data, files):
"""Create a MIME multipart payload from L{data} and L{files}.
@param data: A mapping of names (ASCII strings) to data (byte string).
@param files: A mapping of names (ASCII strings) to file objects ready
to be read.
        @return: A 2-tuple of C{(body, headers)}, where C{body} is a byte
string and C{headers} is a dict of headers to add to the enclosing
request in which this payload will travel.
"""
boundary = self._random_string(30)
lines = []
for name in data:
lines.extend(self._encode_field(name, data[name], boundary))
for name in files:
lines.extend(self._encode_file(name, files[name], boundary))
lines.extend(('--%s--' % boundary, ''))
body = '\r\n'.join(lines)
headers = {
'content-type': 'multipart/form-data; boundary=' + boundary,
'content-length': "%d" % len(body),
}
return body, headers
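    # For illustration (hypothetical): encode_multipart_data({'status': 'OK'},
    # {}) yields a body containing a single form-data field named 'status',
    # plus matching 'content-type: multipart/form-data; boundary=...' and
    # 'content-length' headers.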
def report(self, status, message=None, files=None):
"""Send the report."""
params = {}
params['status'] = status
if message is not None:
params['error'] = message
if files is None:
files = []
install_files = {}
for fpath in files:
install_files[os.path.basename(fpath)] = open(fpath, "r")
data, headers = self.encode_multipart_data(params, install_files)
msg = ""
if not isinstance(data, bytes):
data = data.encode()
try:
payload = self.urlhelper.geturl(
self.url, data=data, headers=headers,
retries=self.retries)
if payload != b'OK':
raise TypeError("Unexpected result from call: %s" % payload)
else:
msg = "Success"
except url_helper.UrlError as exc:
msg = str(exc)
sys.stderr.write("%s\n" % msg)
def _encode_field(self, field_name, data, boundary):
return (
'--' + boundary,
'Content-Disposition: form-data; name="%s"' % field_name,
'', str(data),
)
def _encode_file(self, name, fileObj, boundary):
return (
'--' + boundary,
'Content-Disposition: form-data; name="%s"; filename="%s"'
% (name, name),
'Content-Type: %s' % self._get_content_type(name),
'',
fileObj.read(),
)
    def _random_string(self, length):
        # use range(length), not range(length + 1), so that exactly
        # `length` characters are returned, as the name implies
        return ''.join(random.choice(string.ascii_letters)
                       for ii in range(length))
def _get_content_type(self, filename):
return mimetypes.guess_type(filename)[0] or 'application/octet-stream'
def load_factory(options):
try:
return MAASReporter(options)
except Exception:
raise LoadReporterException
curtin-0.1.0~bzr365/debian/changelog.trunk 0000644 0000000 0000000 00000000224 12673006714 016556 0 ustar 0000000 0000000 curtin (0.1.0~bzrREVNO-0ubuntu1) UNRELEASED; urgency=low
* Initial release
-- Scott Moser Mon, 29 Jul 2013 16:12:09 -0400
curtin-0.1.0~bzr365/debian/compat 0000644 0000000 0000000 00000000002 12673006714 014742 0 ustar 0000000 0000000 7
curtin-0.1.0~bzr365/debian/control 0000644 0000000 0000000 00000004254 12673006714 015154 0 ustar 0000000 0000000 Source: curtin
Section: admin
Priority: extra
Standards-Version: 3.9.6
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Build-Depends: debhelper (>= 7),
dh-python,
pep8,
pyflakes,
python-all,
python-mock,
python-nose,
python-oauthlib,
python-setuptools,
python-yaml,
python3,
python3-mock,
python3-nose,
python3-oauthlib,
python3-pyflakes | pyflakes (<< 1.1.0-2),
python3-setuptools,
python3-yaml
Homepage: http://launchpad.net/curtin
X-Python3-Version: >= 3.2
Package: curtin
Architecture: all
Priority: extra
Depends: btrfs-tools,
dosfstools,
file,
gdisk,
lvm2,
mdadm,
parted,
python3-curtin (= ${binary:Version}),
udev,
xfsprogs,
${misc:Depends}
Description: Library and tools for the curtin installer
This package provides the curtin installer.
.
Curtin is an installer that is blunt, brief, snappish, snippety and
unceremonious.
Package: curtin-common
Architecture: all
Priority: extra
Depends: util-linux (>= 2.20.1-1ubuntu3), ${misc:Depends}, ${python3:Depends}
Conflicts: curtin (<= 0.1.0~bzr54-0ubuntu1)
Description: Library and tools for curtin installer
This package contains utilities for the curtin installer.
Package: python-curtin
Section: python
Architecture: all
Priority: extra
Depends: curl | wget,
curtin-common (= ${binary:Version}),
util-linux (>= 2.20.1-1ubuntu3),
${misc:Depends},
${python:Depends}
Description: Library and tools for curtin installer
This package provides python library for use by curtin.
Package: python3-curtin
Section: python
Architecture: all
Priority: extra
Conflicts: curtin (<= 0.1.0~bzr54-0ubuntu1)
Depends: curl | wget,
curtin-common (= ${binary:Version}),
util-linux (>= 2.20.1-1ubuntu3),
${misc:Depends},
${python3:Depends}
Description: Library and tools for curtin installer
This package provides python3 library for use by curtin.
curtin-0.1.0~bzr365/debian/copyright 0000644 0000000 0000000 00000001140 12673006714 015473 0 ustar 0000000 0000000 Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: curtin
Upstream-Contact: Scott Moser
Source: https://launchpad.net/curtin
Files: *
Copyright: 2013, Canonical Ltd.
License: AGPL-3
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
.
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
.
The complete text of the AGPL version 3 can be seen in
http://www.gnu.org/licenses/agpl-3.0.html
curtin-0.1.0~bzr365/debian/curtin-common.install 0000644 0000000 0000000 00000000031 12673006714 017720 0 ustar 0000000 0000000 usr/lib/curtin/helpers/*
curtin-0.1.0~bzr365/debian/curtin.install 0000644 0000000 0000000 00000000012 12673006714 016431 0 ustar 0000000 0000000 usr/bin/*
curtin-0.1.0~bzr365/debian/python-curtin.install 0000644 0000000 0000000 00000000045 12673006714 017756 0 ustar 0000000 0000000 usr/lib/python2*/*-packages/curtin/*
curtin-0.1.0~bzr365/debian/python3-curtin.install 0000644 0000000 0000000 00000000045 12673006714 020041 0 ustar 0000000 0000000 usr/lib/python3*/*-packages/curtin/*
curtin-0.1.0~bzr365/debian/rules 0000755 0000000 0000000 00000000644 12673006714 014630 0 ustar 0000000 0000000 #!/usr/bin/make -f
PYVERS := $(shell pyversions -r)
PY3VERS := $(shell py3versions -r)
%:
dh $@ --with=python2,python3
override_dh_auto_install:
dh_auto_install
set -ex; for python in $(PY3VERS) $(PYVERS); do \
$$python setup.py build --executable=/usr/bin/python && \
$$python setup.py install --root=$(CURDIR)/debian/tmp --install-layout=deb; \
done
chmod 755 $(CURDIR)/debian/tmp/usr/lib/curtin/helpers/*
curtin-0.1.0~bzr365/debian/source/ 0000755 0000000 0000000 00000000000 12673006714 015044 5 ustar 0000000 0000000 curtin-0.1.0~bzr365/debian/source/format 0000644 0000000 0000000 00000000014 12673006714 016252 0 ustar 0000000 0000000 3.0 (quilt)
curtin-0.1.0~bzr365/doc/Makefile 0000644 0000000 0000000 00000012674 12673006714 014541 0 ustar 0000000 0000000 # Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make ' where is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/curtin.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/curtin.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/curtin"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/curtin"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
curtin-0.1.0~bzr365/doc/conf.py 0000644 0000000 0000000 00000017032 12673006714 014371 0 ustar 0000000 0000000 # -*- coding: utf-8 -*-
#
# curtin documentation build configuration file, created by
# sphinx-quickstart on Thu May 30 16:03:34 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'curtin'
copyright = u'2013, Scott Moser'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.3'
# The full version, including alpha/beta/rc tags.
release = '0.3'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'classic'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# " v documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'curtindoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'curtin.tex', u'curtin Documentation',
u'Scott Moser', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'curtin', u'curtin Documentation',
[u'Scott Moser'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'curtin', u'curtin Documentation',
u'Scott Moser', 'curtin', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
curtin-0.1.0~bzr365/doc/devel/ 0000755 0000000 0000000 00000000000 12673006714 014166 5 ustar 0000000 0000000 curtin-0.1.0~bzr365/doc/index.rst 0000644 0000000 0000000 00000001162 12673006714 014730 0 ustar 0000000 0000000 .. curtin documentation master file, created by
sphinx-quickstart on Thu May 30 16:03:34 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to curtin's documentation!
==================================
This is 'curtin', the curt installer. It is blunt, brief, snappish, snippety and unceremonious. Its goal is to install an operating system as quickly as possible.
Contents:
.. toctree::
:maxdepth: 2
topics/overview
topics/reporting
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
curtin-0.1.0~bzr365/doc/topics/ 0000755 0000000 0000000 00000000000 12673006714 014370 5 ustar 0000000 0000000 curtin-0.1.0~bzr365/doc/devel/README-maas-image.txt 0000644 0000000 0000000 00000002742 12673006714 017670 0 ustar 0000000 0000000 A maas image is also convenient for curtin development.
Using a maas image requires root access to convert the
root-image.gz file into a -root.tar.gz file that curtin can install.
But other than that, the maas image is very convenient.
Follow doc/devel/README.txt, but instead of the 'get cloud image' section,
do the following.
## this gets root image url that you could find from traversing
## http://maas.ubuntu.com/images/ephemeral-v2/daily/xenial/amd64/
rel=xenial
streams_url="http://maas.ubuntu.com/images/ephemeral-v2/daily/streams/v1/index.json"
root_img_url=$(sstream-query \
--output-format="%(item_url)s" --max=1 \
${streams_url} release=$rel ftype=root-image.gz arch=amd64 krel=$rel)
root_img_gz="${root_img_url##*/}"
root_img=${root_img_gz%.gz}
## download the root_img_url to local .gz file
[ -f "$root_img" ] ||
{ wget "${root_img_url}" -O "${root_img_gz}.tmp" && mv "${root_img_gz}.tmp" "$root_img_gz" ; }
## maas2roottar to
## a.) create the root_img file [root-image]
## b.) extract kernel and initramfs [root-image-kernel, root-image-initrd]
## c.) convert maas image to installable tarball [root-image.tar.gz]
## and pull the kernel and initramfs out
[ -f "$root_img" ] || ./tools/maas2roottar -vv --krd "${root_img_gz}"
## launch it using --kernel, --initrd
./tools/launch \
./root-image --kernel=./root-image-kernel --initrd=./root-image-initrd \
-v --publish ./root-image.tar.gz -- \
curtin -vv install "PUBURL/root-image.tar.gz"
curtin-0.1.0~bzr365/doc/devel/README-vmtest.txt 0000644 0000000 0000000 00000015363 12673006714 017214 0 ustar 0000000 0000000 == Background ==
Curtin includes a mechanism called 'vmtest' that allows it to actually
do installs and validate a number of configurations.
The general flow of the vmtests is:
1. each test has an associated yaml config file for curtin in examples/tests
2. uses curtin-pack to create the user-data for cloud-init to trigger install
3. create and install a system using 'tools/launch'.
3.1 The install environment is booted from a maas ephemeral image.
3.2 kernel & initrd used are from maas images (not part of the image)
3.3 network by default is handled via user networking
3.4 It creates all empty disks required
3.5 cloud-init datasource is provided by launch
a) like: ds=nocloud-net;seedfrom=http://10.7.0.41:41518/
provided by python webserver start_http
b) via -drive file=/tmp/launch.8VOiOn/seed.img,if=virtio,media=cdrom
as a seed disk (if booted without external kernel)
3.6 dependencies and other preparations are installed at the beginning by
curtin inside the ephemeral image prior to configuring the target
4. power off the system.
5. configure a 'NoCloud' datasource seed image that provides scripts that
will run on first boot.
5.1 this will contain all our code to gather health data on the install
5.2 by cloud-init design this runs only once per instance, if you start
the system again this won't be called again
6. boot the installed system with 'tools/xkvm'.
6.1 reuses the disks that were installed/configured in the former steps
6.2 also adds an output disk
6.3 additionally the seed image for the data gathering is added
6.4 On this boot it will run the provided scripts, write their output to a
"data" disk and then shut itself down.
7. extract the data from the output disk
8. vmtest python code then verifies that the output is as expected.
== Debugging ==
At 3.1
- one can pull data out of the maas image with
sudo mount-image-callback your.img -- sh -c 'COMMAND'
e.g. sudo mount-image-callback your.img -- sh -c 'cp $MOUNTPOINT/boot/* .'
At step 3.6 -> 4.
- tools/launch can be called in a way that gives you console access;
to do so, call tools/launch as usual but drop the -serial=x parameter.
One might want to change "'power_state': {'mode': 'poweroff'}" to avoid
the automatic shutdown before getting control.
Replace the directory usually seen in the launch calls with a clean, fresh
directory.
- In /curtin curtin and its config can be found
- if the system gets that far cloud-init will create a user ubuntu/passw0rd
- otherwise one can use a cloud-image from https://cloud-images.ubuntu.com/
and add a backdoor user via
bzr branch lp:~maas-maintainers/maas/backdoor-image backdoor-image
sudo ./backdoor-image -v --user=<user> --password-auth --password=<password> IMG
At step 6 -> 7
- You might want to keep all the temporary images around.
To do so you can set CURTIN_VMTEST_KEEP_DATA_PASS=all:
export CURTIN_VMTEST_KEEP_DATA_PASS=all CURTIN_VMTEST_KEEP_DATA_FAIL=all
That will keep the /tmp/tmpXXXXX directories and all files in them for
further inspection.
At step 7
- You might want to take a look at the output disk yourself.
It is a normal qcow image, so one can use mount-image-callback as described
above
- to invoke xkvm on your own, take the command you see in the output,
remove the "-serial ..." argument and add -nographic instead.
For a graphical console, one can instead add --vnc 127.0.0.1:1
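For example, to pull everything off the output disk for inspection
(a sketch; the output disk's file name under the test's temporary
directory may differ):
  mkdir -p ./collected
  sudo mount-image-callback output-disk.img -- \
     sh -c 'cp -a $MOUNTPOINT/* ./collected/'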
== Setup ==
In order to run vmtest you'll need some dependencies. To get them, you
can run:
make vmtest-deps
That will install all necessary dependencies.
== Running ==
Running tests is done most simply by:
make vmtest
If you wish to run all tests in test_network.py, do so with:
sudo PATH=$PWD/tools:$PATH nosetests3 tests/vmtests/test_network.py
Or run a single test with:
sudo PATH=$PWD/tools:$PATH nosetests3 tests/vmtests/test_network.py:WilyTestBasic
Note:
* currently, the tests have to run as root. The reason for this is that
the kernel and initramfs to boot are extracted from the maas ephemeral
image. This should be fixed at some point, and then 'make vmtest'
will no longer require root.
The tests themselves don't actually have to run as root, but the
test setup does.
* the 'tools' directory must be in your path.
* the test will set apt_proxy in the guests to the value of the
'apt_proxy' environment variable. If that is not set it will
look at the host's apt config and read 'Acquire::HTTP::Proxy'.
== Environment Variables ==
Some environment variables affect the running of vmtest
* apt_proxy:
test will set apt_proxy in the guests to the value of 'apt_proxy'.
If that is not set it will look at the host's apt config and read
'Acquire::HTTP::Proxy'
* CURTIN_VMTEST_KEEP_DATA_PASS CURTIN_VMTEST_KEEP_DATA_FAIL:
default:
CURTIN_VMTEST_KEEP_DATA_PASS=none
CURTIN_VMTEST_KEEP_DATA_FAIL=all
These 2 variables determine what portions of the temporary
test data are kept.
The variables contain a comma ',' delimited list of directories
that should be kept in the case of pass or fail. Additionally,
the values 'all' and 'none' are accepted.
Each vmtest that runs has its own sub-directory under the top level
CURTIN_VMTEST_TOPDIR. In that directory are directories:
boot: inputs to the system boot (after install)
install: install phase related files
disks: the disks used for installation and boot
logs: install and boot logs
collect: data collected by the boot phase
* CURTIN_VMTEST_TOPDIR: default $TMPDIR/vmtest-<timestamp>
vmtest puts all test data under this value. By default, it creates
a directory in TMPDIR (/tmp) named with a timestamp, as "vmtest-<timestamp>".
If you set this value, you must ensure that the directory is either
non-existent or clean.
* CURTIN_VMTEST_LOG: default $TMPDIR/vmtest-<timestamp>.log
vmtest writes extended log information to this file.
The default puts the log alongside the TOPDIR.
* CURTIN_VMTEST_IMAGE_SYNC: default false (boolean)
if set to true, each run will attempt a sync of images.
If you want to make sure images are always up to date, then set to true.
* CURTIN_VMTEST_BRIDGE: default 'user'
the network devices will be attached to this bridge. The default is
'user', which means to use qemu user mode networking. Set it to
'virbr0' or 'lxcbr0' to use those bridges and then be able to ssh
in directly.
* IMAGE_DIR: default /srv/images
vmtest keeps a mirror of maas ephemeral images in this directory.
* IMAGES_TO_KEEP: default 1
keep this number of images of each release in the IMAGE_DIR.
Environment 'boolean' values:
For boolean environment variables the value is considered True
if it is any value other than case insensitive 'false', '' or "0"
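As a concrete example, a single-test run that keeps no data on pass,
keeps everything on fail, and uses a fixed top directory might look
like this (a sketch; adjust paths to your checkout):
  export CURTIN_VMTEST_KEEP_DATA_PASS=none
  export CURTIN_VMTEST_KEEP_DATA_FAIL=all
  export CURTIN_VMTEST_TOPDIR=/var/tmp/vmtest-$(date +%s)
  sudo -E PATH=$PWD/tools:$PATH nosetests3 tests/vmtests/test_network.py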
curtin-0.1.0~bzr365/doc/devel/README.txt 0000644 0000000 0000000 00000004174 12673006714 015672 0 ustar 0000000 0000000 ## curtin development ##
This document describes how to use kvm and ubuntu cloud images
to develop curtin or test install configurations inside kvm.
## get some dependencies ##
sudo apt-get -qy install kvm libvirt-bin cloud-utils bzr
## get cloud image to boot (-disk1.img) and one to install (-root.tar.gz)
mkdir -p ~/download
DLDIR=$( cd ~/download && pwd )
rel="trusty"
arch=amd64
burl="http://cloud-images.ubuntu.com/$rel/current/"
for f in $rel-server-cloudimg-${arch}-root.tar.gz $rel-server-cloudimg-${arch}-disk1.img; do
wget "$burl/$f" -O $DLDIR/$f; done
( cd $DLDIR && qemu-img convert -O qcow2 $rel-server-cloudimg-${arch}-disk1.img $rel-server-cloudimg-${arch}-disk1.qcow2)
BOOTIMG="$DLDIR/$rel-server-cloudimg-${arch}-disk1.qcow2"
ROOTTGZ="$DLDIR/$rel-server-cloudimg-${arch}-root.tar.gz"
## get curtin
mkdir -p ~/src
bzr init-repo ~/src/curtin
( cd ~/src/curtin && bzr branch lp:curtin trunk.dist )
( cd ~/src/curtin && bzr branch trunk.dist trunk )
## work with curtin
cd ~/src/curtin/trunk
# use 'launch' to launch a kvm instance with user data to pack
# up local curtin and run it inside instance.
./tools/launch $BOOTIMG --publish $ROOTTGZ -- curtin install "PUBURL/${ROOTTGZ##*/}"
## notes about 'launch' ##
* launch has --help so you can see that for some info.
* '--publish' adds a web server at ${HTTP_PORT:-9923}
and puts the files you want available there. You can reference
this url in config or cmdline with 'PUBURL'. For example
'--publish foo.img' will put 'foo.img' at PUBURL/foo.img.
* launch sets 'ubuntu' user password to 'passw0rd'
* launch runs 'kvm -curses'
kvm -curses keyboard info:
'alt-2' to go to qemu console
* launch puts serial console to 'serial.log' (look there for stuff)
* when logged in
* you can look at /var/log/cloud-init-output.log
* archive should be extracted in /curtin
* shell archive should be in /var/lib/cloud/instance/scripts/part-002
* when logged in, and archive available at
## other notes ##
* need to add '--install-deps' or something for curtin
cloud-image in 12.04 has no 'python3'
ideally 'curtin --install-deps install' would get the things it needs
curtin-0.1.0~bzr365/doc/topics/overview.rst 0000644 0000000 0000000 00000011733 12673006714 016775 0 ustar 0000000 0000000 ========
Overview
========
Curtin is intended to be a bare bones "installer". Its goal is to take data from a source, get it onto disk as quickly as possible, and then boot it. The key difference from traditional package-based installers is that curtin assumes the thing it's installing is intelligent and will do the right thing.
Stages
------
A usage of curtin will go through the following stages:
- Install Environment boot
- Early Commands
- Partitioning
- Network Discovery and Setup
- Extraction of sources
- Hook for installed OS to customize itself
- Final Commands
Install Environment boot
~~~~~~~~~~~~~~~~~~~~~~~~
At the moment, curtin doesn't address how the system that it is running on is booted. It could be booted from a live-cd or from a pxe boot environment. It could even be booted off a disk in the system (although installation to that disk would probably break things).
Curtin's assumption is that a fairly rich linux (Ubuntu) environment is booted.
Early Commands
~~~~~~~~~~~~~~
Early commands are executed on the system, and a non-zero exit status will terminate the installation process. These commands are intended to be used for things like:
- module loading
- hardware setup
- environment setup for subsequent stages of curtin.
**Config Example**::
early_commands:
05_load_loop: [modprobe, loop]
99_update: apt-get update && apt-get dist-upgrade
Partitioning
~~~~~~~~~~~~
Partitioning covers setting up filesystems on the system. A series of commands are run serially, in order. At the end, an fstab formatted file must be populated in ``OUTPUT_FSTAB`` that contains mount information, and the filesystems are expected to be mounted at the ``TARGET_MOUNT_POINT``.
Any commands can be used to create these filesystems, but curtin contains some tools to facilitate this process.
**Config Example**::
  partitioning_commands:
10_wipe_filesystems: curtin wipe --quick --all-unused-disks
50_setup_raid: curtin disk-setup --all-disks raid0 /
**Command environment**
Partitioning commands have the following environment variables available to them:
- ``WORKING_DIR``: This is simply for some sort of inter-command state. It will be the same directory for each command run and will only be deleted at the end of all partitioning_commands.
- ``OUTPUT_FSTAB``: This is the target path for an fstab file. After all partitioning commands have been run, a file should exist there, formatted per fstab(5), that describes how the filesystems should be mounted.
- ``TARGET_MOUNT_POINT``: The path at which the target filesystems are expected to be mounted once the partitioning commands have finished.
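For illustration, a minimal partitioning command using these variables
could look like this (a sketch only; the device name /dev/vdb is an
assumption, not something curtin provides)::

  #!/bin/sh
  # format a disk, mount it at the target, and describe it in fstab
  mkfs.ext4 /dev/vdb
  mkdir -p "$TARGET_MOUNT_POINT"
  mount /dev/vdb "$TARGET_MOUNT_POINT"
  echo "/dev/vdb / ext4 defaults 0 1" > "$OUTPUT_FSTAB"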
Network Discovery and Setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Networking is done in a similar fashion to partitioning. A series of commands, specified in the config, are run. At the end of these commands, an interfaces(5) style file is expected to be written to ``OUTPUT_INTERFACES``.
Note that, as with the fstab, this file is not copied verbatim to the target filesystem, but rather made available to the OS customization stage. That stage may simply copy the file verbatim, but may also parse it and use it as input.
**Config Example**::
network_commands:
10_netconf: curtin network copy-existing
**Command environment**
Networking commands have the following environment variables available to them:
- ``WORKING_DIR``: This is simply for some sort of inter-command state. It will be the same directory for each command run and will only be deleted at the end of all network_commands.
- ``OUTPUT_INTERFACES``: This is the target path for an interfaces style file. After all commands have been run, a file should exist there, formatted per interfaces(5), that describes the system's network setup.
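As with partitioning, a trivial network command could look like this
(again a sketch; the interface name is an assumption)::

  #!/bin/sh
  # describe a single DHCP interface for the installed system
  {
  echo "auto eth0"
  echo "iface eth0 inet dhcp"
  } > "$OUTPUT_INTERFACES"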
Extraction of sources
~~~~~~~~~~~~~~~~~~~~~
Sources are the things to install. Curtin prefers to install root filesystem tar files.
**Config Example**::
sources:
05_primary: http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-root.tar.gz
Given the source above, curtin will essentially do a::

  wget $URL -O - | tar -Sxvzf -
Hook for installed OS to customize itself
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After extraction of sources, the source that was extracted is then given a chance to customize itself for the system. This customization may include:
- ensuring that appropriate device drivers are loaded on first boot
- consuming the network interfaces file and applying its declarations.
- ensuring that necessary packages are installed
**Config Example**::
config_hook: {{TARGET_MP}}/opt/curtin/config-hook
**Command environment**
- ``INTERFACES``: This is a path to the file created during networking stage
- ``FSTAB``: This is a path to the file created during partitioning stage
- ``CONFIG``: This is a path to the curtin config file. It is provided so that additional configuration could be provided through to the OS customization.
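Tying these together, an illustrative config hook (a sketch; it assumes
the hook runs with the target as its root filesystem and that copying the
files verbatim is appropriate) might be::

  #!/bin/sh
  # consume the files produced by the earlier stages
  cp "$INTERFACES" /etc/network/interfaces
  cp "$FSTAB" /etc/fstab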
**Helpers**
Curtin provides some helpers to make the OS customization easier.
- `curtin in-target`: run the command while chrooted into the target.
Final Commands
~~~~~~~~~~~~~~
**Config Example**::
final_commands:
05_callhome_finished: wget http://example.com/i-am-done
curtin-0.1.0~bzr365/doc/topics/reporting.rst 0000644 0000000 0000000 00000011232 12673006714 017132 0 ustar 0000000 0000000 =========
Reporting
=========
Curtin is capable of reporting its progress via the reporting framework.
This enables the user to obtain status information from curtin.
Events
------
Reporting consists of notification of a series of 'events'. Each event has:
- **event_type**: 'start' or 'finish'
- **description**: human readable text
- **name**: an id for this event
- **result**: only present when event_type is 'finish'; its value is one of "SUCCESS", "WARN", or "FAIL". A result of "WARN" indicates something is likely wrong, but is a non-fatal error. A result of "FAIL" is fatal.
- **origin**: literal value 'curtin'
- **timestamp**: the unix timestamp at which this event occurred
Names are unique and hierarchical. For example, a series of names might look like:
- cmd-install (start)
- cmd-install/stage-early (start)
- cmd-install/stage-early (finish)
- cmd-install (finish)
You are guaranteed to always get a finish event for each sub-item before the
finish of its parent item, and every event that starts is guaranteed to
eventually finish.
A FAIL result of a sub-item will bubble up to its parent item.
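For example, a failure in a sub-item would produce a sequence like:

- cmd-install (start)
- cmd-install/stage-early (start)
- cmd-install/stage-early (finish): FAIL
- cmd-install (finish): FAIL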
Configuration
-------------
Reporting configuration is done through the ``reporting`` item in config. An
example config::
reporting:
keyname1:
type: webhook
endpoint: "http://127.0.1.1:8000/"
keyname2:
type: print
install:
log_file: /tmp/install.log
post_files: [/tmp/install.log, /var/log/syslog]
Each entry in the ``reporting`` dictionary must be a dictionary. The key is
only used for reference and to aid in config merging.
Each entry must have a 'type'. The currently supported values are:
- **log**: logs via python logger
- **print**: prints messages to stdout (for debugging)
- **webhook**: posts JSON formatted data to a remote url. Supports OAuth.
Additionally, the webhook reporter will post files when curtin finishes. The user can declare which files should be posted in the ``install`` item via ``post_files``, as shown above. If post_files is not present, it will default to the value of log_file.
Webhook Reporter
----------------
The webhook reporter posts the event in json format to an endpoint. To enable,
provide curtin with config like::
reporting:
mylistener:
type: webhook
endpoint: http://example.com/endpoint/path
consumer_key: "ck_foo"
consumer_secret: "cs_foo"
token_key: "tk_foo"
token_secret: "tk_secret"
The ``endpoint`` key is required. OAuth information (consumer_key,
consumer_secret, token_key, token_secret) is not required, but if provided
then OAuth will be used to authenticate to the endpoint on each post.
Example Events
~~~~~~~~~~~~~~
The following is an example event that would be posted::
{
"origin": "curtin",
"timestamp": 1440688425.6038516,
"event_type": "start",
"name": "cmd-install",
"description": "curtin command install"
}
The posted files will look like this::
{
"origin": "curtin",
"files": [
{
"content: "fCBzZmRpc2s....gLS1uby1yZX",
"path": "/var/log/curtin/install.log",
"encoding": "base64"
},
{
"content: "fCBzZmRpc2s....gLS1uby1yZX",
"path": "/var/log/syslog",
"encoding": "base64"
}
],
"description": "curtin command install",
"timestamp": 1440688425.6038516,
"name": "cmd-install",
"result": "SUCCESS",
"event_type": "finish"
}
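The file contents are base64 encoded, so a receiver can recover them with
standard tools. For example (a sketch, assuming the payload above was saved
as post.json and that jq is available)::

  jq -r '.files[0].content' post.json | base64 -d > install.log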
Example HTTP Request
~~~~~~~~~~~~~~~~~~~~
The following is an example http request from curtin::
Accept-Encoding: identity
Host: localhost:8000
Content-Type: application/json
Connection: close
User-Agent: Curtin/0.1
Content-Length: 156
{
"origin": "curtin",
"timestamp": 1440688425.6038516,
"event_type": "start",
"name": "cmd-install/stage-early",
"description": "preparing for installation"
}
Development / Debug Reporting
-----------------------------
For debugging and development, a simple web server is provided in
`tools/report-webhook-logger`.
Run the web service like::
./tools/report-webhook-logger 8000
And then run your install with appropriate config, like::
sudo ./bin/curtin -vvv install \
--set install/logfile=/tmp/foo \
--set reporting/mypost/type=webhook \
--set reporting/mypost/endpoint=http://localhost:8000/
file://$root_tgz
Legacy Reporter
---------------
The legacy 'reporter' config entry is still supported. This was utilized by
MAAS for start/end and posting of the install log at the end of installation.
Its configuration looks like this:
**Legacy Reporter Config Example**::
reporter:
url: http://example.com/your/path/to/post
consumer_key: "ck_foo"
consumer_secret: "cs_foo"
token_key: "tk_foo"
token_secret: "tk_secret"
curtin-0.1.0~bzr365/examples/basic.yaml 0000644 0000000 0000000 00000000706 12673006714 016110 0 ustar 0000000 0000000 early_commands:
98_update: apt-get update
99_upgrade: DEBIAN_FRONTEND=noninteractive apt-get dist-upgrade --assume-yes
partitioning_commands:
10_partition: curtin block-meta --device=/dev/vdc simple
network_commands:
10_network: curtin net-meta --device=eth0 dhcp
sources:
05_primary:
uri: "http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-root.tar.gz"
type: "tgz"
# vi: ts=4 expandtab syntax=yaml
curtin-0.1.0~bzr365/examples/finalize.windows 0000644 0000000 0000000 00000002441 12673006714 017356 0 ustar 0000000 0000000 #!/usr/bin/env python
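# Example finalize hook for a Windows image: appends the 'cloudbase_init'
# configuration from curtin's config to the image's cloudbase-init
# configuration files.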
import os
import sys
import tempfile
from curtin.log import LOG
from curtin import config, util
def curthooks():
state = util.load_command_environment()
target = state['target']
if target is None:
sys.stderr.write("Unable to find target. "
"Use --target or set TARGET_MOUNT_POINT\n")
sys.exit(2)
cfg = config.load_command_config({}, state)
cloudbase_init = cfg.get('cloudbase_init', None)
if not cloudbase_init:
return False
cloudbase_init_cfg = os.path.join(
target,
"Program Files (x86)",
"Cloudbase Solutions",
"Cloudbase-Init",
"conf",
"cloudbase-init.conf")
cloudbase_init_unattended_cfg = os.path.join(
target,
"Program Files (x86)",
"Cloudbase Solutions",
"Cloudbase-Init",
"conf",
"cloudbase-init-unattend.conf")
    if not os.path.isfile(cloudbase_init_cfg):
        sys.stderr.write("Unable to find cloudbase-init.conf.\n")
        sys.exit(2)
fp = open(cloudbase_init_cfg, 'a')
fp_u = open(cloudbase_init_unattended_cfg, 'a')
for i in cloudbase_init['config'].splitlines():
fp.write("%s\r\n" % i)
fp_u.write("%s\r\n" % i)
fp.close()
fp_u.close()
curthooks()
curtin-0.1.0~bzr365/examples/network-all.yaml 0000644 0000000 0000000 00000005473 12673006714 017274 0 ustar 0000000 0000000 network_commands:
builtin: null
10_network: curtin net-meta custom
# YAML example of a network config.
network:
version: 1
config:
# Physical interfaces.
- type: physical
name: eth0
mac_address: "c0:d6:9f:2c:e8:80"
- type: physical
name: eth1
mac_address: "aa:d6:9f:2c:e8:80"
- type: physical
name: eth2
mac_address: "c0:bb:9f:2c:e8:80"
- type: physical
name: eth3
mac_address: "66:bb:9f:2c:e8:80"
- type: physical
name: eth4
mac_address: "98:bb:9f:2c:e8:80"
# VLAN interface.
- type: vlan
name: eth0.101
vlan_link: eth0
vlan_id: 101
mtu: 1500
subnets:
- type: static
address: 192.168.0.2/24
gateway: 192.168.0.1
dns_nameservers:
- 192.168.0.10
- 10.23.23.134
dns_search:
- barley.maas
- sacchromyces.maas
- brettanomyces.maas
- type: static
address: 192.168.2.10/24
# Bond.
- type: bond
name: bond0
# if 'mac_address' is omitted, the MAC is taken from
# the first slave.
mac_address: "aa:bb:cc:dd:ee:ff"
bond_interfaces:
- eth1
- eth2
params:
bond-mode: active-backup
subnets:
- type: dhcp6
# A Bond VLAN.
- type: vlan
name: bond0.200
vlan_link: bond0
vlan_id: 200
subnets:
- type: dhcp4
# A bridge.
- type: bridge
name: br0
bridge_interfaces:
- eth3
- eth4
ipv4_conf:
rp_filter: 1
proxy_arp: 0
forwarding: 1
ipv6_conf:
autoconf: 1
disable_ipv6: 1
use_tempaddr: 1
forwarding: 1
# basically anything in /proc/sys/net/ipv6/conf/.../
params:
bridge_stp: 'off'
bridge_fd: 0
bridge_maxwait: 0
subnets:
- type: static
address: 192.168.14.2/24
- type: static
address: 2001:1::1/64 # default to /64
# A global nameserver.
- type: nameserver
address: 8.8.8.8
search: barley.maas
# global nameservers and search in list form
- type: nameserver
address:
- 4.4.4.4
- 8.8.4.4
search:
- wark.maas
- foobar.maas
# A global route.
- type: route
destination: 10.0.0.0/8
gateway: 11.0.0.1
metric: 3
curtin-0.1.0~bzr365/examples/network-bond.yaml 0000644 0000000 0000000 00000002400 12673006714 017431 0 ustar 0000000 0000000 network_commands:
builtin: null
10_network: curtin net-meta custom
# YAML example of a network config.
network:
version: 1
config:
# Physical interfaces.
- type: physical
name: eth0
mac_address: "c0:d6:9f:2c:e8:80"
- type: physical
name: eth1
mac_address: "aa:d6:9f:2c:e8:80"
- type: physical
name: eth2
mac_address: "c0:bb:9f:2c:e8:80"
- type: physical
name: eth3
mac_address: "66:bb:9f:2c:e8:80"
- type: physical
name: eth4
mac_address: "98:bb:9f:2c:e8:80"
# Bond.
- type: bond
name: bond0
# if 'mac_address' is omitted, the MAC is taken from
# the first slave.
mac_address: "aa:bb:cc:dd:ee:ff"
bond_interfaces:
- eth1
- eth2
params:
bond-mode: active-backup
subnets:
- type: dhcp6
# A Bond VLAN.
- type: vlan
name: bond0.200
vlan_link: bond0
vlan_id: 200
subnets:
- type: static
address: 192.168.0.2/24
gateway: 192.168.0.1
dns_nameservers:
- 192.168.0.10
curtin-0.1.0~bzr365/examples/network-bridge.yaml 0000644 0000000 0000000 00000001401 12673006714 017743 0 ustar 0000000 0000000 network_commands:
builtin: null
10_network: curtin net-meta custom
# YAML example of a network config.
network:
version: 1
config:
# Physical interfaces.
- type: physical
name: eth0
mac_address: "c0:d6:9f:2c:e8:80"
- type: physical
name: eth1
mac_address: "aa:d6:9f:2c:e8:80"
# A bridge.
- type: bridge
name: br0
bridge_interfaces:
- eth0
- eth1
params:
bridge_stp: 'off'
bridge_fd: 0
bridge_maxwait: 0
subnets:
- type: static
address: 192.168.14.2/24
- type: static
address: 2001:1::1/64 # default to /64
curtin-0.1.0~bzr365/examples/network-simple.yaml 0000644 0000000 0000000 00000001330 12673006714 020001 0 ustar 0000000 0000000 network_commands:
builtin: null
10_network:
- curtin
- net-meta
- custom
# YAML example of a simple network config
network:
version: 1
config:
# Physical interfaces.
- type: physical
name: eth0
mac_address: "c0:d6:9f:2c:e8:80"
subnets:
- type: dhcp4
- type: physical
name: eth1
mtu: 1492
mac_address: "aa:d6:9f:2c:e8:80"
subnets:
- type: static
address: 192.168.14.2/24
gateway: 192.168.14.1
- type: static
address: 192.168.14.4/24
- type: physical
name: eth2
mac_address: "cf:d6:af:48:e8:80"
curtin-0.1.0~bzr365/examples/network-vlan.yaml 0000644 0000000 0000000 00000001111 12673006714 017445 0 ustar 0000000 0000000 network_commands:
builtin: null
10_network: curtin net-meta custom
# YAML example of a network config.
network:
version: 1
config:
# Physical interfaces.
- type: physical
name: eth0
mac_address: "c0:d6:9f:2c:e8:80"
# VLAN interface.
- type: vlan
name: eth0.101
vlan_link: eth0
vlan_id: 101
mtu: 1500
subnets:
- type: static
address: 192.168.0.2/24
gateway: 192.168.0.1
dns_nameservers:
- 192.168.0.10
curtin-0.1.0~bzr365/examples/partitioning-demos/ 0000755 0000000 0000000 00000000000 12673006714 017754 5 ustar 0000000 0000000 curtin-0.1.0~bzr365/examples/tests/ 0000755 0000000 0000000 00000000000 12673006714 015302 5 ustar 0000000 0000000 curtin-0.1.0~bzr365/examples/partitioning-demos/custom-partitioning-demo-bcache.yaml 0000644 0000000 0000000 00000002061 12673006714 027003 0 ustar 0000000 0000000 partitioning_commands:
builtin: curtin block-meta custom
storage:
version: 1
config:
- id: sda
type: disk
ptable: msdos
model: QEMU HARDDISK
serial: QM00002
- id: sdb
type: disk
ptable: msdos
model: QEMU HARDDISK
serial: QM00003
- id: sda1
type: partition
number: 1
size: 7GB
device: sda
flag: boot
- id: sda2
type: partition
number: 2
size: 2GB
device: sda
- id: sdb1
type: partition
number: 1
size: 1GB
device: sdb
- id: bcache0
type: bcache
backing_device: sda2
cache_device: sdb1
- id: sda1_root
type: format
fstype: ext4
volume: sda1
- id: bcache_home
type: format
fstype: ext4
volume: bcache0
- id: sda1_mount
type: mount
path: /
device: sda1_root
- id: home_mount
type: mount
path: /home
device: bcache_home
curtin-0.1.0~bzr365/examples/partitioning-demos/custom-partitioning-demo-dmcrypt.yaml 0000644 0000000 0000000 00000001623 12673006714 027263 0 ustar 0000000 0000000 partitioning_commands:
builtin: curtin block-meta custom
storage:
version: 1
config:
- id: sda
type: disk
ptable: msdos
model: QEMU HARDDISK
serial: QM00002
grub_device: True
- id: sda1
type: partition
number: 1
size: 512MB
device: sda
flag: boot
- id: sda2
type: partition
number: 2
size: 9GB
device: sda
- id: sda2_crypto
type: dm_crypt
volume: sda2
key: testkey
dm_name: sda2_crypto
- id: sda1_boot
type: format
fstype: ext4
volume: sda1
- id: sda2_root
type: format
fstype: ext4
volume: sda2_crypto
- id: sda2_mount
type: mount
path: /
device: sda2_root
- id: sda1_mount
type: mount
path: /boot
device: sda1_boot
curtin-0.1.0~bzr365/examples/partitioning-demos/custom-partitioning-demo-gpt.yaml 0000644 0000000 0000000 00000001374 12673006714 026376 0 ustar 0000000 0000000 storage:
version: 1
config:
- id: sda
type: disk
ptable: gpt
model: QEMU HARDDISK
serial: QM00002
- id: bios_boot_partition
type: partition
size: 1MB
device: sda
flag: bios_grub
- id: sda1
type: partition
size: 8GB
device: sda
- id: sda2
type: partition
size: 1GB
device: sda
- id: sda1_root
type: format
fstype: ext4
volume: sda1
- id: sda2_home
type: format
fstype: ext4
volume: sda2
- id: sda1_mount
type: mount
path: /
device: sda1_root
- id: sda2_mount
type: mount
path: /home
device: sda2_home
curtin-0.1.0~bzr365/examples/partitioning-demos/custom-partitioning-demo-lvm.yaml 0000644 0000000 0000000 00000002654 12673006714 026404 0 ustar 0000000 0000000 partitioning_commands:
builtin: curtin block-meta custom
storage:
version: 1
config:
- id: sda
type: disk
ptable: msdos
model: QEMU HARDDISK
serial: QM00002
- id: sdb
type: disk
ptable: msdos
      model: QEMU HARDDISK
serial: QM00003
- id: sda1
type: partition
number: 1
size: 8GB
device: sda
flag: boot
- id: sda2
type: partition
number: 2
size: 1GB
device: sda
- id: storage_volgroup
type: lvm_volgroup
name: volgroup1
devices:
- sda2
- sdb
- id: storage_1
type: lvm_partition
volgroup: storage_volgroup
name: lv1
size: 2G
- id: storage_2
type: lvm_partition
name: lv2
volgroup: storage_volgroup
- id: sda1_root
type: format
fstype: ext4
volume: sda1
- id: storage_1_fs
type: format
fstype: ext4
volume: storage_1
- id: storage_2_fs
type: format
fstype: fat32
volume: storage_2
- id: sda1_mount
type: mount
path: /
device: sda1_root
- id: storage_1_mount
type: mount
path: /media/storage1
device: storage_1_fs
- id: storage_2_mount
type: mount
path: /media/storage2
device: storage_2_fs
curtin-0.1.0~bzr365/examples/partitioning-demos/custom-partitioning-demo-raid.yaml 0000644 0000000 0000000 00000002762 12673006714 026525 0 ustar 0000000 0000000 partitioning_commands:
builtin: curtin block-meta custom
storage:
version: 1
config:
- id: sda
type: disk
ptable: msdos
model: QEMU HARDDISK
serial: QM00002
- id: sdb
type: disk
ptable: msdos
model: QEMU HARDDISK
serial: QM00003
- id: sdc
type: disk
ptable: msdos
model: QEMU HARDDISK
serial: QM00004
- id: sdd
type: disk
ptable: msdos
model: QEMU HARDDISK
serial: QM00005
- id: sda1
type: partition
number: 1
size: 8GB
device: sda
flag: boot
- id: sdb1
type: partition
number: 1
size: 1GB
device: sdb
flag: raid
- id: sdc1
type: partition
number: 1
size: 1GB
device: sdc
flag: raid
- id: sdd1
type: partition
number: 1
size: 1GB
      device: sdd
flag: raid
- id: md0
type: raid
name: md0
raidlevel: 1
devices:
- sdb1
- sdc1
spare_devices:
- sdd1
- id: sda1_root
type: format
fstype: ext4
volume: sda1
- id: raid_storage
type: format
fstype: ext4
volume: md0
- id: sda1_mount
type: mount
path: /
device: sda1_root
- id: raid_mount
type: mount
path: /media/storage
device: raid_storage
curtin-0.1.0~bzr365/examples/partitioning-demos/custom-partitioning-demo.yaml 0000644 0000000 0000000 00000001372 12673006714 025604 0 ustar 0000000 0000000 partitioning_commands:
builtin: curtin block-meta custom
storage:
version: 1
config:
- id: sda
type: disk
ptable: msdos
model: QEMU HARDDISK
serial: QM00002
- id: sda1
type: partition
number: 1
size: 8GB
device: sda
flag: boot
- id: sda2
type: partition
number: 2
size: 1GB
device: sda
- id: sda1_root
type: format
fstype: ext4
volume: sda1
- id: sda2_home
type: format
fstype: ext4
volume: sda2
- id: sda1_mount
type: mount
path: /
device: sda1_root
- id: sda2_mount
type: mount
path: /home
device: sda2_home
curtin-0.1.0~bzr365/examples/tests/allindata.yaml 0000644 0000000 0000000 00000007427 12673006714 020131 0 ustar 0000000 0000000 storage:
version: 1
config:
- id: sda
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdb
name: main_disk
grub_device: 1
- id: bios_boot_partition
type: partition
size: 1MB
device: sda
flag: bios_grub
- id: sda1
type: partition
size: 1GB
device: sda
- id: sda2
type: partition
size: 1GB
device: sda
- id: sda3
type: partition
size: 1GB
device: sda
- id: sda4
type: partition
size: 1GB
device: sda
- id: sda5
type: partition
size: 3GB
device: sda
- id: sdb
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdc
name: second_disk
- id: sdb1
type: partition
size: 1GB
device: sdb
- id: sdb2
type: partition
size: 1GB
device: sdb
- id: sdb3
type: partition
size: 1GB
device: sdb
- id: sdb4
type: partition
size: 1GB
device: sdb
- id: sdc
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdd
name: third_disk
- id: sdc1
type: partition
size: 1GB
device: sdc
- id: sdc2
type: partition
size: 1GB
device: sdc
- id: sdc3
type: partition
size: 1GB
device: sdc
- id: sdc4
type: partition
size: 1GB
device: sdc
- id: sdd
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vde
name: fourth_disk
- id: sdd1
type: partition
size: 1GB
device: sdd
- id: sdd2
type: partition
size: 1GB
device: sdd
- id: sdd3
type: partition
size: 1GB
device: sdd
- id: sdd4
type: partition
size: 1GB
device: sdd
- id: mddevice0
name: md0
type: raid
raidlevel: 5
devices:
- sda1
- sdb1
- sdc1
spare_devices:
- sdd1
- id: mddevice1
name: md1
type: raid
raidlevel: raid6
devices:
- sda2
- sdb2
- sdc2
- sdd2
spare_devices:
- sda3
- id: mddevice2
name: md2
type: raid
raidlevel: 1
devices:
- sda4
- sdb3
spare_devices:
- sdc3
- sdb4
- id: mddevice3
name: md3
type: raid
raidlevel: raid0
devices:
- sdc4
- sdd3
- id: volgroup1
name: vg1
type: lvm_volgroup
devices:
- mddevice0
- mddevice1
- mddevice2
- mddevice3
- id: lvmpart1
name: lv1
size: 1G
type: lvm_partition
volgroup: volgroup1
- id: lvmpart2
name: lv2
size: 1G
type: lvm_partition
volgroup: volgroup1
- id: lvmpart3
name: lv3
type: lvm_partition
volgroup: volgroup1
- id: dmcrypt0
type: dm_crypt
volume: lvmpart3
key: testkey
dm_name: dmcrypt0
- id: lv1_fs
name: storage
type: format
fstype: ext3
volume: lvmpart1
- id: lv2_fs
name: storage
type: format
fstype: ext4
volume: lvmpart2
- id: dmcrypt_fs
name: storage
type: format
fstype: xfs
volume: dmcrypt0
- id: sda5_root
type: format
fstype: ext4
volume: sda5
- id: sda5_mount
type: mount
path: /
device: sda5_root
- id: lv1_mount
type: mount
path: /srv/data
device: lv1_fs
- id: lv2_mount
type: mount
path: /srv/backup
device: lv2_fs
curtin-0.1.0~bzr365/examples/tests/basic.yaml 0000644 0000000 0000000 00000003051 12673006714 017246 0 ustar 0000000 0000000 storage:
version: 1
config:
- id: sda
type: disk
ptable: msdos
model: QEMU HARDDISK
path: /dev/vdb
name: main_disk
wipe: superblock
grub_device: true
- id: sda1
type: partition
number: 1
size: 3GB
device: sda
flag: boot
- id: sda2
type: partition
number: 2
size: 1GB
device: sda
- id: sda1_root
type: format
fstype: ext4
volume: sda1
- id: sda2_home
type: format
fstype: ext4
volume: sda2
- id: sda1_mount
type: mount
path: /
device: sda1_root
- id: sda2_mount
type: mount
path: /home
device: sda2_home
- id: sparedisk_id
type: disk
path: /dev/vdc
name: sparedisk
wipe: superblock
- id: btrfs_disk_id
type: disk
path: /dev/vdd
name: btrfs_volume
wipe: superblock
- id: btrfs_disk_fmt_id
type: format
fstype: btrfs
volume: btrfs_disk_id
- id: btrfs_disk_mnt_id
type: mount
path: /btrfs
device: btrfs_disk_fmt_id
- id: pnum_disk
type: disk
path: /dev/vde
name: pnum_disk
wipe: superblock
ptable: gpt
- id: pnum_disk_p1
type: partition
number: 1
size: 1GB
device: pnum_disk
- id: pnum_disk_p2
type: partition
number: 10
size: 1GB
device: pnum_disk
curtin-0.1.0~bzr365/examples/tests/basic_network.yaml 0000644 0000000 0000000 00000001245 12673006714 021022 0 ustar 0000000 0000000 network:
version: 1
config:
# Physical interfaces.
- type: physical
name: eth0
mac_address: "52:54:00:12:34:00"
subnets:
- type: dhcp4
- type: physical
name: eth1
mtu: 1492
mac_address: "52:54:00:12:34:02"
subnets:
- type: static
address: 10.0.2.100/24
- type: static
address: 10.0.2.200/24
dns_nameservers:
- 8.8.8.8
dns_search:
- barley.maas
- type: physical
name: eth2
mac_address: "52:54:00:12:34:04"
curtin-0.1.0~bzr365/examples/tests/basic_network_static.yaml 0000644 0000000 0000000 00000000635 12673006714 022373 0 ustar 0000000 0000000 network:
version: 1
config:
# Physical interfaces.
- type: physical
name: eth0
mac_address: "52:54:00:12:34:00"
subnets:
- type: static
address: 10.0.2.15/24
gateway: 10.0.2.2
- type: nameserver
address:
- 10.0.2.3
search:
- wark.maas
- foobar.maas
curtin-0.1.0~bzr365/examples/tests/bcache_basic.yaml 0000644 0000000 0000000 00000001772 12673006714 020543 0 ustar 0000000 0000000 storage:
config:
- id: id_rotary0
type: disk
name: rotary0
path: /dev/vdb
ptable: msdos
wipe: superblock
grub_device: true
- id: id_ssd0
type: disk
name: ssd0
path: /dev/vdc
wipe: superblock
- id: id_rotary0_part1
type: partition
name: rotary0-part1
device: id_rotary0
number: 1
offset: 1M
size: 999M
wipe: superblock
- id: id_rotary0_part2
type: partition
name: rotary0-part2
device: id_rotary0
number: 2
size: 9G
wipe: superblock
- id: id_bcache0
type: bcache
name: bcache0
backing_device: id_rotary0_part2
cache_device: id_ssd0
cache_mode: writeback
- id: bootfs
type: format
label: boot-fs
volume: id_rotary0_part1
fstype: ext4
- id: rootfs
type: format
label: root-fs
volume: id_bcache0
fstype: ext4
- id: rootfs_mount
type: mount
path: /
device: rootfs
- id: bootfs_mount
type: mount
path: /boot
device: bootfs
version: 1
curtin-0.1.0~bzr365/examples/tests/bonding_network.yaml 0000644 0000000 0000000 00000001262 12673006714 021360 0 ustar 0000000 0000000 network:
version: 1
config:
# Physical interfaces.
- type: physical
name: eth0
mac_address: "52:54:00:12:34:00"
subnets:
- type: dhcp4
- type: physical
name: eth1
mac_address: "52:54:00:12:34:02"
- type: physical
name: eth2
mac_address: "52:54:00:12:34:04"
# Bond.
- type: bond
name: bond0
mac_address: "52:54:00:12:34:06"
bond_interfaces:
- eth1
- eth2
params:
bond-mode: active-backup
subnets:
- type: static
address: 10.23.23.2/24
curtin-0.1.0~bzr365/examples/tests/lvm.yaml 0000644 0000000 0000000 00000002700 12673006714 016763 0 ustar 0000000 0000000 storage:
version: 1
config:
- id: sda
type: disk
ptable: msdos
model: QEMU HARDDISK
path: /dev/vdb
name: main_disk
- id: sda1
type: partition
size: 3GB
device: sda
flag: boot
- id: sda_extended
type: partition
size: 5G
flag: extended
device: sda
- id: sda2
type: partition
size: 2G
flag: logical
device: sda
- id: sda3
type: partition
size: 3G
flag: logical
device: sda
- id: volgroup1
name: vg1
type: lvm_volgroup
devices:
- sda2
- sda3
- id: lvmpart1
name: lv1
size: 1G
type: lvm_partition
volgroup: volgroup1
- id: lvmpart2
name: lv2
type: lvm_partition
volgroup: volgroup1
- id: sda1_root
type: format
fstype: ext4
volume: sda1
- id: lv1_fs
name: storage
type: format
fstype: fat32
volume: lvmpart1
- id: lv2_fs
name: storage
type: format
fstype: ext3
volume: lvmpart2
- id: sda1_mount
type: mount
path: /
device: sda1_root
- id: lv1_mount
type: mount
path: /srv/data
device: lv1_fs
- id: lv2_mount
type: mount
path: /srv/backup
device: lv2_fs
curtin-0.1.0~bzr365/examples/tests/mdadm_bcache.yaml 0000644 0000000 0000000 00000003265 12673006714 020543 0 ustar 0000000 0000000 storage:
version: 1
config:
- id: sda
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdb
name: main_disk
- id: bios_boot_partition
type: partition
size: 1MB
device: sda
flag: bios_grub
- id: sda1
type: partition
size: 3GB
device: sda
- id: sda2
type: partition
size: 1GB
device: sda
- id: sda3
type: partition
size: 1GB
device: sda
- id: sda4
type: partition
size: 1GB
device: sda
- id: sda5
type: partition
size: 1GB
device: sda
- id: sda6
type: partition
size: 1GB
device: sda
- id: mddevice
name: md0
type: raid
raidlevel: 1
devices:
- sda2
- sda3
spare_devices:
- sda4
- id: bcache0
type: bcache
name: cached_array
backing_device: mddevice
cache_device: sda5
cache_mode: writeback
- id: bcache1
type: bcache
name: cached_array_2
backing_device: sda6
cache_device: sda5
cache_mode: writeback
- id: sda1_root
type: format
fstype: ext4
volume: sda1
- id: raid_storage
type: format
fstype: ext4
volume: bcache0
- id: bcache_storage
type: format
fstype: ext4
volume: bcache1
- id: sda1_mount
type: mount
path: /
device: sda1_root
- id: raid_mount
type: mount
path: /media/data
device: raid_storage
- id: bcache1_mount
type: mount
path: /media/bcache1
device: bcache_storage
curtin-0.1.0~bzr365/examples/tests/mirrorboot.yaml 0000644 0000000 0000000 00000001524 12673006714 020366 0 ustar 0000000 0000000 storage:
version: 1
config:
- id: sda
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdb
name: main_disk
grub_device: 1
- id: bios_boot_partition
type: partition
size: 1MB
device: sda
flag: bios_grub
- id: sda1
type: partition
size: 3GB
device: sda
- id: sdb
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdc
name: second_disk
- id: sdb1
type: partition
size: 3GB
device: sdb
- id: mddevice
name: md0
type: raid
raidlevel: 1
devices:
- sda1
- sdb1
- id: md_root
type: format
fstype: ext4
volume: mddevice
- id: md_mount
type: mount
path: /
device: md_root
curtin-0.1.0~bzr365/examples/tests/raid10boot.yaml 0000644 0000000 0000000 00000002412 12673006714 020131 0 ustar 0000000 0000000 storage:
version: 1
config:
- id: sda
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdb
name: main_disk
grub_device: 1
- id: bios_boot_partition
type: partition
size: 1MB
device: sda
flag: bios_grub
- id: sda1
type: partition
size: 3GB
device: sda
- id: sdb
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdc
name: second_disk
- id: sdb1
type: partition
size: 3GB
device: sdb
- id: sdc
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdd
name: third_disk
- id: sdc1
type: partition
size: 3GB
device: sdc
- id: sdd
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vde
name: fourth_disk
- id: sdd1
type: partition
size: 3GB
device: sdd
- id: mddevice0
name: md0
type: raid
raidlevel: 10
devices:
- sda1
- sdb1
- sdc1
- sdd1
- id: md_root
type: format
fstype: ext4
volume: mddevice0
- id: md_mount
type: mount
path: /
device: md_root
curtin-0.1.0~bzr365/examples/tests/raid5bcache.yaml 0000644 0000000 0000000 00000003530 12673006714 020321 0 ustar 0000000 0000000 storage:
config:
- grub_device: true
id: sda
model: QEMU HARDDISK
name: sda
ptable: msdos
path: /dev/vdb
type: disk
wipe: superblock
- id: sdb
model: QEMU HARDDISK
name: sdb
path: /dev/vdc
type: disk
wipe: superblock
- id: sdc
model: QEMU HARDDISK
name: sdc
path: /dev/vdd
type: disk
wipe: superblock
- id: sdd
model: QEMU HARDDISK
name: sdd
path: /dev/vde
type: disk
wipe: superblock
- id: sde
model: QEMU HARDDISK
name: sde
path: /dev/vdf
type: disk
wipe: superblock
- devices:
- sdc
- sdd
- sde
id: md0
name: md0
raidlevel: 5
spare_devices: []
type: raid
- device: sda
id: sda-part1
name: sda-part1
number: 1
offset: 2097152B
size: 1000001536B
type: partition
uuid: 3a38820c-d675-4069-b060-509a3d9d13cc
wipe: superblock
- device: sda
id: sda-part2
name: sda-part2
number: 2
size: 7586787328B
type: partition
uuid: 17747faa-4b9e-4411-97e5-12fd3d199fb8
wipe: superblock
- backing_device: sda-part2
cache_device: sdb
cache_mode: writeback
id: bcache0
name: bcache0
type: bcache
- fstype: ext4
id: sda-part1_format
label: ''
type: format
uuid: 71b1ef6f-5cab-4a77-b4c8-5a209ec11d7c
volume: sda-part1
- fstype: ext4
id: md0_format
label: ''
type: format
uuid: b031f0a0-adb3-43be-bb43-ce0fc8a224a4
volume: md0
- fstype: ext4
id: bcache0_format
label: ''
type: format
uuid: ce45bbaf-5a44-4487-b89e-035c2dd40657
volume: bcache0
- device: bcache0_format
id: bcache0_mount
path: /
type: mount
- device: sda-part1_format
id: sda-part1_mount
path: /boot
type: mount
- device: md0_format
id: md0_mount
path: /srv/data
type: mount
version: 1
curtin-0.1.0~bzr365/examples/tests/raid5boot.yaml 0000644 0000000 0000000 00000002055 12673006714 020060 0 ustar 0000000 0000000 storage:
version: 1
config:
- id: sda
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdb
name: main_disk
grub_device: 1
- id: bios_boot_partition
type: partition
size: 1MB
device: sda
flag: bios_grub
- id: sda1
type: partition
size: 3GB
device: sda
- id: sdb
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdc
name: second_disk
- id: sdb1
type: partition
size: 3GB
device: sdb
- id: sdc
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdd
name: third_disk
- id: sdc1
type: partition
size: 3GB
device: sdc
- id: mddevice
name: md0
type: raid
raidlevel: 5
devices:
- sda1
- sdb1
- sdc1
- id: md_root
type: format
fstype: ext4
volume: mddevice
- id: md_mount
type: mount
path: /
device: md_root
curtin-0.1.0~bzr365/examples/tests/raid6boot.yaml 0000644 0000000 0000000 00000002435 12673006714 020063 0 ustar 0000000 0000000 storage:
version: 1
config:
- id: sda
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdb
name: main_disk
grub_device: 1
- id: bios_boot_partition
type: partition
size: 1MB
device: sda
flag: bios_grub
- id: sda1
type: partition
size: 3GB
device: sda
- id: sdb
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdc
name: second_disk
- id: sdb1
type: partition
size: 3GB
device: sdb
- id: sdc
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vdd
name: third_disk
- id: sdc1
type: partition
size: 3GB
device: sdc
- id: sdd
type: disk
ptable: gpt
model: QEMU HARDDISK
path: /dev/vde
name: fourth_disk
- id: sdd1
type: partition
size: 3GB
device: sdd
- id: mddevice
name: md0
mdname: foobar
type: raid
raidlevel: 6
devices:
- sda1
- sdb1
- sdc1
- sdd1
- id: md_root
type: format
fstype: ext4
volume: mddevice
- id: md_mount
type: mount
path: /
device: md_root
curtin-0.1.0~bzr365/examples/tests/uefi_basic.yaml
storage:
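  # UEFI example: GPT disk with a 499M EFI system partition (fat32,
  # mounted at /boot/efi) and a 3G ext4 root.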
config:
- id: id_disk0
type: disk
name: main_disk
path: /dev/vdb
ptable: gpt
wipe: superblock
grub_device: true
- device: id_disk0
flag: boot
id: id_disk0_part1
number: 1
offset: 1M
size: 499M
type: partition
wipe: superblock
- device: id_disk0
id: id_disk0_part2
number: 2
size: 3G
type: partition
wipe: superblock
- fstype: fat32
id: id_efi_format
label: efi
type: format
volume: id_disk0_part1
- fstype: ext4
id: id_root_format
label: root
type: format
volume: id_disk0_part2
- device: id_root_format
id: id_root_mount
path: /
type: mount
- device: id_efi_format
id: id_efi_mount
path: /boot/efi
type: mount
version: 1
curtin-0.1.0~bzr365/helpers/common
#!/bin/bash
TEMP_D=""
CR="
"
VERBOSITY=${VERBOSITY:-${CURTIN_VERBOSITY:-0}}
error() { echo "$@" 1>&2; }
debug() {
[ ${VERBOSITY:-0} -ge "$1" ] || return
shift
error "$@"
}
partition_main_usage() {
    cat <<EOF
Usage: ${0##*/} [ options ] target-dev

   partition target-dev with a single partition
   destroy any partition table that might be there already

   options:
     -h | --help       show this message
     -E | --end E      end the root partition at E (s/K/M/G suffixes accepted)
     -f | --format F   use partition table format F [mbr, gpt, uefi, prep]
                       default: mbr
     -b | --boot       create a separate boot partition
     -v | --verbose    increase verbosity
EOF
    [ $# -eq 0 ] || echo "$@" 1>&2
}
grub_install_usage() {
    cat <<EOF
Usage: ${0##*/} [ options ] mount-point target-dev [target-dev ...]

   install grub onto target-dev for the system mounted at mount-point

   options:
        --uefi           install grub-efi instead of grub-pc
        --update-nvram   allow grub-install to update the UEFI nvram
EOF
    [ $# -eq 0 ] || echo "$@" 1>&2
}
fail() { [ $# -eq 0 ] || error "$@"; exit 1; }
cleanup() {
    [ -z "${TEMP_D}" -o ! -d "${TEMP_D}" ] || rm -Rf "${TEMP_D}"
}
wipe_partitions() {
    # wipe_partitions [--full] target partno [partno ...]
    # wipe the start (and end) of each listed partition of target;
    # with --full, wipe each listed partition in its entirety.
    local wipe="true" target="" part="" devpart=""
    if [ "$1" = "--full" ]; then
        wipe="full"
        shift
    fi
    target="$1"
    shift
    for part in "$@"; do
        find_partno "$target" "$part" ||
            { error "did not find partition $part on $target"; return 1; }
        devpart="$_RET"
        wipedev "$devpart" "$wipe" false ||
            { error "failed to wipe partition $part on $target"; return 1; }
    done
    return 0
}
wipedev() {
    # wipedev target [wipe_end=true] [rereadpt=true]
    # zero the first 1MB of target; with wipe_end=true also zero the
    # last 1MB (the gpt backup header); with wipe_end=full zero all of it.
    local target="$1" wipe_end=${2:-true} rereadpt=${3:-true}
    local size="" out="" info="" count="" seek=""
    local bs=512 mb=$((1024*1024))
    getsize "$target" ||
        { error "failed to get size of $target"; return 1; }
    size="$_RET"
    if [ "$wipe_end" = "full" ]; then
        count=$((size / bs))
        info="size=$size count=$count bs=$bs"
        debug 1 "wiping entire '$target' with ${info}."
        out=$(dd if=/dev/zero conv=notrunc "of=$target" \
            "bs=$bs" "count=$count" 2>&1) || {
error "wiping entire '$target' with ${info} failed."
error "$out"
return 1
}
else
local fbs=$bs
count=$((size / bs))
if [ "$size" -ge "$mb" ]; then
count=1
fbs=$mb
fi
info="size=$size count=$count bs=$fbs"
debug 1 "wiping start of '$target' with ${info}."
# wipe the first MB (up to 'size')
out=$(dd if=/dev/zero conv=notrunc "of=$target" \
"bs=$fbs" "count=$count" 2>&1) || {
error "wiping start of '$target' with ${info} failed."
error "$out"
return 1
}
if $wipe_end && [ "$size" -gt "$mb" ]; then
# do the last 1MB
count=$((mb / bs))
seek=$(((size / bs) - $count))
info="size=$size count=$count bs=$bs seek=$seek"
debug 1 "wiping end of '$target' with ${info}."
out=$(dd if=/dev/zero conv=notrunc "of=$target" "seek=$seek" \
"bs=$bs" "count=$count" 2>&1)
if [ $? -ne 0 ]; then
error "wiping end of '$target' with ${info} failed."
error "$out";
return 1;
fi
fi
fi
if $rereadpt && [ -b "$target" ]; then
blockdev --rereadpt "$target"
udevadm settle
fi
}
find_partno() {
local devname="$1" partno="$2"
local devbname cand msg="" slash="/"
devbname="${devname#/dev/}"
    # /dev/cciss/c0d0 -> cciss!c0d0
devbname="${devbname//$slash/!}"
if [ -d "/sys/class/block/${devbname}" ]; then
local cand candptno name partdev
debug 1 "using sys/class/block/$devbname"
for cand in /sys/class/block/$devbname/*/partition; do
[ -f "$cand" ] || continue
read candptno < "$cand"
[ "$candptno" = "$partno" ] || continue
name=${cand#/sys/class/block/${devbname}/}
name=${name%/partition}
            # cciss!c0d0p1 -> cciss/c0d0p1
name=${name//!/$slash}
partdev="/dev/$name"
[ -b "$partdev" ] && _RET="$partdev" && return 0
msg="expected $partdev to exist as partition $partno on $devname"
error "WARN: $msg. it did not exist."
done
else
for cand in "${devname}$partno" "${devname}p${partno}"; do
[ -b "$cand" ] && _RET="$cand" && return 0
done
fi
return 1
}
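# usage example (hypothetical device): find_partno /dev/sda 2 sets
# _RET=/dev/sda2; the sysfs scan also resolves names that embed a "p"
# separator, e.g. /dev/nvme0n1 partition 2 -> /dev/nvme0n1p2.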
part2bd() {
    # part2bd: given a partition, return the block device it is on
    # and the partition number, e.g. 'sda2' -> '/dev/sda 2'
local dev="$1" fp="" sp="" bd="" ptnum=""
dev="/dev/${dev#/dev/}"
fp=$(readlink -f "$dev") || return 1
sp="/sys/class/block/${fp##*/}"
[ -f "$sp/partition" ] || { _RET="$fp 0"; return 0; }
read ptnum < "$sp/partition"
sp=$(readlink -f "$sp") || return 1
# sp now has some /sys/devices/pci..../0:2:0:0/block/sda/sda1
bd=${sp##*/block/}
bd="${bd%/*}"
_RET="/dev/$bd $ptnum"
return 0
}
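# examples: part2bd sda2      -> _RET="/dev/sda 2"
#           part2bd /dev/sda  -> _RET="/dev/sda 0"  (whole disk -> partition 0)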
pt_gpt() {
local target="$1" end=${2:-""} boot="$3" size="" s512=""
local start="2048" rootsize="" bootsize="1048576" maxend=""
local isblk=false
getsize "$target" ||
{ error "failed to get size of $target"; return 1; }
size="$_RET"
if [ -z "$end" ]; then
end=$(($size/512))
else
end=$(($end/512))
fi
if [ "$boot" = true ]; then
maxend=$((($size/512)-$start-$bootsize))
if [ $maxend -lt 0 ]; then
error "Disk is not big enough for /boot partition on $target";
return 1;
fi
else
maxend=$((($size/512)-$start))
fi
[ "$end" -gt "$maxend" ] && end="$maxend"
debug 1 "maxend=$maxend end=$end size=$size"
[ -b "$target" ] && isblk=true
if [ "$boot" = true ]; then
        # Creating bios_grub (p15), '/boot' (p1) and '/' (p2) partitions
sgdisk --new "15:$start:+1M" --typecode=15:ef02 \
--new "1::+512M" --typecode=1:8300 \
--new "2::$end" --typecode=2:8300 "$target" ||
{ error "failed to gpt partition $target"; return 1; }
else
        # Creating bios_grub (p15) and '/' (p1) partitions
sgdisk --new "15:$start:+1M" --typecode=15:ef02 \
--new "1::$end" --typecode=1:8300 "$target" ||
{ error "failed to gpt partition $target"; return 1; }
fi
if $isblk; then
local expected="1 15"
[ "$boot" = "true" ] && expected="$expected 2"
blockdev --rereadpt "$target"
udevadm settle
assert_partitions "$target" $expected ||
{ error "$target missing partitions: $_RET"; return 1; }
wipe_partitions "$target" $expected ||
{ error "$target: failed to wipe partitions"; return 1; }
fi
}
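# resulting pt_gpt layout: p15 = 1MiB bios_grub starting at sector 2048,
# then p1 = 512MiB /boot and p2 = / when boot=true, or just p1 = /.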
assert_partitions() {
local dev="$1" missing="" part=""
shift
for part in "$@"; do
find_partno "$dev" $part || missing="${missing} ${part}"
done
_RET="${missing# }"
[ -z "$missing" ]
}
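# usage: assert_partitions "$dev" 1 15 || error "missing: $_RET"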
pt_uefi() {
local target="$1" end=${2:-""} size="" s512=""
local start="2048" rootsize="" maxend=""
local isblk=false
getsize "$target" ||
{ error "failed to get size of $target"; return 1; }
size="$_RET"
if [ -z "$end" ]; then
end=$(($size/512))
else
end=$(($end/512))
fi
maxend=$((($size/512)-$start))
[ "$end" -gt "$maxend" ] && end="$maxend"
debug 1 "maxend=$maxend end=$end size=$size"
[ -b "$target" ] && isblk=true
# Creating 'UEFI' and '/' partitions
sgdisk --new "15:2048:+512M" --typecode=15:ef00 \
--new "1::$end" --typecode=1:8300 "$target" ||
{ error "failed to sgdisk for uefi to $target"; return 1; }
if $isblk; then
blockdev --rereadpt "$target"
udevadm settle
assert_partitions "$target" 1 15 ||
{ error "$target missing partitions: $_RET"; return 1; }
wipe_partitions "$target" 1 15 ||
{ error "$target: failed to wipe partitions"; return 1; }
fi
local pt15
find_partno "$target" 15 && pt15="$_RET" ||
{ error "failed to find partition 15 for $target"; return 1; }
mkfs -t vfat -F 32 -n uefi-boot "$pt15" ||
{ error "failed to partition :$pt15' for UEFI vfat"; return 1; }
}
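# resulting pt_uefi layout: p15 = 512MiB vfat ESP labeled "uefi-boot",
# p1 = the root partition.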
pt_mbr() {
local target="$1" end=${2:-""} boot="$3" size="" s512="" ptype="L"
local start="2048" rootsize="" maxsize="4294967296"
local maxend="" isblk=false def_bootsize="1048576" bootsize=0
local isblk=false
getsize "$target" ||
{ error "failed to get size of $target"; return 1; }
size="$_RET"
if $boot; then
bootsize=$def_bootsize
fi
s512=$(($size/512))
if [ $s512 -ge $maxsize ]; then
debug 1 "disk is larger than max for mbr (2TB)"
s512=$maxsize
fi
# allow 33 sectors for the secondary gpt header in the case that
# the user wants to later 'sgdisk --mbrtogpt'
local gpt2hsize="33"
if [ -n "$end" ]; then
rootsize=$(((end/512)-start-bootsize))
else
rootsize=$((s512-start-bootsize-$gpt2hsize))
fi
[ -b "$target" ] && isblk=true
# interact with sfdisk in units of 512 bytes (--unit S)
# we start all partitions at 2048 of those (1M)
local sfdisk_out="" sfdisk_in="" sfdisk_cmd="" t="" expected=""
if "$boot"; then
t="$start,$bootsize,$ptype,-${CR}"
t="$t$(($start+$bootsize)),$rootsize,$ptype,*"
sfdisk_in="$t"
expected="1 2"
else
sfdisk_in="$start,$rootsize,$ptype,*"
expected=1
fi
sfdisk_cmd=( sfdisk --no-reread --force --Linux --unit S "$target" )
debug 1 "sfdisking with: echo '$sfdisk_in' | ${sfdisk_cmd[*]}"
sfdisk_out=$(echo "$sfdisk_in" | "${sfdisk_cmd[@]}" 2>&1)
ret=$?
[ $ret -eq 0 ] || {
error "failed to partition $target [${sfdisk_out}]";
return 1;
}
if $isblk; then
blockdev --rereadpt "$target"
udevadm settle
assert_partitions "$target" ${expected} ||
{ error "$target missing partitions: $_RET"; return 1; }
wipe_partitions "$target" ${expected} ||
{ error "failed to wipe partition 1 on $target"; return 1; }
fi
}
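# example sfdisk input for boot=true, in 512-byte sectors:
#   2048,1048576,L,-
#   1050624,<rootsize>,L,*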
pt_prep() {
local target="$1" end=${2:-""}
local cmd="" isblk=false
[ -b "$target" ] && isblk=true
wipedev "$target" ||
{ error "failed to clear $target"; return 1; }
cmd=(
sgdisk
--new "1::+8M" --typecode=1:4100
--new "2::$end" --typecode=2:8300
"$target"
)
"${cmd[@]}" ||
fail "Failed to create GPT partitions (${cmd[*]})"
udevadm trigger
udevadm settle
if $isblk; then
blockdev --rereadpt "$target"
udevadm settle
assert_partitions "$target" 1 2 ||
{ error "$target missing partitions: $_RET"; return 1; }
# wipe the full prep partition
wipe_partitions --full "$target" 1 ||
{ error "$target: failed to wipe full PReP partition"; return 1;}
wipe_partitions "$target" 2 ||
{ error "$target: failed to wipe partition 2"; return 1;}
fi
return 0
}
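# resulting pt_prep layout: p1 = 8MiB PReP boot partition (type 4100),
# p2 = the root partition.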
partition_main() {
local short_opts="hE:f:bv"
local long_opts="help,end:,format:,boot,verbose"
    local getopt_out=""
    getopt_out=$(getopt --name "${0##*/}" \
        --options "${short_opts}" --long "${long_opts}" -- "$@") &&
        eval set -- "${getopt_out}" ||
        { partition_main_usage 1>&2; return 1; }
local cur="" next=""
local format="mbr" boot=false target="" end="" ret=0
while [ $# -ne 0 ]; do
cur="$1"; next="$2";
case "$cur" in
-h|--help) partition_main_usage ; exit 0;;
-E|--end) end=$next; shift;;
-f|--format) format=$next; shift;;
-b|--boot) boot=true;;
-v|--verbose) VERBOSITY=$((${VERBOSITY}+1));;
--) shift; break;;
esac
shift;
done
[ $# -gt 1 ] && { partition_main_usage "got $# args, expected 1" 1>&2; return 1; }
[ $# -eq 0 ] && { partition_main_usage "must provide target-dev" 1>&2; return 1; }
target="$1"
if [ -n "$end" ]; then
human2bytes "$end" ||
{ error "failed to convert '$end' to bytes"; return 1; }
end="$_RET"
fi
[ "$format" = "gpt" -o "$format" = "mbr" ] ||
[ "$format" = "uefi" -o "$format" = "prep" ] ||
{ partition_main_usage "invalid format: $format" 1>&2; return 1; }
TEMP_D=$(mktemp -d "${TMPDIR:-/tmp}/${0##*/}.XXXXXX") ||
fail "failed to make tempdir"
trap cleanup EXIT
[ -e "$target" ] || { error "$target does not exist"; return 1; }
[ -f "$target" -o -b "$target" ] ||
{ error "$target not a block device"; return 1; }
wipedev "$target" ||
{ error "wiping $target failed"; return 1; }
if [ "$format" = "mbr" ]; then
pt_mbr "$target" "$end" "$boot"
elif [ "$format" = "gpt" ]; then
pt_gpt "$target" "$end" "$boot"
elif [ "$format" = "uefi" ]; then
pt_uefi "$target" "$end"
elif [ "$format" = "prep" ]; then
pt_prep "$target" "$end"
fi
ret=$?
return $ret
}
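# usage example (hypothetical device): partition_main --format=gpt --boot /dev/vdb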
human2bytes() {
# converts size suitable for input to resize2fs to bytes
# s:512 byte sectors, K:kilobytes, M:megabytes, G:gigabytes
# none: block size of the image
local input=${1} defunit=${2:-1024}
local unit count;
case "$input" in
*s) count=${input%s}; unit=512;;
*K) count=${input%K}; unit=1024;;
*M) count=${input%M}; unit=$((1024*1024));;
*G) count=${input%G}; unit=$((1024*1024*1024));;
*) count=${input} ; unit=${defunit};;
esac
_RET=$((${count}*${unit}))
}
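# examples: human2bytes 3G   -> _RET=3221225472  (3 * 1024^3)
#           human2bytes 100s -> _RET=51200       (100 * 512-byte sectors)
#           human2bytes 10   -> _RET=10240       (default unit 1024)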
getsize() {
# return size of target in bytes
local target="$1"
if [ -b "$target" ]; then
_RET=$(blockdev --getsize64 "$target")
elif [ -f "$target" ]; then
_RET=$(stat "--format=%s" "$target")
else
return 1;
fi
}
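# usage: getsize /dev/vdb && size="$_RET"   # size in bytes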
is_md() {
case "${1##*/}" in
md[0-9]) return 0;;
esac
return 1
}
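# note: the md[0-9] pattern only matches single-digit md devices;
# is_md /dev/md0 succeeds, is_md /dev/md10 does not.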
get_carryover_params() {
local cmdline=" $1 " extra="" lead="" carry_extra="" carry_lead=""
# return a string to append to installed systems boot parameters
# it may include a '--' after a '---'
# see LP: 1402042 for some history here.
# this is similar to 'user-params' from d-i
local preferred_sep="---" # KERNEL_CMDLINE_COPY_TO_INSTALL_SEP
local legacy_sep="--"
case "$cmdline" in
*\ ${preferred_sep}\ *)
extra=${cmdline#* ${preferred_sep} }
lead=${cmdline%% ${preferred_sep} *}
;;
*\ ${legacy_sep}\ *)
extra="${cmdline#* ${legacy_sep} }"
lead=${cmdline%% ${legacy_sep} *}
;;
*)
extra=""
lead="$cmdline"
;;
esac
if [ -n "$extra" ]; then
carry_extra=$(set -f;
c="";
for p in $extra; do
case "$p" in
(BOOTIF=*|initrd=*|BOOT_IMAGE=*) continue;;
esac
c="$c $p";
done
echo "${c# }"
)
fi
    # these get copied even if they weren't after the separator
carry_lead=$(set -f;
padded=" ${carry_extra} "
c=""
for p in $lead; do
# skip any that are already in carry_extra
[ "${padded#* $p }" != "$padded" ] && continue
case "$p" in
(console=*) c="$c $p";;
esac
done
echo "${c# }"
)
_RET="${carry_lead:+${carry_lead} }${carry_extra}"
}
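# example: for cmdline "BOOT_IMAGE=/vmlinuz root=/dev/vda1 console=ttyS0 --- debug"
# _RET becomes "console=ttyS0 debug": args after '---' are carried over
# (minus BOOTIF=/initrd=/BOOT_IMAGE=), plus console= args from before it.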
install_grub() {
local long_opts="uefi,update-nvram"
local getopt_out="" mp_efi=""
getopt_out=$(getopt --name "${0##*/}" \
--options "" --long "${long_opts}" -- "$@") &&
eval set -- "${getopt_out}"
local uefi=0
local update_nvram=0
while [ $# -ne 0 ]; do
cur="$1"; next="$2";
case "$cur" in
--uefi) uefi=$((${uefi}+1));;
--update-nvram) update_nvram=$((${update_nvram}+1));;
--) shift; break;;
esac
shift;
done
[ $# -lt 2 ] && { grub_install_usage "must provide mount-point and target-dev" 1>&2; return 1; }
local mp="$1"
local cmdline tmp r=""
shift
local grubdevs
grubdevs=( "$@" )
if [ "${#grubdevs[@]}" = "1" -a "${grubdevs[0]}" = "none" ]; then
grubdevs=( )
fi
# find the mp device
mp_dev=$(awk -v "MP=$mp" '$2 == MP { print $1 }' /proc/mounts) || {
error "unable to determine device for mount $mp";
return 1;
}
[ -z "$mp_dev" ] && {
error "did not find '$mp' in /proc/mounts"
cat /proc/mounts 1>&2
return 1
}
[ -b "$mp_dev" ] || { error "$mp_dev is not a block device!"; return 1; }
# get dpkg arch
local dpkg_arch=""
dpkg_arch=$(chroot "$mp" dpkg --print-architecture)
r=$?
[ $r -eq 0 ] || {
error "failed to get dpkg architecture [$r]"
return 1;
}
# set correct grub package
local grub_name="grub-pc"
local grub_target="i386-pc"
if [ "${dpkg_arch#ppc64}" != "${dpkg_arch}" ]; then
grub_name="grub-ieee1275"
grub_target="powerpc-ieee1275"
elif [ "$uefi" -ge 1 ]; then
grub_name="grub-efi-$dpkg_arch"
case "$dpkg_arch" in
amd64)
grub_target="x86_64-efi";;
arm64)
grub_target="arm64-efi";;
esac
fi
# check that the grub package is installed
tmp=$(chroot "$mp" dpkg-query --show \
--showformat='${Status}\n' $grub_name)
r=$?
if [ $r -ne 0 -a $r -ne 1 ]; then
error "failed to check if $grub_name installed";
return 1;
fi
case "$tmp" in
install\ ok\ installed) :;;
*) debug 1 "$grub_name not installed, not doing anything";
return 0;;
esac
local grub_d="etc/default/grub.d"
local mygrub_cfg="$grub_d/50-curtin-settings.cfg"
[ -d "$mp/$grub_d" ] || mkdir -p "$mp/$grub_d" ||
{ error "Failed to create $grub_d"; return 1; }
    # LP: #1179940. The 50-cloudimg-settings.cfg file is written by the
    # cloud image build and defines/overrides some settings. Disable it.
local cicfg="$grub_d/50-cloudimg-settings.cfg"
if [ -f "$mp/$cicfg" ]; then
debug 1 "moved $cicfg out of the way"
mv "$mp/$cicfg" "$mp/$cicfg.disabled"
fi
# get the user provided / carry-over kernel arguments
local newargs=""
read cmdline < /proc/cmdline &&
get_carryover_params "$cmdline" && newargs="$_RET" || {
error "Failed to get carryover parrameters from cmdline";
return 1;
}
debug 1 "carryover command line params: $newargs"
: > "$mp/$mygrub_cfg" ||
{ error "Failed to write '$mygrub_cfg'"; return 1; }
{
[ "${REPLACE_GRUB_LINUX_DEFAULT:-1}" = "0" ] ||
echo "GRUB_CMDLINE_LINUX_DEFAULT=\"$newargs\""
echo "# disable grub os prober that might find other OS installs."
echo "GRUB_DISABLE_OS_PROBER=true"
echo "GRUB_TERMINAL=console"
} >> "$mp/$mygrub_cfg"
local short="" bd="" grubdev grubdevs_new=""
grubdevs_new=()
for grubdev in "${grubdevs[@]}"; do
if is_md "$grubdev"; then
short=${grubdev##*/}
for bd in "/sys/block/$short/slaves/"/*; do
[ -d "$bd" ] || continue
bd=${bd##*/}
bd="/dev/${bd%[0-9]}" # FIXME: part2bd
grubdevs_new[${#grubdevs_new[@]}]="$bd"
done
else
grubdevs_new[${#grubdevs_new[@]}]="$grubdev"
fi
done
grubdevs=( "${grubdevs_new[@]}" )
if [ "$uefi" -ge 1 ]; then
nvram="--no-nvram"
if [ "$update_nvram" -ge 1 ]; then
nvram=""
fi
debug 1 "installing ${grub_name} to: /boot/efi"
chroot "$mp" env DEBIAN_FRONTEND=noninteractive sh -ec '
dpkg-reconfigure "$1"
update-grub
# grub-install in 12.04 does not contain --no-nvram, --target,
# or --efi-directory
target="--target=$2"
no_nvram="$3"
efi_dir="--efi-directory=/boot/efi"
gi_out=$(grub-install --help 2>&1)
echo "$gi_out" | grep -q -- "$no_nvram" || no_nvram=""
echo "$gi_out" | grep -q -- "--target" || target=""
echo "$gi_out" | grep -q -- "--efi-directory" || efi_dir=""
grub-install $target $efi_dir \
--bootloader-id=ubuntu --recheck $no_nvram' -- \
"${grub_name}" "${grub_target}" "$nvram"