simplestreams-0.1.0~bzr341/.gitignore0000644000000000000000000000010212314577450015663 0ustar 00000000000000*__pycache__* *pyc gnupg .gnupg exdata exdata-query *.sjson *.gpg simplestreams-0.1.0~bzr341/LICENSE0000644000000000000000000010333012314577450014707 0ustar 00000000000000 GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software. A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public. The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version. An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU Affero General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. 
To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. 
The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. 
This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. 
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. 
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. 
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. 
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. 
For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Remote Network Interaction; Use with the GNU General Public License. Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements. You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <http://www.gnu.org/licenses/>. 
simplestreams-0.1.0~bzr341/Makefile0000644000000000000000000000343212314577450015344 0ustar 00000000000000TENV := ./tools/tenv PUBKEY := examples/keys/example.pub PUBKEYS := $(PUBKEY) SECKEY := examples/keys/example.sec example_sstream_files := $(wildcard examples/*/streams/v1/*.json) EXDATA_SIGN ?= 1 ifeq ($(EXDATA_SIGN),1) EXDATA_SIGN_ARG := --sign endif build: @echo nothing to do for $@ test: test2 test3 test3: examples-sign $(TENV) nosetests3 -v tests/ test2: examples-sign $(TENV) nosetests -v tests/ lint: $(TENV) ./tools/run-pylint pep8: ./tools/run-pep8 check: lint pep8 test exdata: exdata/fake exdata/data exdata/data: exdata-query gnupg $(TENV) env REAL_DATA=1 ./tools/make-test-data $(EXDATA_SIGN_ARG) exdata-query/ exdata/data exdata/fake: exdata-query gnupg $(TENV) ./tools/make-test-data $(EXDATA_SIGN_ARG) exdata-query/ exdata/fake exdata-query: rsync -avz --delete --exclude "FILE_DATA_CACHE" --exclude ".bzr/*" cloud-images.ubuntu.com::uec-images/query/ exdata-query $(PUBKEY) $(SECKEY): @mkdir -p $$(dirname "$(PUBKEY)") $$(dirname "$(SECKEY)") $(TENV) gen-example-key $(PUBKEY) $(SECKEY) gnupg: gnupg/README gnupg/README: $(PUBKEYS) $(SECKEY) rm -Rf gnupg @umask 077 && mkdir -p gnupg $(TENV) gpg --import $(SECKEY) >/dev/null 2>&1 for pubkey in $(PUBKEYS); do \ $(TENV) gpg-trust-pubkey $$pubkey; done @echo "this is used by $(TENV) as the gpg directory" > gnupg/README # this is less than ideal, but Make and ':' in targets do not work together # so instead of proper rules, we have this phony rule that makes the # targets. This would probably cause issues with -j. examples-sign: gnupg @for f in $(example_sstream_files); do \ [ "$$f.gpg" -nt "$$f" -a "$${f%.json}.sjson" -nt "$$f" ] || \ { echo "$(TENV) js2signed $$f" 1>&2; $(TENV) js2signed $$f; } || exit; \ done .PHONY: check exdata/fake exdata/data exdata-query examples-sign test test2 test3 simplestreams-0.1.0~bzr341/README.txt0000644000000000000000000000351112314577450015400 0ustar 00000000000000== Intro == This is documentation, examples, a Python library and some tools for interacting with the simple streams format. The intent of the simple streams format is to make well-formatted data available about "products". There is more documentation in doc/README. There are examples in examples/. == Simple Streams Getting Started == = Mirroring = To mirror one source (http or file) to a local directory, see tools/do-mirror. For example, to mirror the 'foocloud' example content, do: ./tools/tenv do-mirror examples/foocloud/ my.out streams/v1/index.json That will create a full mirror in my.out/. ./tools/tenv do-mirror --mirror=http://download.cirros-cloud.net/ \ --max=1 examples/cirros/ cirros.mirror/ That will create a mirror of cirros data in cirros.mirror, with only the latest file from each product. = Hooks = To use the "command hooks mirror" for invoking commands to synchronize between one source and another, see bin/sstream-sync. For an example, the following runs the debug hook against the example 'foocloud' data: ./tools/tenv sstream-sync --hook=hook-debug \ --path=streams/v1/index.json examples/foocloud/ You can also run it with cloud-images.ubuntu.com data like this: ./tools/tenv sstream-sync \ --item-skip-download --hook=./tools/hook-debug \ --path=streams/v1/index.sjson http://cloud-images.ubuntu.com/releases/ The 'hook-debug' program simply outputs the data it is invoked with. It does not actually mirror anything. 
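The same walk can also be driven from Python with the simplestreams library. Below is a minimal sketch modeled on the FilterMirror class in bin/sstream-query: the class and method names come from that tool, while the pass-through policy (which skips gpg verification) and the example URL are assumptions chosen for brevity. Run it under ./tools/tenv so the library is importable:

  from simplestreams import mirrors, util

  class PrintMirror(mirrors.BasicMirrorWriter):
      # Report an empty target, so every item in the source is "new".
      def load_products(self, path=None, content_id=None):
          return {'content_id': content_id, 'products': {}}

      # Called once per item; products_exdata() flattens the product
      # and version fields down onto the item dictionary.
      def insert_item(self, data, src, target, pedigree, contentsource):
          print(util.products_exdata(src, pedigree))

  # pass-through policy: returns content unverified (sketch only; the
  # bin/ tools verify .sjson data with util.read_signed instead)
  smirror = mirrors.UrlMirrorReader("http://cloud-images.ubuntu.com/releases/",
                                    policy=lambda content, path: content)
  PrintMirror(config={}).sync(smirror, "streams/v1/index.json")

Like hook-debug, this only reports what it finds; nothing is downloaded or mirrored. 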
== Glance == Example mirror of a download image source to glance with swift serving a localized image-id format: ./tools/sstream-mirror-glance --region=RegionOne \ --cloud-name=localcloud "--content-id=localcloud.%(region)s:partners" \ --output-swift=published/ --max=1 --name-prefix="ubuntu/" \ http://cloud-images.ubuntu.com/releases/ streams/v1/index.json simplestreams-0.1.0~bzr341/bin/0000755000000000000000000000000012314577450014452 5ustar 00000000000000simplestreams-0.1.0~bzr341/debian/0000755000000000000000000000000012314577450015124 5ustar 00000000000000simplestreams-0.1.0~bzr341/doc/0000755000000000000000000000000012314577450014447 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/0000755000000000000000000000000012314577450015520 5ustar 00000000000000simplestreams-0.1.0~bzr341/pylintrc0000644000000000000000000000067012314577450015474 0ustar 00000000000000[General] init-hook='import sys; sys.path.append("tests/")' [MESSAGES CONTROL] # See: http://pylint-messages.wikidot.com/all-codes # W0142: *args and **kwargs are fine. # W0511: TODOs in code comments are fine. # W0702: No exception type(s) specified # W0703: Catch "Exception" # C0103: Invalid name # C0111: Missing docstring disable=W0142,W0511,W0702,W0703,C0103,C0111 [REPORTS] reports=no include-ids=yes [FORMAT] max-line-length=79 simplestreams-0.1.0~bzr341/setup.py0000644000000000000000000000127212314577450015416 0ustar 00000000000000VERSION = '0.1.0' from distutils.core import setup from glob import glob import os def is_f(p): return os.path.isfile(p) setup( name="python-simplestreams", description='Library and tools for using Simple Streams data', version=VERSION, author='Scott Moser', author_email='scott.moser@canonical.com', license="AGPL", url='http://launchpad.net/simplestreams/', packages=['simplestreams', 'simplestreams.mirrors', 'simplestreams.objectstores'], scripts=glob('bin/*'), data_files=[ ('/usr/lib/simplestreams', glob('tools/hook-*')), ('/usr/share/doc/simplestreams', [f for f in glob('doc/*') if is_f(f)]), ] ) simplestreams-0.1.0~bzr341/simplestreams/0000755000000000000000000000000012314577450016572 5ustar 00000000000000simplestreams-0.1.0~bzr341/tests/0000755000000000000000000000000012314577450015044 5ustar 00000000000000simplestreams-0.1.0~bzr341/tools/0000755000000000000000000000000012314577450015042 5ustar 00000000000000simplestreams-0.1.0~bzr341/bin/sstream-mirror0000755000000000000000000001024612314577450017371 0ustar 00000000000000#!/usr/bin/python3 # Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser <scott.moser@canonical.com> # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. 
import argparse import sys from simplestreams import filters from simplestreams import log from simplestreams import mirrors from simplestreams import objectstores from simplestreams import util def main(): parser = argparse.ArgumentParser() parser.add_argument('--keep', action='store_true', default=False, help='keep items in target up to MAX items ' 'even after they have fallen out of the source') parser.add_argument('--max', type=int, default=None, help='store at most MAX items in the target') parser.add_argument('--path', default=None, help='sync from index or products file in mirror') parser.add_argument('--no-item-download', action='store_true', default=False, help='do not download items with a "path"') parser.add_argument('--dry-run', action='store_true', default=False, help='only report what would be done') parser.add_argument('--mirror', action='append', default=[], dest="mirrors", help='additional mirrors to find referenced files') parser.add_argument('--verbose', '-v', action='count', default=0) parser.add_argument('--log-file', default=sys.stderr, type=argparse.FileType('w')) parser.add_argument('--keyring', action='store', default=None, help='keyring to be specified to gpg via --keyring') parser.add_argument('source_mirror') parser.add_argument('output_d') parser.add_argument('filters', nargs='*', default=[]) args = parser.parse_args() (mirror_url, initial_path) = util.path_from_mirror_url(args.source_mirror, args.path) def policy(content, path): # pylint: disable=W0613 if initial_path.endswith('sjson'): return util.read_signed(content, keyring=args.keyring) else: return content filter_list = filters.get_filters(args.filters) mirror_config = {'max_items': args.max, 'keep_items': args.keep, 'filters': filter_list, 'item_download': not args.no_item_download} level = (log.ERROR, log.INFO, log.DEBUG)[min(args.verbose, 2)] log.basicConfig(stream=args.log_file, level=level) smirror = mirrors.UrlMirrorReader(mirror_url, mirrors=args.mirrors, policy=policy) tstore = objectstores.FileStore(args.output_d) drmirror = mirrors.DryRunMirrorWriter(config=mirror_config, objectstore=tstore) drmirror.sync(smirror, initial_path) def print_diff(char, items): for pedigree, path, size in items: fmt = "{char} {pedigree} {path} {size} Mb" size = int(size / (1024 * 1024)) print(fmt.format( char=char, pedigree=' '.join(pedigree), path=path, size=size)) print_diff('+', drmirror.downloading) print_diff('-', drmirror.removing) print("%d Mb change" % (drmirror.size / (1024 * 1024))) if args.dry_run: return True tmirror = mirrors.ObjectFilterMirror(config=mirror_config, objectstore=tstore) tmirror.sync(smirror, initial_path) if __name__ == '__main__': main() # vi: ts=4 expandtab syntax=python simplestreams-0.1.0~bzr341/bin/sstream-query0000755000000000000000000001116612314577450017226 0ustar 00000000000000#!/usr/bin/python3 # Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . 
from simplestreams import filters from simplestreams import mirrors from simplestreams import log from simplestreams import util import argparse import errno import pprint import signal import sys FORMAT_PRETTY = "PRETTY" def warn(msg): sys.stderr.write("WARN: %s" % msg) class FilterMirror(mirrors.BasicMirrorWriter): def __init__(self, config=None): super(FilterMirror, self).__init__(config=config) if config is None: config = {} self.config = config self.filters = config.get('filters', []) outfmt = config.get('output_format') if not outfmt: outfmt = "%s" self.output_format = outfmt def load_products(self, path=None, content_id=None): return {'content_id': content_id, 'products': {}} def filter_item(self, data, src, target, pedigree): return filters.filter_item(self.filters, data, src, pedigree) def insert_item(self, data, src, target, pedigree, contentsource): # src and target are top level products:1.0 # data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]] # contentsource is a ContentSource if 'path' exists in data or None data = util.products_exdata(src, pedigree) if 'path' in data: data.update({'item_url': contentsource.url}) if self.output_format == FORMAT_PRETTY: pprint.pprint(data) else: try: print(self.output_format % (data)) except KeyError as e: sys.stderr.write("output format failed. Missing %s\n" % e.args) sys.stderr.write("item: %s\n" % data) def main(): parser = argparse.ArgumentParser() parser.add_argument('--max', type=int, default=None, dest='max_items', help='store at most MAX items in the target') parser.add_argument('--path', default=None, help='sync from index or products file in mirror') fmt_group = parser.add_mutually_exclusive_group() fmt_group.add_argument('--output-format', '-o', action='store', dest='output_format', default=None, help="specify output format per python str.format") fmt_group.add_argument('--pretty', action='store_const', const=FORMAT_PRETTY, dest='output_format', help="pretty print output") parser.add_argument('--verbose', '-v', action='count', default=0) parser.add_argument('--log-file', default=sys.stderr, type=argparse.FileType('w')) parser.add_argument('--keyring', action='store', default=None, help='keyring to be specified to gpg via --keyring') parser.add_argument('mirror_url') parser.add_argument('filters', nargs='*', default=[]) cmdargs = parser.parse_args() (mirror_url, path) = util.path_from_mirror_url(cmdargs.mirror_url, cmdargs.path) level = (log.ERROR, log.INFO, log.DEBUG)[min(cmdargs.verbose, 2)] log.basicConfig(stream=cmdargs.log_file, level=level) initial_path = path def policy(content, path): # pylint: disable=W0613 if initial_path.endswith('sjson'): return util.read_signed(content, keyring=cmdargs.keyring) else: return content smirror = mirrors.UrlMirrorReader(mirror_url, policy=policy) filter_list = filters.get_filters(cmdargs.filters) cfg = {'max_items': cmdargs.max_items, 'filters': filter_list, 'output_format': cmdargs.output_format} tmirror = FilterMirror(config=cfg) try: tmirror.sync(smirror, path) except IOError as e: if e.errno == errno.EPIPE: sys.exit(0x80 | signal.SIGPIPE) raise if __name__ == '__main__': main() # vi: ts=4 expandtab syntax=python simplestreams-0.1.0~bzr341/bin/sstream-sync0000755000000000000000000001340712314577450017035 0ustar 00000000000000#!/usr/bin/python3 # Copyright (C) 2013 Canonical Ltd. 
# # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . from simplestreams import mirrors from simplestreams.mirrors import command_hook from simplestreams import log from simplestreams import util import argparse import errno import os import signal import sys import yaml def which(program): def is_exe(fpath): return os.path.isfile(fpath) and os.access(fpath, os.X_OK) fpath, _fname = os.path.split(program) if fpath: if is_exe(program): return program else: for path in os.environ["PATH"].split(os.pathsep): path = path.strip('"') exe_file = os.path.join(path, program) if is_exe(exe_file): return exe_file return None def warn(msg): sys.stderr.write("WARN: %s" % msg) def main(): parser = argparse.ArgumentParser() defhook = command_hook.DEFAULT_HOOK_NAME hooks = [("--hook-%s" % hook.replace("_", "-"), hook, False) for hook in command_hook.HOOK_NAMES] hooks.append(('--hook', defhook, False,)) parser.add_argument('--config', '-c', help='read config file', type=argparse.FileType('rb')) for (argname, cfgname, _required) in hooks: parser.add_argument(argname, dest=cfgname, required=False) parser.add_argument('--keep', action='store_true', default=False, dest='keep_items', help='keep items in target up to MAX items ' 'even after they have fallen out of the source') parser.add_argument('--max', type=int, default=None, dest='max_items', help='store at most MAX items in the target') parser.add_argument('--item-skip-download', action='store_true', default=False, help='Do not download items that are to be inserted.') parser.add_argument('--delete', action='store_true', default=False, dest='delete_filtered_items', help='remove filtered items from the target') parser.add_argument('--path', default=None, help='sync from index or products file in mirror') parser.add_argument('--verbose', '-v', action='count', default=0) parser.add_argument('--log-file', default=sys.stderr, type=argparse.FileType('w')) parser.add_argument('--keyring', action='store', default=None, help='keyring to be specified to gpg via --keyring') parser.add_argument('mirror_url') cmdargs = parser.parse_args() known_cfg = [('--item-skip-download', 'item_skip_download', False), ('--max', 'max_items', False), ('--keep', 'keep_items', False), ('--delete', 'delete_filtered_items', False), ('mirror_url', 'mirror_url', True), ('--path', 'path', True)] known_cfg.extend(hooks) cfg = {} if cmdargs.config: cfg = yaml.safe_load(cmdargs.config) if not cfg: cfg = {} known_names = [i[1] for i in known_cfg] unknown = [key for key in cfg if key not in known_names] if unknown: warn("unknown keys in config: %s\n" % str(unknown)) missing = [] fallback = cfg.get(defhook, getattr(cmdargs, defhook, None)) for (argname, cfgname, _required) in known_cfg: val = getattr(cmdargs, cfgname) if val is not None: cfg[cfgname] = val if val == "": cfg[cfgname] = None if ((cfgname in command_hook.HOOK_NAMES or cfgname == defhook) and cfg.get(cfgname) is not None): if which(cfg[cfgname]) is 
None: msg = "invalid input for %s. '%s' is not executable\n" sys.stderr.write(msg % (argname, val)) sys.exit(1) if (cfgname in command_hook.REQUIRED_FIELDS and cfg.get(cfgname) is None and not fallback): missing.append((argname, cfgname,)) pfm = util.path_from_mirror_url (cfg['mirror_url'], cfg['path']) = pfm(cfg['mirror_url'], cfg.get('path')) if missing: sys.stderr.write("must provide input for (--hook/%s for default):\n" % defhook) for (flag, cfg) in missing: sys.stderr.write(" cmdline '%s' or cfgname '%s'\n" % (flag, cfg)) sys.exit(1) level = (log.ERROR, log.INFO, log.DEBUG)[min(cmdargs.verbose, 2)] log.basicConfig(stream=cmdargs.log_file, level=level) def policy(content, path): # pylint: disable=W0613 if cfg['path'].endswith('sjson'): return util.read_signed(content, keyring=cmdargs.keyring) else: return content smirror = mirrors.UrlMirrorReader(cfg['mirror_url'], policy=policy) tmirror = command_hook.CommandHookMirror(config=cfg) try: tmirror.sync(smirror, cfg['path']) except IOError as e: if e.errno == errno.EPIPE: sys.exit(0x80 | signal.SIGPIPE) raise if __name__ == '__main__': main() # vi: ts=4 expandtab syntax=python simplestreams-0.1.0~bzr341/debian/changelog.trunk0000644000000000000000000000023412314577450020137 0ustar 00000000000000simplestreams (0.1.0~bzrREVNO-1~trunk1) UNRELEASED; urgency=low * Initial release -- Scott Moser Tue, 26 Mar 2013 01:10:01 +0000 simplestreams-0.1.0~bzr341/debian/compat0000644000000000000000000000000212314577450016322 0ustar 000000000000007 simplestreams-0.1.0~bzr341/debian/control0000644000000000000000000000377712314577450016545 0ustar 00000000000000Source: simplestreams Section: python Priority: extra Standards-Version: 3.9.4 Maintainer: Ubuntu Developers Build-Depends: debhelper (>= 7), python-all, python-setuptools, python3, python3-nose, python3-setuptools, python3-yaml Homepage: http://launchpad.net/simplestreams X-Python-Version: >= 2.7 X-Python3-Version: >= 3.2 Package: simplestreams Architecture: all Priority: extra Depends: python3-simplestreams, python3-yaml, ${misc:Depends}, ${python3:Depends} Replaces: python-simplestreams (<= 0.1.0~bzr230) Conflicts: python-simplestreams (<= 0.1.0~bzr230) Description: Library and tools for using Simple Streams data This package provides a client for interacting with simple streams data as is produced to describe Ubuntu's cloud images. Package: python3-simplestreams Architecture: all Priority: extra Depends: gnupg, ${misc:Depends}, ${python3:Depends} Suggests: python3-requests (>= 1.1) Description: Library and tools for using Simple Streams data This package provides a client for interacting with simple streams data as is produced to describe Ubuntu's cloud images. Package: python-simplestreams Architecture: all Priority: extra Depends: gnupg, python-boto, ${misc:Depends}, ${python:Depends} Suggests: python-requests (>= 1.1) Description: Library and tools for using Simple Streams data This package provides a client for interacting with simple streams data as is produced to describe Ubuntu's cloud images. Package: python-simplestreams-openstack Architecture: all Priority: extra Depends: python-glanceclient, python-keystoneclient, python-simplestreams, python-swiftclient, ${misc:Depends} Description: Library and tools for using Simple Streams data This package depends on libraries necessary to use the openstack dependent functionality in simplestreams. That includes interacting with glance, swift and keystone. 
simplestreams-0.1.0~bzr341/debian/copyright0000644000000000000000000000115612314577450017062 0ustar 00000000000000Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: simplestreams Upstream-Contact: Scott Moser <scott.moser@canonical.com> Source: https://launchpad.net/simplestreams Files: * Copyright: 2013, Canonical Ltd. License: AGPLv3 GNU AFFERO GENERAL PUBLIC LICENSE Version 3, 19 November 2007 . Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. . The complete text of the AGPL version 3 can be seen in http://www.gnu.org/licenses/agpl-3.0.html simplestreams-0.1.0~bzr341/debian/python-simplestreams.install0000644000000000000000000000005412314577450022722 0ustar 00000000000000usr/lib/python2*/*-packages/simplestreams/* simplestreams-0.1.0~bzr341/debian/python3-simplestreams.install0000644000000000000000000000005412314577450023005 0ustar 00000000000000usr/lib/python3*/*-packages/simplestreams/* simplestreams-0.1.0~bzr341/debian/rules0000755000000000000000000000122512314577450016204 0ustar 00000000000000#!/usr/bin/make -f PYVERS := $(shell pyversions -r) PY3VERS := $(shell py3versions -r) %: dh $@ --with=python2,python3 override_dh_auto_install: dh_auto_install set -ex; for python in $(PY3VERS) $(PYVERS); do \ $$python setup.py build --executable=/usr/bin/python3 && \ $$python setup.py install --root=$(CURDIR)/debian/tmp --install-layout=deb; \ done # there are no packages for python3-{boto, swiftclient, # glanceclient, keystoneclient}, so do not package the bits # that would depend on them. for bad in openstack mirrors/glance objectstores/swift objectstores/s3; do \ rm $(CURDIR)/debian/tmp/usr/lib/python3/*/simplestreams/$$bad.py; done simplestreams-0.1.0~bzr341/debian/simplestreams.install0000644000000000000000000000011012314577450021404 0ustar 00000000000000usr/bin/* usr/lib/simplestreams/hook-debug usr/share/doc/simplestreams/ simplestreams-0.1.0~bzr341/debian/source/0000755000000000000000000000000012314577450016424 5ustar 00000000000000simplestreams-0.1.0~bzr341/debian/source/format0000644000000000000000000000001412314577450017632 0ustar 000000000000003.0 (quilt) simplestreams-0.1.0~bzr341/doc/README0000644000000000000000000001444112314577450015333 0ustar 00000000000000== Simple Sync Format == Simple Sync format consists of 2 different file formats. * A "products list" (format=products:1.0) * An "index" (format=index:1.0) Files contain json formatted data. Data can come in one of 2 formats: * Json file: .json A .json file can be accompanied by a .json.gpg file which contains signature data for the .json file. Due to race conditions caused by the fact that .json and .json.gpg may not be able to be obtained from a storage location at the same time, the preferred delivery of signed data is via the '.sjson' format. * Signed Json File: .sjson This is a gpg cleartext signed message. http://rfc-ref.org/RFC-TEXTS/2440/chapter7.html The payload is the same content that would be included in the json file. Special dictionary entries: * 'path': If a 'product' dictionary in an index file or an item dictionary in a products file contains a 'path' element, then that indicates there is content to be downloaded associated with that element. A 'path' value must be relative to the base of the mirror. * 'md5', 'sha256', 'sha512': If an item contains a 'path' and one of these fields, then the content referenced must have the given checksum(s). 
== Simple Sync Mirrors == The default/expected location of an index file is 'streams/v1/index.sjson' or 'streams/v1/index.json' underneath the top level of a mirror. 'path' entries as described above are relative to the top level of a mirror, not relative to the location of the index. For example: http://example.com/my-mirror/ would be the top level of a mirror, and the expected path of an index is http://example.com/my-mirror/streams/v1/index.sjson To describe a file that lives at: http://example.com/my-mirror/streams/v1/products.sjson The 'path' element must be: 'streams/v1/products.sjson' == Products List == products list: (format=products:1.0) For Ubuntu, a product is 'server:precise:amd64' A products list has a 'content_id' and multiple products; a product has multiple versions; a version has multiple items. An item can be globally uniquely identified by the path to it. Ie, the 'content_id' for a products list and the key in each element of the tree form a unique tuple for that item. Given: content_id = tree['content_id'] prod_name = tree['products'].keys()[0] ver_name = tree['products'][prod_name]['versions'].keys()[0] item_name = tree['products'][prod_name]['versions'][ver_name]['items'].keys()[0] that unique tuple is: (content_id, prod_name, ver_name, item_name) The following is a description of each of these fields: * content_id is formed similarly to an iSCSI qualified name (IQN) An example is: com.ubuntu.cloud:released:aws It should have a reverse domain portion followed by a portion that represents a name underneath that domain. * product_name: product name is unique within a products list. The same product name may appear in multiple products lists. For example, in Ubuntu, 'server:precise:amd64' will appear in both 'com.ubuntu.cloud:released:aws' and 'com.ubuntu.cloud:released:download'. That name collision should imply that the two separate pairs are equivalent in some manner. * version_name: A 'version' of a product represents a release, build or collection of that product. A key in the 'versions' dictionary should be sortable by the rules of a 'LANG=C sort()'. That allows the client to trivially order versions to find the most recent. Ubuntu uses "serial" numbers for these keys, in the format YYYYMMDD[.0-9]. * item_name: Inside of a version, there may be multiple items. An example would be a binary build and a source tarball. For Ubuntu download images, these are things like '.tar.gz', '-disk1.img' and '-root.tar.gz'. The item name does not need to be user-friendly, but it must be consistent. Because this id is unique amongst the given 'version_name', a client needs only to store that key, rather than trying to determine which keys inside the item dictionary identify it. An 'item' dictionary may contain a 'path' element. 'path' entries for a given item must be immutable. That is, for a given 'path' under a mirror, the content must never change. == Index == This is an index of products files that are available. It has a top level 'index' dictionary. Each entry in that dictionary is a content_id of a products file. The entry should have a 'path' item that indicates where to download the product. All other data inside the product entry is not required, but helps a client to find what it's looking for.
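As an illustration of how the index and products formats fit together (again a sketch, not code from this archive: it assumes the current directory is the top level of a mirror laid out like the examples shipped further below), a client could enumerate every unique item tuple like this:

    import json

    with open('streams/v1/index.json') as fp:
        index = json.load(fp)

    for content_id, entry in index['index'].items():
        # 'path' is relative to the top level of the mirror
        with open(entry['path']) as fp:
            tree = json.load(fp)
        for prod_name, product in tree['products'].items():
            for ver_name, version in product['versions'].items():
                for item_name in version['items']:
                    print((content_id, prod_name, ver_name, item_name))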
* stream: a list of item groups of the same "type". this is 'stream:1.0' format. * stream collection: a list of content streams A stream collection is simply a way to provide an index of known content streams, and information about them. This is 'stream-collection:1.0' Useful definitions: * item group an item group is a list of like items. Ie, all produced by the same build. requirements: * serial: a 'serial' entry that can be sorted by YYYYMMDD[.X] * items: a list of items Example item groups are: * output of the amd64 cloud image build done on 2012-04-04 * amd64 images from the cirros release version 0.3.1 * item There are 1 or more items in an item group. requirements: * name: must be unique within the item group. special fields: * path: If an item has a 'path', then the target must be obtainable and should be downloaded when mirroring. * md5sum: stores the checksum Example: * "disk1.img" produced from the amd64 cloud image build done on 2012-04-04 * -root.tar.gz produced from the same build. Notes: * index files are not required to be signed, as they only contain references to other content that is signed, and that is hosted on the same mirror. simplestreams-0.1.0~bzr341/doc/files/0000755000000000000000000000000012314577450015551 5ustar 00000000000000simplestreams-0.1.0~bzr341/doc/files/my.cfg0000644000000000000000000000135412314577450016662 0ustar 00000000000000# This is an example CommandHookMirror config # You can utilize it with: # export MIRROR_D=mirror.out; ./tools/cmd-hook-sync hook.cfg doc/example ${MIRROR_D} # stream_load: | for d in "${MIRROR_D}/%(iqn)s/"*; do [ -d "$d" ] && echo "${d##*/}"; done; true item_insert: | echo "%(_hookname)s: %(iqn)s/%(serial)s/%(name)s" cp "%(path_local)s" "$MIRROR_D/%(iqn)s/%(serial)s/%(name)s" group_insert_pre: | echo "%(_hookname)s: %(iqn)s/%(serial)s" mkdir -p "$MIRROR_D/%(iqn)s/%(serial)s" group_remove_post: | echo "%(_hookname)s: %(iqn)s/%(serial)s" rm -Rf "$MIRROR_D/%(iqn)s/%(serial)s" # ignore files unless they match the provided regex item_filter: echo "%(name)s" | grep -q "disk1.img" # vi: ts=4 expandtab syntax=yaml simplestreams-0.1.0~bzr341/doc/files/openstack-sync.cfg0000644000000000000000000000312112314577450021170 0ustar 00000000000000# This is an example CommandHookMirror config for uploading to glance # PYTHONPATH=$PWD ./tools/cmd-hook-sync ./my.cfg \ # http://download.cirros-cloud.net/streams/v1/unsigned/streams.yaml stream_load: | set -e; set -f; iqn="%(iqn)s" output=$(glance image-list --property-filter "iqn=$iqn") ids=$(echo "$output" | awk '$2 ~ uuid { print $2 }' "uuid=[0-9a-f-]{36}") for id in $ids; do out=$(glance image-show $id) serial=$(echo "$out" | awk '$2 == "Property" && $3 == sname { print $5 }' sname="'serial'") # for debug, list what we find to stderr echo "$iqn $serial $id" 1>&2 # report we have the given serial for this iqn echo "$serial" done item_insert: | iqn="%(iqn)s" serial="%(serial)s" path_local="%(path_local)s" [ "${arch}" = "amd64" ] && arch="x86_64" uuid=$(uuidgen) glance image-create --disk-format=qcow2 --container-format=bare \ "--name=${pubname:-${name##*/}}" "--id=$uuid" \ ${arch:+"--property=architecture=${arch}"} \ "--property=iqn=$iqn" "--property=serial=$serial" \ ${md5:+"--checksum=$md5"} \ "--file=${path_local}" group_remove_pre: | iqn="%(iqn)s" serial="%(serial)s" set -e; set -f; output=$(glance image-list
"--property-filter=iqn=$iqn" \ "--property-filter=serial=$serial") ids=$(echo "$output" | awk '$2 ~ uuid { print $2 }' "uuid=[0-9a-f-]{36}") for id in $ids; do echo "remove $iqn $serial $id" glance image-delete "$id"; done # ignore files unless they have match the provided regex item_filter: echo "%(name)s" | grep -q "disk.img$" # vi: ts=4 expandtab syntax=yaml simplestreams-0.1.0~bzr341/examples/cirros/0000755000000000000000000000000012314577450017021 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/0000755000000000000000000000000012314577450017332 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/keys/0000755000000000000000000000000012314577450016473 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/cirros/streams/0000755000000000000000000000000012314577450020477 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/cirros/streams/v1/0000755000000000000000000000000012314577450021025 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/cirros/streams/v1/index.json0000644000000000000000000000153412314577450023032 0ustar 00000000000000{ "index": { "net.cirros-cloud:devel:download": { "datatype": "image-downloads", "path": "streams/v1/net.cirros-cloud:devel:download.json", "updated": "Sat, 04 May 2013 01:58:57 +0000", "products": [ "net.cirros-cloud.devel:standard:0.3:arm", "net.cirros-cloud.devel:standard:0.3:i386", "net.cirros-cloud.devel:standard:0.3:x86_64" ], "format": "products:1.0" }, "net.cirros-cloud:released:download": { "datatype": "image-downloads", "path": "streams/v1/net.cirros-cloud:released:download.json", "updated": "Sat, 04 May 2013 01:58:57 +0000", "products": [ "net.cirros-cloud:standard:0.3:i386", "net.cirros-cloud:standard:0.3:x86_64", "net.cirros-cloud:standard:0.3:arm" ], "format": "products:1.0" } }, "updated": "Sat, 04 May 2013 01:58:57 +0000", "format": "index:1.0" } simplestreams-0.1.0~bzr341/examples/cirros/streams/v1/net.cirros-cloud:devel:download.json0000644000000000000000000002564312314577450030060 0ustar 00000000000000{ "datatype": "image-downloads", "updated": "Sat, 04 May 2013 01:58:57 +0000", "content_id": "net.cirros-cloud:devel:download", "products": { "net.cirros-cloud.devel:standard:0.3:arm": { "arch": "arm", "stream": "devel", "versions": { "20130111": { "items": { "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1~pre4/cirros-0.3.1~pre4-arm-uec.tar.gz", "sha256": "2b77d7c4225793b7271f550310f6e0827b64434b88fec589dd97e98e8828d254", "md5": "797e2d488c799eab0a8eb09a9c1ff4a3", "size": 7314153 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1~pre4/cirros-0.3.1~pre4-arm-rootfs.img.gz", "sha256": "ff2d70f474ee78209083d88caa04add60ada5cb71cfec57a69f6b696ef57eee2", "md5": "986c9cabd412f12cb5027b7d7eb4ec03", "size": 10949810 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1~pre4/cirros-0.3.1~pre4-arm-lxc.tar.gz", "sha256": "a9225fed02b071d0cf9a60cb3d17578d19714b9de3fe4083e34eb3c1110f3f83", "md5": "55b092bde364aad125a1db57c932f1d0", "size": 3466163 } }, "version": "0.3.1~pre4", "pubname": "cirros-0.3.1~pre4-arm" }, "20120611": { "items": { "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1~pre1/cirros-0.3.1~pre1-arm-uec.tar.gz", "sha256": "679823406624c380a1d3c5af659a41aab25ff42007b0eb0a7afd4b58142e738e", "md5": "22bb53be0daf975e35a4fbc856ae89c2", "size": 7173849 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1~pre1/cirros-0.3.1~pre1-arm-rootfs.img.gz", "sha256": "5dd775b57a624108c82838a47e57aee576955ca162c2e8c2e50ee8ffea5f93d2", "md5": "ad7422c2124c59466724dea9658db20f", "size": 
10762344 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1~pre1/cirros-0.3.1~pre1-arm-lxc.tar.gz", "sha256": "8e619113cfeb0a799b89dd570fd3c978ac83b5f78ea7b257a3b2e2a2bb2de30c", "md5": "b0a7d585b42d8cfff18e2a76c5115495", "size": 3428595 } }, "version": "0.3.1~pre1", "pubname": "cirros-0.3.1~pre1-arm" }, "20120827": { "items": { "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1~pre3/cirros-0.3.1~pre3-arm-uec.tar.gz", "sha256": "cc5362c2c9e84547100aed3e6df0011e2b4d91c47243c85de2643d1d72ed3946", "md5": "d1703407ad1483e2bbf68d4d78987581", "size": 7302383 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1~pre3/cirros-0.3.1~pre3-arm-rootfs.img.gz", "sha256": "dc5c0f6f02592d6af164ae175b5f5e69d5d016734672982e9b4f29294782e61a", "md5": "c98688744bdf66b56b19405acfc48966", "size": 10922886 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1~pre3/cirros-0.3.1~pre3-arm-lxc.tar.gz", "sha256": "136e2d3a4a870eb6a4b3dcbde44209bcd050d36b3c20c2a1f21f4dd7f6e00004", "md5": "fbc83952ca51071c1aea8b0d82c51d1b", "size": 3456925 } }, "version": "0.3.1~pre3", "pubname": "cirros-0.3.1~pre3-arm" } } }, "net.cirros-cloud.devel:standard:0.3:i386": { "arch": "i386", "stream": "devel", "versions": { "20130111": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.1~pre4/cirros-0.3.1~pre4-i386-disk.img", "sha256": "ac09fe84e4aa5315b017ee8a522dcf0f2780ebd27b9a9eaca56a24c5e0818977", "md5": "108c6f694a10b5376bde18e71a238ae0", "size": 12380672 }, "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1~pre4/cirros-0.3.1~pre4-i386-uec.tar.gz", "sha256": "da62e3351c60bf40ab2b07d808aed8e0cbeed1df0b766c3870aab29330569614", "md5": "c078a3e7b7c758a69217391f54fa4cba", "size": 8195614 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1~pre4/cirros-0.3.1~pre4-i386-rootfs.img.gz", "sha256": "3944219a3957a787735a54bdc267e2daf6c7c2e02cb4fac133ae8fa6ba4bfbaa", "md5": "538576ec0ca05ab9eaf1ada9fc7f3084", "size": 11566741 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1~pre4/cirros-0.3.1~pre4-i386-lxc.tar.gz", "sha256": "a4ff395c1e9bf8dda5b7e25f75fdc90f9726631ffb5f6df821be0da0209dd7ef", "md5": "ae66a7dd1c050f4d313d1fe8febd87d0", "size": 3192308 } }, "version": "0.3.1~pre4", "pubname": "cirros-0.3.1~pre4-i386" }, "20120611": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.1~pre1/cirros-0.3.1~pre1-i386-disk.img", "sha256": "3a453c98a4f46cb9b8acc34e4d2b39b8d9315082f80cefc4659320741ab94fcf", "md5": "483f1377c284b5ba61e8038a7bb53849", "size": 12204544 }, "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1~pre1/cirros-0.3.1~pre1-i386-uec.tar.gz", "sha256": "ceaea29b7808e434712cac7380659367aa41e049810fab53a36d6ecfe5ae014c", "md5": "b5a730c2cfd08c78e8af41be06082a46", "size": 8165911 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1~pre1/cirros-0.3.1~pre1-i386-rootfs.img.gz", "sha256": "751fc761ab9e1db5ddb156cb8d54f0bb67e68bf82a92e700f9e280d6da0cad79", "md5": "e413bfe8627aef63bc8c4cb96954f1e3", "size": 11494115 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1~pre1/cirros-0.3.1~pre1-i386-lxc.tar.gz", "sha256": "e727a344cc19caf173b376bdf1321f13dded5eaa83d47eba0c65354e7a6d5c81", "md5": "04d84c5ed7f4d2ced01a7da82a4d573b", "size": 3158748 } }, "version": "0.3.1~pre1", "pubname": "cirros-0.3.1~pre1-i386" }, "20120827": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.1~pre3/cirros-0.3.1~pre3-i386-disk.img", "sha256": "4f531df04463eb9fd4a4ca05d013f6e48ef66bf87287a3d22037a6d845784390", "md5": "919b7f3c9d1740f57cc208acbf098ae5", "size": 12287488 }, "uec.tar.gz": { 
"ftype": "uec.tar.gz", "path": "0.3.1~pre3/cirros-0.3.1~pre3-i386-uec.tar.gz", "sha256": "5df8c17124c5f0b5a38c8cdd763c6432d5ca8b32f6148ffc8e77486a36cdd0f5", "md5": "04dfffbb656e536775d522148bc031b2", "size": 8184352 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1~pre3/cirros-0.3.1~pre3-i386-rootfs.img.gz", "sha256": "d99a1eee31ee469eaf5cac51de711071282112b8b1d398759db92eefe1daf83e", "md5": "a9f6519bf540331d74c3f938b7752ce9", "size": 11546128 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1~pre3/cirros-0.3.1~pre3-i386-lxc.tar.gz", "sha256": "9cf93aeefdb00601bdf44401adf5c89473883ae408fee0a9d65ed8dd355c2cac", "md5": "8f59168a9f5b4962d3327f50d5d18ad7", "size": 3186156 } }, "version": "0.3.1~pre3", "pubname": "cirros-0.3.1~pre3-i386" } } }, "net.cirros-cloud.devel:standard:0.3:x86_64": { "arch": "x86_64", "stream": "devel", "versions": { "20130111": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.1~pre4/cirros-0.3.1~pre4-x86_64-disk.img", "sha256": "6f50d6a8874610ad25196cad3296e0cb55274fb3aa6963cef04012b413cca3af", "md5": "c32b60592301c1cf714a93fea0a25352", "size": 13118976 }, "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1~pre4/cirros-0.3.1~pre4-x86_64-uec.tar.gz", "sha256": "bb31bb2372f66949799f0ee7f272078e1d0339f3f790d40f1dbcadf9c24225f3", "md5": "414dc72831718ebaaf8a994b59e71f62", "size": 8633179 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1~pre4/cirros-0.3.1~pre4-x86_64-rootfs.img.gz", "sha256": "8f14fa9de3deee186fc2fa7778913f93d93f0fb0d2e0436d7e2af92f0b98f5e6", "md5": "8cc226def4fa6a50b4dbb0ebae57675e", "size": 12343217 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1~pre4/cirros-0.3.1~pre4-x86_64-lxc.tar.gz", "sha256": "867bc516aa0721ed7c4e7069249f8f891c7f186b7d7cecc3c131ef4225c6df4d", "md5": "019857146c4d5c2f99fa03c04ba300db", "size": 3534613 } }, "version": "0.3.1~pre4", "pubname": "cirros-0.3.1~pre4-x86_64" }, "20120611": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.1~pre1/cirros-0.3.1~pre1-x86_64-disk.img", "sha256": "0cfdda25485a0b51cc9356199446028403e0745699011265ff304dd51ce3b36b", "md5": "8875836383dcf5de32e708945a5455b5", "size": 12992512 }, "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1~pre1/cirros-0.3.1~pre1-x86_64-uec.tar.gz", "sha256": "1d45297df580bef76ec2c2303124e4673b76c2b61c4a1064581171b0f8e35a79", "md5": "af842e35f335fc55c78311303474b121", "size": 8602015 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1~pre1/cirros-0.3.1~pre1-x86_64-rootfs.img.gz", "sha256": "99c2c0f64f51d1184311597c87f3911799c345786a6987fd76fe742d5ef29481", "md5": "980984e8787426059edf4ab6fe1e680f", "size": 12277772 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1~pre1/cirros-0.3.1~pre1-x86_64-lxc.tar.gz", "sha256": "f32cc1a108322c9b8967d97c301d4d9412bf31a321cea026773d932a5594dab6", "md5": "d68c898e4f4e8afc9221e7c8f4f34e7d", "size": 3497630 } }, "version": "0.3.1~pre1", "pubname": "cirros-0.3.1~pre1-x86_64" }, "20120827": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.1~pre3/cirros-0.3.1~pre3-x86_64-disk.img", "sha256": "3471124e2943922fe2a14ef06ef051c8b21e43835de7a06c58b33035b4124943", "md5": "4b65df206c61e30af1947ee21d5aad40", "size": 13089280 }, "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1~pre3/cirros-0.3.1~pre3-x86_64-uec.tar.gz", "sha256": "901b13ea2f4f9216e859c1f90f5acc36b85468043ba243b75a919a9e34a4a70d", "md5": "0a4cb79338406b275ad0d5be08c9e0dd", "size": 8623028 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": 
"0.3.1~pre3/cirros-0.3.1~pre3-x86_64-rootfs.img.gz", "sha256": "cba564b4f9176ba0efceb09f646a38485c647e938b7d1150c1b8d00e012905b1", "md5": "1c649a0761e80e6e81b34371e3186deb", "size": 12328845 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1~pre3/cirros-0.3.1~pre3-x86_64-lxc.tar.gz", "sha256": "9abee551668c7afada5b550d6f5760c105ba8b0011e37cbe77985b3834862395", "md5": "9fb61f717b367c0d0459eada51fee35f", "size": 3527698 } }, "version": "0.3.1~pre3", "pubname": "cirros-0.3.1~pre3-x86_64" } } } }, "format": "products:1.0" } simplestreams-0.1.0~bzr341/examples/cirros/streams/v1/net.cirros-cloud:released:download.json0000644000000000000000000001624612314577450030544 0ustar 00000000000000{ "datatype": "image-downloads", "updated": "Sat, 04 May 2013 01:58:57 +0000", "content_id": "net.cirros-cloud:released:download", "products": { "net.cirros-cloud:standard:0.3:i386": { "arch": "i386", "stream": "released", "versions": { "20111020": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.0/cirros-0.3.0-i386-disk.img", "sha256": "3309675d0d409128b1c2651d576bc8092ca9ab93e15f3d3aa458f40947569b61", "md5": "90169ba6f09b5906a7f0755bd00bf2c3", "size": 9159168 }, "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.0/cirros-0.3.0-i386-uec.tar.gz", "sha256": "b57e0acb32852f89734ff11a511ae0897e8cecb41882d03551649289b6854a1b", "md5": "115ca6afa47089dc083c0dc9f9b7ff03", "size": 6596586 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.0/cirros-0.3.0-i386-rootfs.img.gz", "sha256": "981eb0be5deed016a6b7d668537a2ca8c7c8f8ac02f265acb63eab9a8adc4b98", "md5": "174d97541fadaf4e88d526f656c1e0a5", "size": 8566441 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.0/cirros-0.3.0-i386-lxc.tar.gz", "sha256": "cac7887628527604b65a89e8caa34096d51c2dc1acfe405c15db2fc58495142a", "md5": "e760bf470841f57c2c1bb426d407d169", "size": 1845928 } }, "version": "0.3.0", "pubname": "cirros-0.3.0-i386" }, "20130207": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.1/cirros-0.3.1-i386-disk.img", "sha256": "b8aa1ce5d11939eaa01205fc31348532a31b82790921d45ceb397fbe76492787", "md5": "6ba617eafc992e33e7c141c679225e53", "size": 12251136 }, "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1/cirros-0.3.1-i386-uec.tar.gz", "sha256": "88dda2e505b862a95d0e0044013addcaa3200e602150c9d73e32c2e29345d6f3", "md5": "52845de5142e58faf211e135d2b45721", "size": 8197543 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1/cirros-0.3.1-i386-rootfs.img.gz", "sha256": "27decd793c659063988dbb48ecd159a3f6664f9552d0fda9c58ce8984af5ba54", "md5": "cf14209217f41ea26844bf0b9cdd20ef", "size": 11565966 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1/cirros-0.3.1-i386-lxc.tar.gz", "sha256": "70a8c9c175589f7ac7054c6151cf2bb7eb9e210cefbe310446df2fb1a436b504", "md5": "fc404d9fc9dc5f0eb33ebaa03920a046", "size": 3191593 } }, "version": "0.3.1", "pubname": "cirros-0.3.1-i386" } } }, "net.cirros-cloud:standard:0.3:x86_64": { "arch": "x86_64", "stream": "released", "versions": { "20111020": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.0/cirros-0.3.0-x86_64-disk.img", "sha256": "648782e9287288630250d07531fed9944ecc3986764a6664f0bf6c050ec06afd", "md5": "50bdc35edb03a38d91b1b071afb20a3c", "size": 9761280 }, "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.0/cirros-0.3.0-x86_64-uec.tar.gz", "sha256": "043a3e090a5d76d23758a3919fcaff93f77ce7b97594d9d10fc8d00e85f83191", "md5": "f56d3cffa47b7d209d2b6905628f07b9", "size": 6957349 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": 
"0.3.0/cirros-0.3.0-x86_64-rootfs.img.gz", "sha256": "fb0f51c0ec8cfae9d2cef18e4897142e6a81688378fc52f3836ddb5d027a6761", "md5": "83cd7edde7c99ae520e26d338d397875", "size": 9184057 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.0/cirros-0.3.0-x86_64-lxc.tar.gz", "sha256": "0e78796a30641dd7184f752c302a87d66f1eba9985c876911e4f26b4d8ba4a88", "md5": "1a480e5150db4d93be2aa3c9ced94fa1", "size": 2115217 } }, "version": "0.3.0", "pubname": "cirros-0.3.0-x86_64" }, "20130207": { "items": { "disk.img": { "ftype": "disk.img", "path": "0.3.1/cirros-0.3.1-x86_64-disk.img", "sha256": "e01302fb2d2b13ae65226a0300335172e4487bbe60bb1e5c8b0843a25f126d34", "md5": "d972013792949d0d3ba628fbe8685bce", "size": 13147648 }, "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1/cirros-0.3.1-x86_64-uec.tar.gz", "sha256": "51eb03b83123a68d4f866c7c15b195204e62db9e33475509a38b79b3122cde38", "md5": "e1849016cb71a00808093b7bf986f36a", "size": 8633554 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1/cirros-0.3.1-x86_64-rootfs.img.gz", "sha256": "8af2572729827ef89eabcdba36eaa130abb04e83e5910a0c20009ef48f4be237", "md5": "069f6411ea252bf4b343953017f35968", "size": 12339939 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1/cirros-0.3.1-x86_64-lxc.tar.gz", "sha256": "a086fcf2758468972d45957dc78ec6317c06f356930dbbc6cad6a8d1855f135e", "md5": "38252f1a49fec0ebedebc820854497e0", "size": 3534564 } }, "version": "0.3.1", "pubname": "cirros-0.3.1-x86_64" } } }, "net.cirros-cloud:standard:0.3:arm": { "arch": "arm", "stream": "released", "versions": { "20111020": { "items": { "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.0/cirros-0.3.0-arm-uec.tar.gz", "sha256": "b871823406f818430f57744333b1bb17ce0047e551a316f316641f1bd70d9152", "md5": "c31e05f7829ad45f9d9995c35d232769", "size": 5761642 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.0/cirros-0.3.0-arm-rootfs.img.gz", "sha256": "731821e5293f6b66688a2127094537ce79c103f45a82b27b00a08992e3aa5a7a", "md5": "b268cfa7e634f070af401f8169adff79", "size": 7914961 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.0/cirros-0.3.0-arm-lxc.tar.gz", "sha256": "fcd3723c956a1c232730dc28513b466657cbe984232ba2fcc30a4e1f55aa91e9", "md5": "91add49e56cbe6b5004015a4d2f51dbc", "size": 2043822 } }, "version": "0.3.0", "pubname": "cirros-0.3.0-arm" }, "20130207": { "items": { "uec.tar.gz": { "ftype": "uec.tar.gz", "path": "0.3.1/cirros-0.3.1-arm-uec.tar.gz", "sha256": "09dcd3ea6f1d48b3519232973e4dc00fc5e73cbea974cda6b5f7cfa380c6b428", "md5": "d04e6f26aed123bba2c096581b269e7f", "size": 7314471 }, "rootfs.img.gz": { "ftype": "rootfs.img.gz", "path": "0.3.1/cirros-0.3.1-arm-rootfs.img.gz", "sha256": "9841beefe7a60969585d07cb9f403ce8e73fd56f47b13f8ed111443a0ca50fb3", "md5": "24205ff08d67b94b13588cee256e83b3", "size": 10945168 }, "lxc.tar.gz": { "ftype": "lxc.tar.gz", "path": "0.3.1/cirros-0.3.1-arm-lxc.tar.gz", "sha256": "2060e59e642b3b2bdf6e34aba3ed15f468bc6f9a8417fc196d01d29b2075493e", "md5": "7ddea367ecb7ecb91554e18bed7c71bd", "size": 3466149 } }, "version": "0.3.1", "pubname": "cirros-0.3.1-arm" } } } }, "format": "products:1.0" } simplestreams-0.1.0~bzr341/examples/foocloud/files/0000755000000000000000000000000012314577450020434 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/streams/0000755000000000000000000000000012314577450021010 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/0000755000000000000000000000000012314577450021506 5ustar 
00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/0000755000000000000000000000000012314577450022740 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/0000755000000000000000000000000012314577450023106 5ustar 00000000000000././@LongLink0000000000000000000000000000015600000000000011217 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64-disk1.imgsimplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64-0000644000000000000000000000010212314577450031013 0ustar 00000000000000fake-content: foovendor-6.1-beta2-server-cloudimg-amd64-disk1.img ././@LongLink0000000000000000000000000000016000000000000011212 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64-root.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64-0000644000000000000000000000010412314577450031015 0ustar 00000000000000fake-content: foovendor-6.1-beta2-server-cloudimg-amd64-root.tar.gz ././@LongLink0000000000000000000000000000015300000000000011214 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64.0000644000000000000000000000007712314577450031027 0ustar 00000000000000fake-content: foovendor-6.1-beta2-server-cloudimg-amd64.tar.gz ././@LongLink0000000000000000000000000000015500000000000011216 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386-disk1.imgsimplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386-d0000644000000000000000000000010112314577450030734 0ustar 00000000000000fake-content: foovendor-6.1-beta2-server-cloudimg-i386-disk1.img ././@LongLink0000000000000000000000000000015700000000000011220 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386-root.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386-r0000644000000000000000000000010312314577450030754 0ustar 00000000000000fake-content: foovendor-6.1-beta2-server-cloudimg-i386-root.tar.gz ././@LongLink0000000000000000000000000000015200000000000011213 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386.t0000644000000000000000000000007612314577450030770 0ustar 00000000000000fake-content: foovendor-6.1-beta2-server-cloudimg-i386.tar.gz ././@LongLink0000000000000000000000000000016200000000000011214 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-amd64-disk1.imgsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-am0000644000000000000000000000007412314577450030667 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-amd64-disk1.img ././@LongLink0000000000000000000000000000016400000000000011216 Lustar 
00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-amd64-root.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-am0000644000000000000000000000007612314577450030671 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-amd64-root.tar.gz ././@LongLink0000000000000000000000000000015700000000000011220 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-amd64.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-am0000644000000000000000000000007112314577450030664 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-amd64.tar.gz ././@LongLink0000000000000000000000000000016100000000000011213 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-i386-disk1.imgsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-i30000644000000000000000000000007312314577450030604 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-i386-disk1.img ././@LongLink0000000000000000000000000000016300000000000011215 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-i386-root.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-i30000644000000000000000000000007512314577450030606 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-i386-root.tar.gz ././@LongLink0000000000000000000000000000015600000000000011217 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-i386.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121001/foovendor-6.1-server-cloudimg-i30000644000000000000000000000007012314577450030601 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-i386.tar.gz ././@LongLink0000000000000000000000000000016400000000000011216 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-amd64-disk1.imgsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-0000644000000000000000000000007412314577450030517 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-amd64-disk1.img ././@LongLink0000000000000000000000000000016600000000000011220 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-amd64-root.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-0000644000000000000000000000007612314577450030521 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-amd64-root.tar.gz ././@LongLink0000000000000000000000000000016100000000000011213 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-amd64.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-0000644000000000000000000000007112314577450030514 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-amd64.tar.gz ././@LongLink0000000000000000000000000000016300000000000011215 Lustar 
00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-i386-disk1.imgsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-0000644000000000000000000000007312314577450030516 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-i386-disk1.img ././@LongLink0000000000000000000000000000016500000000000011217 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-i386-root.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-0000644000000000000000000000007512314577450030520 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-i386-root.tar.gz ././@LongLink0000000000000000000000000000016000000000000011212 Lustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-i386.tar.gzsimplestreams-0.1.0~bzr341/examples/foocloud/files/release-20121026.1/foovendor-6.1-server-cloudimg-0000644000000000000000000000007012314577450030513 0ustar 00000000000000fake-content: foovendor-6.1-server-cloudimg-i386.tar.gz simplestreams-0.1.0~bzr341/examples/foocloud/streams/v1/0000755000000000000000000000000012314577450021336 5ustar 00000000000000simplestreams-0.1.0~bzr341/examples/foocloud/streams/v1/com.example.foovendor:released:aws.json0000644000000000000000000000603512314577450031051 0ustar 00000000000000{ "datatype": "image-ids", "updated": "Wed, 20 Mar 2013 17:56:57 +0000", "content_id": "com.example.foovendor:released:aws", "products": { "com.example.foovendor:pinky:server:amd64": { "endpoint": "https://ec2.us-east-1.amazonaws.com", "stream": "released", "versions": { "20121026.1": { "items": { "usee1pe": { "name": "ebs/foovendor-pinky-12.04-amd64-server-20121026.1", "root_store": "ebs", "id": "ami-e720ad8e" }, "usee1pi": { "name": "foovendor-pinky-12.04-amd64-server-20121026.1", "root_store": "instance-store", "id": "ami-1f24a976" } }, "label": "release" }, "20120929": { "items": { "usee1pe": { "name": "ebs/foovendor-pinky-12.04-amd64-server-20120929", "root_store": "ebs", "id": "ami-3b4ff252" }, "usee1pi": { "name": "foovendor-pinky-12.04-amd64-server-20120929", "root_store": "instance-store", "id": "ami-cd4cf1a4" } }, "label": "beta2" }, "20121001": { "items": { "usee1pe": { "name": "ebs/foovendor-pinky-12.04-amd64-server-20121001", "root_store": "ebs", "id": "ami-9878c0f1" }, "usee1pi": { "name": "foovendor-pinky-12.04-amd64-server-20121001", "root_store": "instance-store", "id": "ami-52863e3b" } }, "label": "release" } }, "virt_type": "paravirtual", "region": "us-east-1", "version": "6.1", "build": "server", "release": "pinky", "arch": "amd64" }, "com.example.foovendor:pinky:server:i386": { "endpoint": "https://ec2.us-east-1.amazonaws.com", "stream": "released", "versions": { "20121026.1": { "items": { "usee1pe": { "name": "ebs/foovendor-pinky-12.04-i386-server-20121026.1", "root_store": "ebs", "id": "ami-e720ad8e" }, "usee1pi": { "name": "foovendor-pinky-12.04-i386-server-20121026.1", "root_store": "instance-store", "id": "ami-1f24a976" } }, "label": "release" }, "20120929": { "items": { "usee1pe": { "name": "ebs/foovendor-pinky-12.04-i386-server-20120929", "root_store": "ebs", "id": "ami-3b4ff252" }, "usee1pi": { "name": "foovendor-pinky-12.04-i386-server-20120929", "root_store": "instance-store", "id": "ami-cd4cf1a4" } }, "label": "beta2" }, "20121001": { "items": { "usee1pe": { "name": 
"ebs/foovendor-pinky-12.04-i386-server-20121001", "root_store": "ebs", "id": "ami-9878c0f1" }, "usee1pi": { "name": "foovendor-pinky-12.04-i386-server-20121001", "root_store": "instance-store", "id": "ami-52863e3b" } }, "label": "release" } }, "virt_type": "paravirtual", "region": "us-east-1", "version": "6.1", "build": "server", "release": "pinky", "arch": "i386" } }, "format": "products:1.0" } simplestreams-0.1.0~bzr341/examples/foocloud/streams/v1/com.example.foovendor:released:download.json0000644000000000000000000001547312314577450032074 0ustar 00000000000000{ "datatype": "image-downloads", "updated": "Wed, 20 Mar 2013 17:56:57 +0000", "content_id": "com.example.foovendor:released:download", "products": { "com.example.foovendor:pinky:server:amd64": { "version": "6.1", "build": "server", "stream": "released", "versions": { "20121026.1": { "items": { "tar.gz": { "name": "foovendor-pinky-6.1-amd64-server-20121026.1.tar.gz", "path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-amd64.tar.gz", "md5": "187ea3b68f9080d4c447b910c8d0838e", "size": 57, "ftype": "tar.gz" }, "disk1.img": { "name": "foovendor-pinky-6.1-amd64-server-20121026.1-disk1.img", "path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-amd64-disk1.img", "md5": "da499b246265a34db8576983371c6c2a", "size": 60, "ftype": "disk1.img" }, "root.tar.gz": { "name": "foovendor-pinky-6.1-amd64-server-20121026.1-root.tar.gz", "path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-amd64-root.tar.gz", "md5": "da939f41059a1b1eca16c0c5856f47cd", "size": 62, "ftype": "root.tar.gz" } }, "label": "release" }, "20120328": { "items": { "tar.gz": { "name": "foovendor-pinky-6.1-beta2-amd64-server-20120328.tar.gz", "path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64.tar.gz", "md5": "c245123c1a7c16dd43962b71c604c5ee", "ftype": "tar.gz" }, "disk1.img": { "name": "foovendor-pinky-6.1-beta2-amd64-server-20120328-disk1.img", "path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64-disk1.img", "md5": "34cec541a18352783e736ba280a12201", "ftype": "disk1.img" }, "root.tar.gz": { "name": "foovendor-pinky-6.1-beta2-amd64-server-20120328-root.tar.gz", "path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-amd64-root.tar.gz", "md5": "55686ef088f7baf0ebea9349055daa85", "ftype": "root.tar.gz" } }, "label": "beta2" }, "20121001": { "items": { "tar.gz": { "name": "foovendor-pinky-6.1-amd64-server-20121001.tar.gz", "path": "files/release-20121001/foovendor-6.1-server-cloudimg-amd64.tar.gz", "md5": "187ea3b68f9080d4c447b910c8d0838e", "size": 57, "ftype": "tar.gz" }, "disk1.img": { "name": "foovendor-pinky-6.1-amd64-server-20121001-disk1.img", "path": "files/release-20121001/foovendor-6.1-server-cloudimg-amd64-disk1.img", "md5": "da499b246265a34db8576983371c6c2a", "size": 60, "ftype": "disk1.img" }, "root.tar.gz": { "name": "foovendor-pinky-6.1-amd64-server-20121001-root.tar.gz", "path": "files/release-20121001/foovendor-6.1-server-cloudimg-amd64-root.tar.gz", "md5": "da939f41059a1b1eca16c0c5856f47cd", "size": 62, "ftype": "root.tar.gz" } }, "label": "release" } }, "release": "pinky", "arch": "amd64" }, "com.example.foovendor:pinky:server:i386": { "version": "6.1", "build": "server", "stream": "released", "versions": { "20121026.1": { "items": { "tar.gz": { "name": "foovendor-pinky-6.1-i386-server-20121026.1.tar.gz", "path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-i386.tar.gz", "md5": "1534e487730c8162131dde430aa5fa5a", "size": 56, "ftype": "tar.gz" }, "disk1.img": { "name": 
"foovendor-pinky-6.1-i386-server-20121026.1-disk1.img", "path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-i386-disk1.img", "md5": "d8fa9a536e8cf1bebcee1e26875060bb", "size": 59, "ftype": "disk1.img" }, "root.tar.gz": { "name": "foovendor-pinky-6.1-i386-server-20121026.1-root.tar.gz", "path": "files/release-20121026.1/foovendor-6.1-server-cloudimg-i386-root.tar.gz", "md5": "de2b46e0fe2ce8c6eda2b4cd809505a9", "size": 61, "ftype": "root.tar.gz" } }, "label": "release" }, "20120328": { "items": { "tar.gz": { "name": "foovendor-pinky-6.1-beta2-i386-server-20120328.tar.gz", "path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386.tar.gz", "md5": "2cd18b60f892af68c9d49c64ce1638e4", "ftype": "tar.gz" }, "disk1.img": { "name": "foovendor-pinky-6.1-beta2-i386-server-20120328-disk1.img", "path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386-disk1.img", "md5": "e80df7995beb31571e104947e4d7b001", "ftype": "disk1.img" }, "root.tar.gz": { "name": "foovendor-pinky-6.1-beta2-i386-server-20120328-root.tar.gz", "path": "files/beta-2/foovendor-6.1-beta2-server-cloudimg-i386-root.tar.gz", "md5": "5d86b3e75e56e10e1019fe1153fe488f", "ftype": "root.tar.gz" } }, "label": "beta2" }, "20121001": { "items": { "tar.gz": { "name": "foovendor-pinky-6.1-i386-server-20121001.tar.gz", "path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386.tar.gz", "md5": "1534e487730c8162131dde430aa5fa5a", "size": 56, "ftype": "tar.gz" }, "disk1.img": { "name": "foovendor-pinky-6.1-i386-server-20121001-disk1.img", "path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386-disk1.img", "md5": "d8fa9a536e8cf1bebcee1e26875060bb", "size": 59, "ftype": "disk1.img" }, "root.tar.gz": { "name": "foovendor-pinky-6.1-i386-server-20121001-root.tar.gz", "path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386-root.tar.gz", "md5": "de2b46e0fe2ce8c6eda2b4cd809505a9", "size": 61, "ftype": "root.tar.gz" } }, "label": "release" }, "samepaths": { "items": { "tar.gz": { "name": "foovendor-pinky-6.1-i386-server-20121001.tar.gz", "path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386.tar.gz", "md5": "1534e487730c8162131dde430aa5fa5a", "size": 56, "ftype": "tar.gz" }, "disk1.img": { "name": "foovendor-pinky-6.1-i386-server-20121001-disk1.img", "path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386-disk1.img", "md5": "d8fa9a536e8cf1bebcee1e26875060bb", "size": 59, "ftype": "disk1.img" }, "root.tar.gz": { "name": "foovendor-pinky-6.1-i386-server-20121001-root.tar.gz", "path": "files/release-20121001/foovendor-6.1-server-cloudimg-i386-root.tar.gz", "md5": "de2b46e0fe2ce8c6eda2b4cd809505a9", "size": 61, "ftype": "root.tar.gz" } }, "label": "release" } }, "release": "pinky", "arch": "i386" } }, "format": "products:1.0" } simplestreams-0.1.0~bzr341/examples/foocloud/streams/v1/index.json0000644000000000000000000000142212314577450023337 0ustar 00000000000000{ "index": { "com.example.foovendor:released:aws": { "datatype": "image-ids", "path": "streams/v1/com.example.foovendor:released:aws.json", "updated": "Wed, 20 Mar 2013 17:56:57 +0000", "products": [ "com.example.foovendor:pinky:server:amd64", "com.example.foovendor:pinky:server:i386" ], "format": "products:1.0" }, "com.example.foovendor:released:download": { "datatype": "image-downloads", "path": "streams/v1/com.example.foovendor:released:download.json", "updated": "Wed, 20 Mar 2013 17:56:57 +0000", "products": [ "com.example.foovendor:pinky:server:amd64", "com.example.foovendor:pinky:server:i386" ], "format": "products:1.0" } }, 
"updated": "Wed, 20 Mar 2013 17:56:57 +0000", "format": "index:1.0" } simplestreams-0.1.0~bzr341/examples/keys/README.txt0000644000000000000000000000064312314577450020174 0ustar 00000000000000example.sec, example.pub: These are example gpg public and private keys. They are generated by tools/gen-example-key. You should be sure to *not* trust anything simply because it is signed with these keys. cloud-images.pub: pub 4096R/476CF100 2012-10-27 Ubuntu Cloud Image Builder sub 4096R/9D817405 2012-10-27 This is the public key for the entity that signs data on cloud-images.ubuntu.com data. simplestreams-0.1.0~bzr341/examples/keys/cirros.pub0000644000000000000000000001230712314577450020507 0ustar 00000000000000-----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v1.4.12 (GNU/Linux) mQINBFEsA6QBEACyuJn4L+OaplLC8+W0cQw7Qqothe2bKArhXOYBwjiu231OCacd k8B1ebvCcpNXYQ6KyAE7tDuaU0bxUNcgDKkpBDxH99nys4DHIovUgDOJS01G4RAI Qagy5FtpEDrU1eJ41x5RP1GEA/waOxYvLnjbG7tAm3bpPpQhHldeD/ByYKset35P he6b0kggUZM8ca9HDUty4I4yWPxyb2pSWLCHnM0oN05sFOig2o6X47dz8FYLtfiT Ab1bLlhthVwc0vrYll7YFAoGKPwwFDA+ANuOSKcfHgR9gJtzWXODvAK0gYVsSWOh 0G2EVEufeohws1oS2tED3BCH1wyCYqLCxD2IQM1EShKUVhY81mQc4sdyoz1uurbO gu9tPN7fCp6PRmS83q+Ic2m521dcykeWA/qkYksYIwwcfXnkZOs2XouXIv3lDKT0 rq+bS9qf87dyGAV3Jgat5IbK3nWzN+iOGu+q3X5XVWZxImK25MqHLRrxR3/n4Rkk jQsa3dOk/o2Adj70gGyONKDH9/UTaBB6fEKBbUZyCW82K2HcZeuhCzYxV1BR5iQ/ 1Jlk53tNac5gjIR2IzBu6r4zW+KWCnnE1AtNkJM4fe0ggsGvcV1j5VNSeL8XZOjn 4UCOXtgJ/rYkxaCcqCJNHzrDJCYltlUq7Tdx2FZlp+Z9RDhS4GoZBKnZHQARAQAB tC1DaXJyb3MgU2lnbmluZyBLZXkgPHNpZ25pbmdAY2lycm9zLWNsb3VkLm5ldD6J AjgEEwECACIFAlEsA6QCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEJWT hISPTObQ+ooP/12NlKbc3EwG+37w9xR1/n05pUh1x0j0P++QEO8AFNiFhrt6gtoz z50LpCYaqINuNDrY+7rXLKh5IWlNSxKnI/H06sTZaOsYtYshv9MlIZW8aJrtFuAv HVLqDbHs4TnzAD0DL8GioepPjg424g9ocnh+J/A/sFEUw6zY+Q4kYeF+6kWWKovT fWsb966fNRQmAclg4JkpJjyB+cRUbj8MAvP8ZMhvPDAs575kC7kOlk1isF1mZb6z MVKybFzq4kg/s2Xgq8l3jBtx4BVGKXR8evTaJh9ABqN893yK+jpicAzeya9Yqshv GoQkgk6X5xzcUQ3feGtM/zThUKQ4KA2tifoEDPQYKI2jHiHrf+SUsJQlAXo/VIC4 PFkUxyTKxnti3elwmj3WVGTuU87lfsdvbg0OM9FpfYhDVi1wYpjoFux3XRjHTrdy ll6iWzhpcDPGsV8jqqXqIphoU2sEHwMZC/GwII87AFsRRiItUX9lTAMrKNzxunVe YWTt1no1A/te5IsQK6hA31K9L0M30h0AC4vJvHpiwDl1PUfrAdyCDtcS++eT9jdK NccMpyYmE5ZORR9xkOsfiTMNN8aP2fW2t/wB5rBOacTLVCZnu6FZ+rbB522G8vOa yO4sBogj5H9SijrsUJq1W4CkGtFmMEEeCLjjwuhOd5bauTPrxb8G16/RuQINBFEs A6QBEADEk7k2S2P2VH6l2HFUz5BaVnFtcGSMT8xQki2gGXt5bl67czs3Ej2vSVWa D0lZzYxDixobwuih2ox5g9upez8SZKx6mxuLUDU493sWOTF+8uVH5h5ig+3+TOBx U5uFks7xJpuihxmAwxslJUpoQyUA8a2WzzsXGUGKdnh/lTmm8h/XeUWuAu6Ht6ed zgzgFaleG3BROaMVO5DWsHp3EZAq3HjEpCo6dFBV/A5b9ZQb6jIVpTInLFb6OoIh W1w1UYIFkcsRf51Oek6W6TB9lkp5raNH3/Irg3M0zsK5i4Zaq15EdgqliH0XhpLC 1MJNLEJU6gP5Yr9GiWgRdWFmVykxG2AacHxH/hFPWZC32L4y4GX7WeAzJJg2k6Uz 5qz/DgJFa1/vfZW/0Df6TJAnSp4W12tesxgLghdwCzLdogEKqBoWLwoFhnNBe+5h xpP8eO+nOvwfZeB1mXhVKcq0LNLrKGOediakxhrUlHUtrRDHcRSjzD1tRZO8EsCc C2jcJjeVZ/ReH/Q/F7P3gQ3QoQzLkY0EZ+FO1z5g8T5C3KOyB7r8dHD2I2J29SOX i+pFcSpHYw6mZXC4CvxrzpugxvvvmemZFc/Lw99NomFZZoXIUK5qhma+tMdyRQ88 MakkIll951X5U7LLtrKW5wWOSunXSYRpNSLbqCsfC/7u3NiUAwARAQABiQIfBBgB AgAJBQJRLAOkAhsMAAoJEJWThISPTObQ/YgQAI/l1puhMa/+THkHH9j9RaEHA3cR MVgp33QR6zA30+Rl2iFWSZjmSLOVXtlTRe2vIng9Gj5f+Oi24pdQcl6qu6UZ/89l 8X7l3pt1zOeGyliK2blCKXdBLR0tlHb+twCI5fPZ+K5NGPtMYS8zbAnrKKfWfuOr 04iiV3rJ5j4MHwGbu63HpgzwHQOQYcFWuqBk/bjPFwudt5+u/FVNAdstDkDDk9dF W4DUFv0Yd2q1aI3/pF1HqzzmrP/1mFA7RoyiT/csihpp0WWlazN7+fTsNd6cikN5 hJtzEKD66abjLB2S3IYIgbJHKR1XylYTnFQG9fxlCR4W3xB27XaXAoGOni1DwAIT 3yUoAMWCxuXcMd5oqwQDNArc1VyRedhrQPQM4jF5vRljoLV2kkZ6WkDFNQBzB3I5 
mD8NRGmkWgFGvyj6hhNgIwWijOBIG4FbwLT/HUDhiS0uoIG0lL6qGa5V+j9jnWaM M77OG9U/z+SHuC0AL/DpKDvj0Xii+qpXeIiHQxA0XQMWTgnQozfKeBQLsKN0RMHb IBzQg9qvws2gOtORniO7UMfv+U4rL9AQSMEan/Tohk6ZmUisHvi4vtWNsAxqNjXJ 7dqmmy8SBwkI/+eirT2Q9+vRGHqMC1Z7JDMGPSMGARRks9xKRln0FdPTyxTmObOQ 26TVo7HvWs+vVZDbuQINBFEsBg4BEADO6FgdZBI5PwpDvTplDVm10cQo7N9mcCJn eRcrj4VQ89OTfSmJH3Qpf6JkG/UGS/kO6vIDF5b2SzRPAYT1QM3lVYhixU1gVPv1 mp4zHOJse5thnaD7HlpvM/VBnN8h+qAo6YSYhlmYc9FqLSmZRoNE2+WGp/6G/5iB r2lOs1Iii1HW2XYgl9Rw/HnwHjXAaZiCCt0EXvLoutw3KvIirHHfkVEsX8p1dloR HCSu40l2t29e1ujkl/LWb1UkBv9a/OrrQEVqpDfnO/ai/PJ8KSEMUYnB9eIhM5J4 OvhiaKVJI5uJVsMRFm8NUCmguVB1eXeQ1ndvWn1CeKJeyKO04BFR2ZwMVq9Pdcbe Ga8Rz8poHI/dsut/EPPZdOnTr3LbqH+f0f+P+UsRMLXuV8O4tKZu4mrl9XEdb3yJ 18atOL2bHNvAHXXN45UXCLU6j1qWolO9yOuBNw6fPbsagK3kfvo6SBlAp1LcHjHd OKVWM3cqUll7/xo3cBiIvMRTvZlzGSAssGWe0kt9O/0a5pPJRxyj8G1HGZ8BzDZz JpPD9dcU/NFD5qB+gZjGrJqycdgw79JMyUCHRZpzeRDDZb0AN+NFiVqgQFnhMLek UkILyJnVnLHaSM0vh0/++P3O6lqsQ99AKaFm1pNZCUMNF9ZU0i8/if8VKhu7Hly0 dbjDMsjWMQARAQABiQREBBgBAgAPBQJRLAYOAhsCBQkNKGiAAikJEJWThISPTObQ wV0gBBkBAgAGBQJRLAYOAAoJEGRyWpml3bhA+SQQAMegxIPVOb7TnOzf5VazTa0a sC+PuEWDNIz2jns2TfNHhRQx+Eb5BXQKW0OwXxyAv4+TgA2WTpWueCzEtnfXdIxH rnjp96MLUOB/j1jIzAJADGVjcEc1kO6VVF61ba3D7ST2Z7YTQ07krsZiGD78UlxF zoOHBkiQTIkXjIAFOu0/TqlwN4JBNRr61GtMHyI5awwgB0RParT0Vfkki3NlR2T/ bym02l2uIlGLsq0owiRdY5H9RoOyMdhKC72CSBv5UyeAqk9P5B99OyWHoQ7WZTWO aRHkWpXaql4wWyzdoQdtJH5J5XN9/5RFUxlltcXRy/+grJUrlpd886hN8vrrg3eB fOyts/K2JIk99WBMRf4BxwyFA9wWvoGoozFVjSzBFtluOqPtSWRHgzm5cqdZTCVT TcSxgYkCdPlHClxv78aVi9MmEEk5TIWSHb/jjE5Lv0vOPCgdmRlhAVC7ghFj+BO1 5a5Um3TS5StZskOrAQtzg2Y7asWET9fsaWXCS/iom5saekmkbBXO4EwiQKlkoFWU 8oGSWFcIc4LnJ467QHDMGMBFq4AQfOLgi9+G1HiphXdJrDesBXV755/mBZfiIKOM W2QLoCWi065FTRD3V7y0V5dpKjt6wVkNrVozeF0rWivErVu/r8H6Aee2L0VCPB6j NF73z67E9Dzj4QT3LtoaWZgQAI445UYbBD/SdwtbYWTwlRkJY88isJGA2nPpDMfq PR9JlBHoCCBf0IrsXoLXuFoCFlviROqOCsfvJ6R7bsZmmWwqWvBqENvrW8rwJTVB XcSj23TNLaqQ/kef8JDuscVn7tDwb7dCIye0fG/eZskTsMJ9Osgt9I6S5Fz0yFHv 7xJlXjEwkxYXdyGoXEW3MbQuM7Y3LHSm8Y/M6boo9OMyrdK1XObiqhgYUoBzbD0y nBdxQvr18wNLoM1KysvOs0yvBHhMRZXyt3pdTFCzLBYl5C94267DhiL9EJ3x8qMG E7Yqgu9sjHrgGy/rZUlGj/l2uzd6pYzlcyaABKLO9T0H1Y2Xpx/+d9GGyy0DHVdO FYORNtZIi3Gc5mKlL38P8eL4AAoPLp5nasy9Y3aAAKOBR2YW/BhnEpMjRndOvoxy NCQgPZxyAKKnzCFyLvlbRN0AYhOWQDjXFHbc+RZsrJcgAR6XS9PwBe6yW1sh888c dCSi9/XNyKJX9IR8ucdInZUAc4qrfi9zCt//4ZoHeW48OS7OgnoA3Y6RZR+UbSQE 6aBTt6z5e6SyaKRx7OlcqK3G19KmEuFzH18THtPD1s+kiUMHKXouooUoLbMEvW8P oKte55W4stewkhS8DZV5IkvYZzPMQ8ks6GMdzHiNVkEujUkUNsEualm5Q8hEzPBA 8dxY =NgGK -----END PGP PUBLIC KEY BLOCK----- simplestreams-0.1.0~bzr341/examples/keys/cloud-images.pub0000644000000000000000000000621212314577450021555 0ustar 00000000000000-----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v1.4.12 (GNU/Linux) mQINBFCMc9EBEADDKn9mOi9VZhW+0cxmu3aFZWMg0p7NEKuIokkEdd6P+BRITccO ddDLaBuuamMbt/V1vrxWC5J+UXe33TwgO6KGfH+ECnXD5gYdEOyjVKkUyIzYV5RV U5BMrxTukHuh+PkcMVUy5vossCk9MivtCRIqM6eRqfeXv6IBV9MFkAbG3x96ZNI/ TqaWTlaHGszz2Axf9JccHCNfb3muLI2uVnUaojtDiZPm9SHTn6O0p7Tz7M7+P8qy vc6bdn5FYAk+Wbo+zejYVBG/HLLE4+fNZPESGVCWZtbZODBPxppTnNVm3E84CTFt pmWFBvBE/q2G9e8s5/mP2ATrzLdUKMxr3vcbNX+NY1Uyvn0Z02PjbxThiz1G+4qh 6Ct7gprtwXPOB/bCITZL9YLrchwXiNgLLKcGF0XjlpD1hfELGi0aPZaHFLAa6qq8 Ro9WSJljY/Z0g3woj6sXpM9TdWe/zaWhxBGmteJl33WBV7a1GucN0zF1dHIvev4F krp13Uej3bMWLKUWCmZ01OHStLASshTqVxIBj2rgsxIcqH66DKTSdZWyBQtgm/kC qBvuoQLFfUgIlGZihTQ96YZXqn+VfBiFbpnh1vLt24CfnVdKmzibp48KkhfqduDE Xxx/f/uZENH7t8xCuNd3p+u1zemGNnxuO8jxS6Ico3bvnJaG4DAl48vaBQARAQAB tG9VYnVudHUgQ2xvdWQgSW1hZ2UgQnVpbGRlciAoQ2Fub25pY2FsIEludGVybmFs 
IENsb3VkIEltYWdlIEJ1aWxkZXIpIDx1YnVudHUtY2xvdWRidWlsZGVyLW5vcmVw bHlAY2Fub25pY2FsLmNvbT6JAjgEEwECACIFAlCMc9ECGwMGCwkIBwMCBhUIAgkK CwQWAgMBAh4BAheAAAoJEH/z9AhHbPEAvRIQAMLE4ZMYiLvwSoWPAicM+3FInaqP 2rf1ZEf1k6175/G2n8cG3vK0nIFQE9Cus+ty2LrTggm79onV2KBGGScKe3ga+meO txj601Wd7zde10IWUa1wlTxPXBxLo6tpF4s4aw6xWOf4OFqYfPU4esKblFYn1eMK Dd53s3/123u8BZqzFC8WSMokY6WgBa+hvr5J3qaNT95UXo1tkMf65ZXievcQJ+Hr bp1m5pslHgd5PqzlultNWePwzqmHXXf14zI1QKtbc4UjXPQ+a59ulZLVdcpvmbjx HdZfK0NJpQX+j5PU6bMuQ3QTMscuvrH4W41/zcZPFaPkdJE5+VcYDL17DBFVzknJ eC1uzNHxRqSMRQy9fzOuZ72ARojvL3+cyPR1qrqSCceX1/Kp838P2/CbeNvJxadt liwI6rzUgK7mq1Bw5LTyBo3mLwzRJ0+eJHevNpxl6VoFyuoA3rCeoyE4on3oah1G iAJt576xXMDoa1Gdj3YtnZItEaX3jb9ZB3iz9WkzZWlZsssdyZMNmpYV30Ayj3CE KyurYF9lzIQWyYsNPBoXORNh73jkHJmL6g1sdMaxAZeQqKqznXbuhBbt8lkbEHMJ Stxc2IGZaNpQ+/3LCwbwCphVnSMq+xl3iLg6c0s4uRn6FGX+8aknmc/fepvRe+ba ntqvgz+SMPKrjeevuQINBFCMc9EBEADKGFPKBL7/pMSTKf5YH1zhFH2lr7tf5hbz ztsx6j3y+nODiaQumdG+TPMbrFlgRlJ6Ah1FTuJZqdPYObGSQ7qd/VvvYZGnDYJv Z1kPkNDmCJrWJs+6PwNARvyLw2bMtjCIOAq/k8wByKkMzegobJgWsbr2Jb5fT4cv FxYpm3l0QxQSw49rriO5HmwyiyG1ncvaFUcpxXJY8A2s7qX1jmjsqDY1fWsv5PaN ue0Fr3VXfOi9p+0CfaPY0Pl4GHzat/D+wLwnOhnjl3hFtfbhY5bPl5+cD51SbOnh 2nFv+bUK5HxiZlz0bw8hTUBN3oSbAC+2zViRD/9GaBYY1QjimOuAfpO1GZmqohVI msZKxHNIIsk5H98mN2+LB3vH+B6zrSMDm3d2Hi7ZA8wH26mLIKLbVkh7hr8RGQjf UZRxeQEf+f8F3KVoSqmfXGJfBMUtGQMTkaIeEFpMobVeHZZ3wk+Wj3dCMZ6bbt2i QBaoa7SU5ZmRShJkPJzCG3SkqN+g9ZcbFMQsybl+wLN7UnZ2MbSk7JEy6SLsyuVi 7EjLmqHmG2gkybisnTu3wjJezpG12oz//cuylOzjuPWUWowVQQiLs3oANzYdZ0Hp SuNjjtEILSRnN5FAeogs0AKH6sy3kKjxtlj764CIgn1hNidSr2Hyb4xbJ/1GE3Rk sjJi6uYIJwARAQABiQIfBBgBAgAJBQJQjHPRAhsMAAoJEH/z9AhHbPEA6IsP/3jJ DaowJcKOBhU2TXZglHM+ZRMauHRZavo+xAKmqgQc/izgtyMxsLwJQ+wcTEQT5uqE 4DoWH2T7DGiHZd/89Qe6HuRExR4p7lQwUop7kdoabqm1wQfcqr+77Znp1+KkRDyS lWfbsh9ARU6krQGryODEOpXJdqdzTgYhdbVRxq6dUopz1Gf+XDreFgnqJ+okGve2 fJGERKYynUmHxkFZJPWZg5ifeGVt+YY6vuOCg489dzx/CmULpjZeiOQmWyqUzqy2 QJ70/sC8BJYCjsESId9yPmgdDoMFd+gf3jhjpuZ0JHTeUUw+ncf+1kRf7LAALPJp 2PTSo7VXUwoEXDyUTM+dI02dIMcjTcY4yxvnpxRFFOtklvXt8Pwa9x/aCmJb9f0E 5FO0nj7l9pRd2g7UCJWETFRfSW52iktvdtDrBCft9OytmTl492wAmgbbGeoRq3ze QtzkRx9cPiyNQokjXXF+SQcq586oEd8K/JUSFPdvth3IoKlfnXSQnt/hRKv71kbZ IXmR3B/q5x2Msr+NfUxyXfUnYOZ5KertdprUfbZjudjmQ78LOvqPF8TdtHg3gD2H +G2z+IoH7qsOsc7FaJsIIa4+dljwV3QZTE7JFmsas90bRcMuM4D37p3snOpHAHY3 p7vH1ewg+vd9ySST0+OkWXYpbMOIARfBKyrGM3nu =+MFT -----END PGP PUBLIC KEY BLOCK----- simplestreams-0.1.0~bzr341/examples/keys/example.pub0000644000000000000000000000126712314577450020644 0ustar 00000000000000-----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v1.4.11 (GNU/Linux) mI0EUSw66gEEAM6AjeB/KtvuLbkbn6F0Whew2sYx5O2j2smSgwJ0oevnlRzneyXh kUIR+wH5KBDIz5Ikp35ZrZFYoP++7VMALDTp9l+OOlrbz4rQzwI8HvXumkhT+BgE lfN10eu0rBkVNxqt9lXuMNYwgJJtfBPzXVBQju6QDYx5Uodxk9C9TXapABEBAAG0 XFNpbXBsZSBTdHJlYW1zIFRlc3QgVXNlciAoVGVzdCBVc2FnZSBPbmx5LiBEbyBO b3QgSW1wb3J0LikgPHNpbXBsZXN0cmVhbXNAYm9ndXMuZXhhbXBsZS5jb20+iL4E EwECACgFAlEsOuoCGy8FCRLMAwAGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJ EKlxSiA5Z1NuK0ID/R8iBwVt/9tqFy7eeJzCDK0O7QcpmhvKyjPLLmsUhysx8kC7 S89yuWyr6iQcjAoAMK6EkOZgRoFfOIA+hZkFC0blFHLCrdSghSQQm6hL/XJuWtkf HBdu7yKudGzGyYmpGYYG69zz+he5EZUtY1fR9PSGSM+ZwCLznHdCJix7bNi0 =HEJL -----END PGP PUBLIC KEY BLOCK----- simplestreams-0.1.0~bzr341/examples/keys/example.sec0000644000000000000000000000217412314577450020626 0ustar 00000000000000-----BEGIN PGP PRIVATE KEY BLOCK----- Version: GnuPG v1.4.11 (GNU/Linux) lQHXBFEsOuoBBADOgI3gfyrb7i25G5+hdFoXsNrGMeTto9rJkoMCdKHr55Uc53sl 4ZFCEfsB+SgQyM+SJKd+Wa2RWKD/vu1TACw06fZfjjpa28+K0M8CPB717ppIU/gY 
BJXzddHrtKwZFTcarfZV7jDWMICSbXwT811QUI7ukA2MeVKHcZPQvU12qQARAQAB AAP2KSNzIEY1Q5svgLEAHCoRyKZy7wkBklYSQBXwA404tMZt7lQvNFy7k24Bk2MP mEhpEbQ7qfAzo8EEUe63WNGv/H6yl37pcS2rKxGdar9/dFV2t/TbJaehKotIvG9D x1CvuT/7DQRj9rGWrDrhx2XIz8hpD21bxrDb2TEF5WjfAQIA05tN+7zxTXSKq7Wt 215ba4kBhKKuvrYMN5m871O/cNJfVh9ABea4xRFOOTkcpXsHAl1JPV52wRlHpBoe b36goQIA+dMapApF9zYflJ5rp3RLCwH0YwmY55+MBL5OTN7PM6DFs61rQG1ZO3x8 VJ1GNvpVhEuIBUmsD2O4BVYekDsxCQIAuqMgVMVDfj7P2MeMKZ888XphQuYeJdIq IckpdKnZSKSU0oXKiB9y0AyptB1Aih8IhF00EIURWjMI//19JkBXuJNetFxTaW1w bGUgU3RyZWFtcyBUZXN0IFVzZXIgKFRlc3QgVXNhZ2UgT25seS4gRG8gTm90IElt cG9ydC4pIDxzaW1wbGVzdHJlYW1zQGJvZ3VzLmV4YW1wbGUuY29tPoi+BBMBAgAo BQJRLDrqAhsvBQkSzAMABgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRCpcUog OWdTbitCA/0fIgcFbf/bahcu3nicwgytDu0HKZobysozyy5rFIcrMfJAu0vPcrls q+okHIwKADCuhJDmYEaBXziAPoWZBQtG5RRywq3UoIUkEJuoS/1yblrZHxwXbu8i rnRsxsmJqRmGBuvc8/oXuRGVLWNX0fT0hkjPmcAi85x3QiYse2zYtA== =svaP -----END PGP PRIVATE KEY BLOCK----- simplestreams-0.1.0~bzr341/simplestreams/__init__.py0000644000000000000000000000002512314577450020700 0ustar 00000000000000# vi: ts=4 expandtab simplestreams-0.1.0~bzr341/simplestreams/contentsource.py0000644000000000000000000002577412314577450022056 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . import errno import io import os import sys if sys.version_info > (3, 0): import urllib.parse as urlparse # pylint: disable=F0401,E0611 else: import urlparse READ_BUFFER_SIZE = 1024 * 10 READ_BUFFER_SIZE = 1024 * 10 try: # We try to use requests because we can do gzip encoding with it. # however requests < 1.1 didn't have 'stream' argument to 'get' # making it completely unsuitable for downloading large files. import requests from distutils.version import LooseVersion import pkg_resources _REQ = pkg_resources.get_distribution('requests') _REQ_VER = LooseVersion(_REQ.version) # pylint: disable=E1103 if _REQ_VER < LooseVersion('1.1'): raise Exception("Couldn't use requests") URL_READER_CLASSNAME = "RequestsUrlReader" except: if sys.version_info > (3, 0): import urllib.request as urllib_request # pylint: disable=F0401, E0611 import urllib.error as urllib_error # pylint: disable=F0401, E0611 else: import urllib2 as urllib_request urllib_error = urllib_request URL_READER_CLASSNAME = "Urllib2UrlReader" class ContentSource(object): url = None def open(self): # open can be explicitly called to 'open', but might be implicitly # called from read() pass def read(self, size=-1): raise NotImplementedError() def set_start_pos(self, offset): """ Implemented if the ContentSource supports seeking within content. Used to resume failed transfers. """ class SetStartPosNotImplementedError(NotImplementedError): # This is only here to satisfy pylint W0223. 
Users # have to accept that it may NotImplementedError pass _pylint = offset raise SetStartPosNotImplementedError() def __enter__(self): self.open() return self def __exit__(self, etype, value, trace): self.close() def close(self): raise NotImplementedError() class UrlContentSource(ContentSource): fd = None def __init__(self, url, mirrors=None): if mirrors is None: mirrors = [] self.mirrors = mirrors self.input_url = url self.url = url self.offset = None self.fd = None def _urlinfo(self, url): parsed = urlparse.urlparse(url) if not parsed.scheme: if url.startswith("/"): url = "file://%s" % url else: url = "file://%s/%s" % (os.getcwd(), url) parsed = urlparse.urlparse(url) if parsed.scheme == "file": def binopen(path, offset=None): f = open(path, "rb") if offset is not None: f.seek(offset) return f return (url, binopen, (parsed.path,)) else: return (url, URL_READER, (url,)) def _open(self): for url in [self.input_url] + self.mirrors: try: (normurl, opener, oargs) = self._urlinfo(url) self.url = normurl return opener(*oargs, offset=self.offset) except IOError as e: if e.errno != errno.ENOENT: raise continue myerr = IOError("Unable to open %s. mirrors=%s" % (self.input_url, self.mirrors)) myerr.errno = errno.ENOENT raise myerr def open(self): # pylint: disable=E0202 if self.fd is None: self.fd = self._open() def read(self, size=-1): # pylint: disable=E0202 if self.fd is None: self.open() return self.fd.read(size) def set_start_pos(self, offset): if self.fd is not None: raise Exception("can't set start pos after open()") self.offset = offset def close(self): if self.fd: self.fd.close() self.fd = None class FdContentSource(ContentSource): def __init__(self, fd, url=None): self.fd = fd self.url = url def read(self, size=-1): return self.fd.read(size) def close(self): self.fd.close() class IteratorContentSource(ContentSource): def __init__(self, itgen, url=None): self.itgen = itgen self.url = url self.r_iter = None self.leftover = bytes() self.consumed = False def open(self): if self.r_iter: return try: self.r_iter = self.itgen() except Exception as exc: if self.is_enoent(exc): enoent = IOError(exc) enoent.errno = errno.ENOENT raise enoent raise exc def is_enoent(self, exc): return (isinstance(exc, IOError) and exc.errno == errno.ENOENT) def read(self, size=None): self.open() if self.consumed: return bytes() if (size is None or size < 0): # read everything ret = self.leftover self.leftover = bytes() for buf in self.r_iter: ret += buf self.consumed = True return ret ret = bytes() if self.leftover: if len(self.leftover) > size: ret = self.leftover[0:size] self.leftover = self.leftover[size:] return ret ret = self.leftover self.leftover = bytes() while True: try: ret += next(self.r_iter) if len(ret) >= size: self.leftover = ret[size:] ret = ret[0:size] break except StopIteration: self.consumed = True break return ret def close(self): pass class MemoryContentSource(FdContentSource): def __init__(self, url=None, content=""): if isinstance(content, str): content = content.encode('utf-8') fd = io.BytesIO(content) if url is None: url = "MemoryContentSource://undefined" super(MemoryContentSource, self).__init__(fd=fd, url=url) class UrlReader(object): def read(self, size=-1): raise NotImplementedError() def close(self): raise NotImplementedError() class Urllib2UrlReader(UrlReader): def __init__(self, url, offset=None): (url, username, password) = parse_url_auth(url) self.url = url if username is None: opener = urllib_request.urlopen else: mgr = urllib_request.HTTPPasswordMgrWithDefaultRealm() 
mgr.add_password(None, url, username, password) handler = urllib_request.HTTPBasicAuthHandler(mgr) opener = urllib_request.build_opener(handler).open try: req = urllib_request.Request(url) if offset is not None: req.add_header('Range', 'bytes=%d-' % offset) self.req = opener(req) except urllib_error.HTTPError as e: if e.code == 404: myerr = IOError("Unable to open %s" % url) myerr.errno = errno.ENOENT raise myerr raise e def read(self, size=-1): return self.req.read(size) def close(self): return self.req.close() class RequestsUrlReader(UrlReader): # This provides a url reader that supports deflate/gzip encoding # but still implements 'read'. # r = RequestsUrlReader(http://example.com) # r.read(10) # r.close() def __init__(self, url, buflen=None, offset=None): self.url = url (url, user, password) = parse_url_auth(url) if user is None: auth = None else: auth = (user, password) headers = None if offset is not None: headers = {'Range': 'bytes=%d-' % offset} self.req = requests.get(url, stream=True, auth=auth, headers=headers) self.r_iter = None if buflen is None: buflen = READ_BUFFER_SIZE self.buflen = buflen self.leftover = bytes() self.consumed = False if (self.req.status_code == requests.codes.NOT_FOUND): # pylint: disable=E1101 myerr = IOError("Unable to open %s" % url) myerr.errno = errno.ENOENT raise myerr ce = self.req.headers.get('content-encoding', '').lower() if 'gzip' in ce or 'deflate' in ce: self._read = self.read_compressed else: self._read = self.read_raw def read(self, size=-1): if size < 0: size = None return self._read(size) def read_compressed(self, size=None): if not self.r_iter: self.r_iter = self.req.iter_content(self.buflen) if self.consumed: return bytes() if (size is None or size < 0): # read everything ret = self.leftover self.leftover = bytes() for buf in self.r_iter: ret += buf self.consumed = True return ret ret = bytes() if self.leftover: if len(self.leftover) > size: ret = self.leftover[0:size] self.leftover = self.leftover[size:] return ret ret = self.leftover self.leftover = bytes() while True: try: ret += next(self.r_iter) if len(ret) >= size: self.leftover = ret[size:] ret = ret[0:size] break except StopIteration: self.consumed = True break return ret def read_raw(self, size=None): return self.req.raw.read(size) def close(self): self.req.close() def parse_url_auth(url): parsed = urlparse.urlparse(url) authtok = "%s:%s@" % (parsed.username, parsed.password) if parsed.netloc.startswith(authtok): url = url.replace(authtok, "", 1) return (url, parsed.username, parsed.password) if URL_READER_CLASSNAME == "RequestsUrlReader": URL_READER = RequestsUrlReader elif URL_READER_CLASSNAME == "Urllib2UrlReader": URL_READER = Urllib2UrlReader else: raise Exception("Unknown URL_READER_CLASSNAME: %s" % URL_READER_CLASSNAME) # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/simplestreams/filters.py0000644000000000000000000000424112314577450020615 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. 
# # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . from simplestreams import util import re class ItemFilter(object): def __init__(self, content, noneval=""): rparsefmt = r"(\w+)[ ]*([!]{0,1}[=~])[ ]*(.*)[ ]*$" parsed = re.match(rparsefmt, content) if not parsed: raise ValueError("Unable to parse expression %s" % content) (key, op, val) = parsed.groups() if op in ("!=", "="): self._matches = val.__eq__ elif op in ("!~", "~"): self._matches = re.compile(val).search else: raise ValueError("Bad parsing of %s" % content) self.negator = (op[0] != "!") self.op = op self.key = key self.value = val self.content = content self.noneval = noneval def __str__(self): return "%s %s %s [none=%s]" % (self.key, self.op, self.value, self.noneval) def __repr__(self): return self.__str__() def matches(self, item): val = str(item.get(self.key, self.noneval)) return (self.negator == bool(self._matches(val))) def get_filters(filters, noneval=""): flist = [] for f in filters: flist.append(ItemFilter(f, noneval=noneval)) return flist def filter_item(filters, data, src, pedigree): data = util.products_exdata(src, pedigree) for f in filters: if not f.matches(data): return False return True simplestreams-0.1.0~bzr341/simplestreams/log.py0000644000000000000000000000374012314577450017731 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . import logging from logging import (DEBUG, ERROR, FATAL, INFO, # pylint: disable=W0611 NOTSET, WARN, WARNING) class NullHandler(logging.Handler): def emit(self, record): pass def basicConfig(**kwargs): # basically like logging.basicConfig but only output for our logger if kwargs.get('filename'): handler = logging.FileHandler(filename=kwargs['filename'], mode=kwargs.get('filemode', 'a')) elif kwargs.get('stream'): handler = logging.StreamHandler(stream=kwargs['stream']) else: handler = NullHandler() level = kwargs.get('level', NOTSET) handler.setFormatter(logging.Formatter(fmt=kwargs.get('format'), datefmt=kwargs.get('datefmt'))) handler.setLevel(level) logging.getLogger().setLevel(level) logger = _getLogger() for h in list(logger.handlers): logger.removeHandler(h) logger.setLevel(level) logger.addHandler(handler) def _getLogger(name='sstreams'): return logging.getLogger(name) if not logging.getLogger().handlers: logging.getLogger().addHandler(NullHandler()) LOG = _getLogger() # vi: ts=4 expandtab syntax=python simplestreams-0.1.0~bzr341/simplestreams/mirrors/0000755000000000000000000000000012314577450020267 5ustar 00000000000000simplestreams-0.1.0~bzr341/simplestreams/objectstores/0000755000000000000000000000000012314577450021300 5ustar 00000000000000simplestreams-0.1.0~bzr341/simplestreams/openstack.py0000644000000000000000000001060112314577450021131 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. 
# # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . from keystoneclient.v2_0 import client as ksclient import os import re OS_ENV_VARS = ( 'OS_AUTH_TOKEN', 'OS_AUTH_URL', 'OS_CACERT', 'OS_IMAGE_API_VERSION', 'OS_IMAGE_URL', 'OS_PASSWORD', 'OS_REGION_NAME', 'OS_STORAGE_URL', 'OS_TENANT_ID', 'OS_TENANT_NAME', 'OS_USERNAME', 'OS_INSECURE' ) def load_keystone_creds(**kwargs): # either via arguments or OS_* values in environment, the kwargs # that are required are: # 'username', 'auth_url', # ('auth_token' or 'password') # ('tenant_id' or 'tenant_name') ret = {} for name in OS_ENV_VARS: lc = name.lower() # take off 'os_' short = lc[3:] if short in kwargs: ret[short] = kwargs.get(short) elif name in os.environ: # take off 'os_' ret[short] = os.environ[name] if 'insecure' in ret: if isinstance(ret['insecure'], str): ret['insecure'] = (ret['insecure'].lower() not in ("", "0", "no", "off")) else: ret['insecure'] = bool(ret['insecure']) missing = [] for req in ('username', 'auth_url'): if not ret.get(req): missing.append(req) if not (ret.get('auth_token') or ret.get('password')): missing.append("(auth_token or password)") if not (ret.get('tenant_id') or ret.get('tenant_name')): missing.append("(tenant_id or tenant_name)") if missing: raise ValueError("Need values for: %s" % missing) return ret def get_regions(client=None, services=None, kscreds=None): # if kscreds had 'region_name', then return that if kscreds and kscreds.get('region_name'): return [kscreds.get('region_name')] if client is None: creds = kscreds if creds is None: creds = load_keystone_creds() client = get_ksclient(**creds) endpoints = client.service_catalog.get_endpoints() if services is None: services = list(endpoints.keys()) regions = set() for service in services: for r in endpoints.get(service, {}): regions.add(r['region']) return list(regions) def get_ksclient(**kwargs): pt = ('username', 'password', 'tenant_id', 'tenant_name', 'auth_url', 'cacert', 'insecure') kskw = {k: kwargs.get(k) for k in pt if k in kwargs} return ksclient.Client(**kskw) def get_service_conn_info(service='image', client=None, **kwargs): # return a dict with token, insecure, cacert, endpoint if not client: client = get_ksclient(**kwargs) endpoint = _get_endpoint(client, service, **kwargs) return {'token': client.auth_token, 'insecure': kwargs.get('insecure'), 'cacert': kwargs.get('cacert'), 'endpoint': endpoint, 'tenant_id': client.tenant_id} def _get_endpoint(client, service, **kwargs): """Get an endpoint using the provided keystone client.""" endpoint_kwargs = { 'service_type': service, 'endpoint_type': kwargs.get('endpoint_type') or 'publicURL', } if kwargs.get('region_name'): endpoint_kwargs['attr'] = 'region' endpoint_kwargs['filter_value'] = kwargs.get('region_name') endpoint = client.service_catalog.url_for(**endpoint_kwargs) return _strip_version(endpoint) def _strip_version(endpoint): """Strip a version from the last component of an endpoint if present""" # Get
rid of trailing '/' if present if endpoint.endswith('/'): endpoint = endpoint[:-1] url_bits = endpoint.split('/') # regex to match 'v1' or 'v2.0' etc if re.match(r'v\d+\.?\d*', url_bits[-1]): endpoint = '/'.join(url_bits[:-1]) return endpoint simplestreams-0.1.0~bzr341/simplestreams/util.py0000644000000000000000000004064312314577450020130 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . import errno import hashlib import os import re import subprocess import tempfile import time import json import simplestreams.contentsource as cs from simplestreams.log import LOG try: ALGORITHMS = list(getattr(hashlib, 'algorithms')) except AttributeError: ALGORITHMS = list(hashlib.algorithms_available) # pylint: disable=E1101 ALIASNAME = "_aliases" PGP_SIGNED_MESSAGE_HEADER = "-----BEGIN PGP SIGNED MESSAGE-----" PGP_SIGNATURE_HEADER = "-----BEGIN PGP SIGNATURE-----" PGP_SIGNATURE_FOOTER = "-----END PGP SIGNATURE-----" _UNSET = object() CHECKSUMS = ("md5", "sha256", "sha512") READ_SIZE = (1024 * 10) PRODUCTS_TREE_DATA = ( ("products", "product_name"), ("versions", "version_name"), ("items", "item_name"), ) PRODUCTS_TREE_HIERARCHY = [_k[0] for _k in PRODUCTS_TREE_DATA] class SignatureMissingException(Exception): pass try: # python2 _STRING_TYPES = (str, basestring, unicode) except NameError: # python3 _STRING_TYPES = (str,) def stringitems(data): f = {} for k, v in data.items(): if isinstance(v, _STRING_TYPES): f[k] = v elif isinstance(v, (int, float)): f[k] = str(v) return f def products_exdata(tree, pedigree, include_top=True, insert_fieldnames=True): # given 'tree' and 'pedigree' return a 'flattened' dict that contains # entries for all attributes of this item those that apply to its pedigree harchy = PRODUCTS_TREE_DATA exdata = {} if include_top and tree: exdata.update(stringitems(tree)) clevel = tree for (n, key) in enumerate(pedigree): dictname, fieldname = harchy[n] clevel = clevel.get(dictname, {}).get(key, {}) exdata.update(stringitems(clevel)) if insert_fieldnames: exdata[fieldname] = key return exdata def products_set(tree, data, pedigree): harchy = PRODUCTS_TREE_HIERARCHY cur = tree for n in range(0, len(pedigree)): if harchy[n] not in cur: cur[harchy[n]] = {} cur = cur[harchy[n]] if n != (len(pedigree) - 1): if pedigree[n] not in cur: cur[pedigree[n]] = {} cur = cur[pedigree[n]] cur[pedigree[-1]] = data def products_del(tree, pedigree): harchy = PRODUCTS_TREE_HIERARCHY cur = tree for n in range(0, len(pedigree)): if harchy[n] not in cur: return cur = cur[harchy[n]] if n == (len(pedigree) - 1): break if pedigree[n] not in cur: return cur = cur[pedigree[n]] if pedigree[-1] in cur: del cur[pedigree[-1]] def products_prune(tree): for prodname in list(tree.get('products', {}).keys()): keys = list(tree['products'][prodname].get('versions', {}).keys()) for vername in keys: vtree = tree['products'][prodname]['versions'][vername] for itemname in 
list(vtree.get('items', {}).keys()): if not vtree['items'][itemname]: del vtree['items'][itemname] if 'items' not in vtree or not vtree['items']: del tree['products'][prodname]['versions'][vername] if ('versions' not in tree['products'][prodname] or not tree['products'][prodname]['versions']): del tree['products'][prodname] if 'products' in tree and not tree['products']: del tree['products'] def walk_products(tree, cb_product=None, cb_version=None, cb_item=None, ret_finished=_UNSET): # walk a product tree. callbacks are called with (item, tree, (pedigree)) for prodname, proddata in tree['products'].items(): if cb_product: ret = cb_product(proddata, tree, (prodname,)) if ret_finished != _UNSET and ret == ret_finished: return if (not cb_version and not cb_item) or 'versions' not in proddata: continue for vername, verdata in proddata['versions'].items(): if cb_version: ret = cb_version(verdata, tree, (prodname, vername)) if ret_finished != _UNSET and ret == ret_finished: return if not cb_item or 'items' not in verdata: continue for itemname, itemdata in verdata['items'].items(): ret = cb_item(itemdata, tree, (prodname, vername, itemname)) if ret_finished != _UNSET and ret == ret_finished: return def expand_tree(tree, refs=None, delete=False): if refs is None: refs = tree.get(ALIASNAME, None) expand_data(tree, refs, delete) def expand_data(data, refs=None, delete=False): if isinstance(data, dict): if isinstance(refs, dict): for key in list(data.keys()): if key == ALIASNAME: continue ref = refs.get(key) if not ref: continue value = data.get(key) if value and isinstance(value, str): data.update(ref[value]) if delete: del data[key] for key in data: expand_data(data[key], refs) elif isinstance(data, list): for item in data: expand_data(item, refs) def resolve_work(src, target, maxnum=None, keep=False, itemfilter=None, sort_reverse=True): # if more than maxnum items are in src, only the most recent maxnum will be # stored in target. If keep is true, then the most recent maxnum items # will be kept in target even if they are no longer in src. # if keep is false the number in target will never be greater than that # in src. 
add = [] remove = [] reverse = sort_reverse if maxnum is None and keep: raise TypeError("maxnum(%s) cannot be None if keep is True" % maxnum) if not (maxnum is None or isinstance(maxnum, int)): raise TypeError("maxnum(%s) must be integer or None" % maxnum) if not (keep is None or isinstance(keep, int)): raise TypeError("keep(%s) must be integer or None" % keep) # Ensure that all source items are passed through filters # In case the filters have changed from the last run for item in sorted(src, reverse=reverse): if itemfilter is None or itemfilter(item): if item not in target: add.append(item) for item in sorted(target, reverse=reverse): if item not in src: remove.append(item) if keep and len(remove): after_add = len(target) + len(add) while len(remove) and (maxnum > (after_add - len(remove))): remove.pop(0) mtarget = sorted([f for f in target + add if f not in remove], reverse=reverse) if maxnum is not None and len(mtarget) > maxnum: for item in mtarget[maxnum:]: if item in target: remove.append(item) else: add.pop(add.index(item)) remove = sorted(remove, reverse=bool(not reverse)) return(add, remove) def policy_read_signed(content, path, keyring=None): # pylint: disable=W0613 # convenience wrapper around 'read_signed' for use MirrorReader policy return read_signed(content=content, keyring=keyring) def read_signed(content, keyring=None): # ensure that content is signed by a key in keyring. # if no keyring given use default. if content.startswith(PGP_SIGNED_MESSAGE_HEADER): # http://rfc-ref.org/RFC-TEXTS/2440/chapter7.html cmd = ["gpg", "--batch", "--verify"] if keyring: cmd.append("--keyring=%s" % keyring) cmd.append("-") try: _outerr = subp(cmd, data=content) except subprocess.CalledProcessError as e: LOG.debug("failed: %s\n out=%s\n err=%s" % (' '.join(cmd), e.output[0], e.output[1])) raise e ret = {'body': '', 'signature': '', 'garbage': ''} lines = content.splitlines() i = 0 for i in range(0, len(lines)): if lines[i] == PGP_SIGNED_MESSAGE_HEADER: mode = "header" continue elif mode == "header": if lines[i] != "": mode = "body" continue elif lines[i] == PGP_SIGNATURE_HEADER: mode = "signature" continue elif lines[i] == PGP_SIGNATURE_FOOTER: mode = "garbage" continue # dash-escaped content in body if lines[i].startswith("- ") and mode == "body": ret[mode] += lines[i][2:] + "\n" else: ret[mode] += lines[i] + "\n" return ret['body'] else: raise SignatureMissingException("No signature found!") def load_content(content): if isinstance(content, bytes): content = content.decode('utf-8') return json.loads(content) def dump_data(data): return json.dumps(data, indent=1).encode('utf-8') def timestamp(ts=None): return time.strftime("%a, %d %b %Y %H:%M:%S +0000", time.gmtime(ts)) def item_checksums(item): return {k: item[k] for k in CHECKSUMS if k in item} class checksummer(object): _hasher = None algorithm = None expected = None def __init__(self, checksums): # expects a dict of hashname/value if not checksums: self._hasher = None return for meth in CHECKSUMS: if meth in checksums and meth in ALGORITHMS: self._hasher = hashlib.new(meth) self.algorithm = meth self.expected = checksums.get(self.algorithm, None) if not self._hasher: raise TypeError("Unable to find suitable hash algorithm") def update(self, data): if self._hasher is None: return self._hasher.update(data) def hexdigest(self): if self._hasher is None: return None return self._hasher.hexdigest() def check(self): return (self.expected is None or self.expected == self.hexdigest()) def move_dups(src, target, sticky=None): # given src = {e1: 
{a:a, b:c}, e2: {a:a, b:d, e:f}} # update target with {a:a}, and delete 'a' from entries in dict1 # if a key exists in target, it will not be copied or deleted. if sticky is None: sticky = [] allkeys = set() for entry in src: allkeys.update(list(src[entry].keys())) candidates = allkeys.difference(sticky) updates = {} for entry in list(src.keys()): for k, v in src[entry].items(): if k not in candidates: continue if k in updates: if v != updates[k] or not isinstance(v, str): del updates[k] candidates.remove(k) else: if isinstance(v, str) and target.get(k, v) == v: updates[k] = v else: candidates.remove(k) for entry in list(src.keys()): for k in list(src[entry].keys()): if k in updates: del src[entry][k] target.update(updates) def products_condense(ptree, sticky=None): # walk a products tree, copying up item keys as far as they'll go def call_move_dups(cur, _tree, pedigree): (_mtype, stname) = (("product", "versions"), ("version", "items"))[len(pedigree) - 1] move_dups(cur.get(stname, {}), cur, sticky=sticky) walk_products(ptree, cb_version=call_move_dups) walk_products(ptree, cb_product=call_move_dups) def assert_safe_path(path): if path == "" or path is None: return path = str(path) if os.path.isabs(path): raise TypeError("Path '%s' is absolute path" % path) bad = (".." + os.path.sep, "..." + os.path.sep) for tok in bad: if path.startswith(tok): raise TypeError("Path '%s' starts with '%s'" % (path, tok)) bad = (os.path.sep + ".." + os.path.sep, os.path.sep + "..." + os.path.sep) for tok in bad: if tok in path: raise TypeError("Path '%s' contains '%s'" % (path, tok)) def read_url(url): return cs.UrlContentSource(url).read() def mkdir_p(path): try: os.makedirs(path) except OSError as exc: if exc.errno != errno.EEXIST: raise return def get_local_copy(contentsource, read_size=READ_SIZE): (tfd, tpath) = tempfile.mkstemp() tfile = os.fdopen(tfd, "wb") try: LOG.debug("getting local copy of %s", contentsource.url) while True: buf = contentsource.read(read_size) tfile.write(buf) if len(buf) != read_size: break return (tpath, True) except Exception as e: os.unlink(tpath) raise e def subp(args, data=None, capture=True, shell=False, env=None): if not capture: stdout, stderr = (None, None) else: stdout, stderr = (subprocess.PIPE, subprocess.PIPE) sp = subprocess.Popen(args, stdout=stdout, stderr=stderr, stdin=subprocess.PIPE, shell=shell, env=env) if isinstance(data, str): data = data.encode('utf-8') (out, err) = sp.communicate(data) if isinstance(out, bytes): out = out.decode('utf-8') if isinstance(err, bytes): err = err.decode('utf-8') rc = sp.returncode # pylint: disable=E1101 if rc != 0: raise subprocess.CalledProcessError(rc, args, output=(out, err)) return (out, err) def get_sign_cmd(path, output=None, inline=False): cmd = ['gpg'] defkey = os.environ.get('SS_GPG_DEFAULT_KEY') if defkey: cmd.extend(['--default-key', defkey]) batch = os.environ.get('SS_GPG_BATCH', "1").lower() if batch not in ("0", "false"): cmd.append('--batch') if output: cmd.extend(['--output', output]) if inline: cmd.append('--clearsign') else: cmd.extend(['--armor', '--detach-sign']) cmd.extend([path]) return cmd def make_signed_content_paths(content): # loads json content. 
If it is a products:1.0 file # then it fixes up 'path' elements to point to signed names (.sjson) # returns tuple of (changed, updated) data = json.loads(content) if data.get("format") != "index:1.0": return (False, None) for content_ent in list(data.get('index', {}).values()): path = content_ent.get('path') if path.endswith(".json"): content_ent['path'] = signed_fname(path, inline=True) return (True, json.dumps(data, indent=1)) def signed_fname(fname, inline=True): if inline: sfname = fname[0:-len(".json")] + ".sjson" else: sfname = fname + ".gpg" return sfname def rm_f_file(fname, skip=None): if skip is None: skip = [] if fname in skip: return try: os.unlink(fname) except OSError as exc: if exc.errno != errno.ENOENT: raise def sign_file(fname, inline=True, outfile=None): if outfile is None: outfile = signed_fname(fname, inline=inline) rm_f_file(outfile, skip=["-"]) return subp(get_sign_cmd(path=fname, output=outfile, inline=inline))[0] def sign_content(content, outfile="-", inline=True): rm_f_file(outfile, skip=["-"]) return subp(args=get_sign_cmd(path="-", output=outfile, inline=inline), data=content)[0] def path_from_mirror_url(mirror, path): if path is not None: return (mirror, path) path_regex = "streams/v1/.*[.](sjson|json)$" result = re.search(path_regex, mirror) if result: path = mirror[result.start():] mirror = mirror[:result.start()] else: path = "streams/v1/index.sjson" return (mirror, path) # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/simplestreams/mirrors/__init__.py0000644000000000000000000005203512314577450022405 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . import errno import io import json import simplestreams.filters as filters import simplestreams.util as util import simplestreams.contentsource as cs from simplestreams.log import LOG class MirrorReader(object): def __init__(self, policy=util.policy_read_signed): """ policy should be a function which returns the extracted payload or raises an exception if the policy is violated. """ self.policy = policy def load_products(self, path): _, content = self.read_json(path) return util.load_content(content) def read_json(self, path): raw = self.source(path).read().decode('utf-8') return raw, self.policy(content=raw, path=path) def source(self, path): raise NotImplementedError() class MirrorWriter(object): def load_products(self, path=None, content_id=None): raise NotImplementedError() def sync_products(self, reader, path=None, products=None, content=None): # reader: a Reader for opening files referenced in products # path: the path of where to store this. # if path is None, do not store the products file itself # products: a products file in products:1.0 format # content: a rendered products tree, allowing you to store # externally signed content. # # One of content, path, or products is required. 
# * if path is not given, no rendering of products tree will be stored # * if content is None, it will be loaded from reader(path).read() # or rendered (json.dumps(products)) from products. # * if products is None, it will be loaded from content raise NotImplementedError() def sync_index(self, reader, path=None, src=None, content=None): # reader: a Reader for opening files referenced in index or products # files # path: the path of where to store this. # if path is None, do not store the index file itself # src: a dictionary in index:1.0 format # content: a rendered products tree, allowing you to store # externally signed content. # # One of content, path, or products is required. # * if path not given, no rendering of products tree will be stored # * if content is None, it will be loaded from reader(path).read() # or rendered (json.dumps(products)) from products. # * if products is None, it will be loaded from content raise NotImplementedError() def sync(self, reader, path): content, payload = reader.read_json(path) data = util.load_content(payload) fmt = data.get("format", "UNSPECIFIED") if fmt == "products:1.0": return self.sync_products(reader, path, data, content) elif fmt == "index:1.0": return self.sync_index(reader, path, data, content) else: raise TypeError("Unknown format '%s' in '%s'" % (fmt, path)) ## Index Operations ## def filter_index_entry(self, data, src, pedigree): # src is source index tree. # data is src['index'][ped[0]] _pylint = (data, src, pedigree) return True def insert_index(self, path, src, content): # src is the source index tree # content is None or a json rendering (possibly signed) of src _pylint = (path, src, content) def insert_index_entry(self, data, src, pedigree, contentsource): # src is the top level index (index:1.0 format) # data is src['index'][pedigree[0]] # contentsource is a ContentSource if 'path' exists in data or None _pylint = (data, src, pedigree, contentsource) ## Products Operations ## def filter_product(self, data, src, target, pedigree): # src and target are top level products:1.0 # data is src['products'][ped[0]] _pylint = (data, src, target, pedigree) return True def filter_version(self, data, src, target, pedigree): # src and target are top level products:1.0 # data is src['products'][ped[0]]['versions'][ped[1]] _pylint = (data, src, target, pedigree) return True def filter_item(self, data, src, target, pedigree): # src and target are top level products:1.0 # data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]] _pylint = (data, src, target, pedigree) return True def insert_products(self, path, target, content): # path is the path to store data (where it came from on source mirror) # target is the target products:1.0 tree # content is None or a json rendering (possibly signed) of src _pylint = (path, target, content) def insert_product(self, data, src, target, pedigree): # src and target are top level products:1.0 # data is src['products'][ped[0]] _pylint = (data, src, target, pedigree) def insert_version(self, data, src, target, pedigree): # src and target are top level products:1.0 # data is src['products'][ped[0]]['versions'][ped[1]] _pylint = (data, src, target, pedigree) def insert_item(self, data, src, target, pedigree, contentsource): # src and target are top level products:1.0 # data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]] # contentsource is a ContentSource if 'path' exists in data or None _pylint = (data, src, target, pedigree, contentsource) def remove_product(self, data, src, target, 
pedigree): # src and target are top level products:1.0 # data is src['products'][ped[0]] _pylint = (data, src, target, pedigree) def remove_version(self, data, src, target, pedigree): # src and target are top level products:1.0 # data is src['products'][ped[0]]['versions'][ped[1]] _pylint = (data, src, target, pedigree) def remove_item(self, data, src, target, pedigree): # src and target are top level products:1.0 # data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]] _pylint = (data, src, target, pedigree) class UrlMirrorReader(MirrorReader): def __init__(self, prefix, mirrors=None, policy=util.policy_read_signed): super(UrlMirrorReader, self).__init__(policy=policy) self._cs = cs.UrlContentSource if mirrors is None: mirrors = [] self.mirrors = mirrors self.prefix = prefix self._trailing_slash_checked = self.prefix.endswith("/") def source(self, path): mirrors = [m + path for m in self.mirrors] if self._trailing_slash_checked: return self._cs(self.prefix + path, mirrors=mirrors) # A little hack to fix up the user's path. It's fairly common to # specify URLs without a trailing slash, so we try to that here as # well. We open, then close and then get a new one (so the one we # returned is not yet open (LP: #1237658) self._trailing_slash_checked = True try: csource = self._cs(self.prefix + path, mirrors=None) csource.open() csource.read(1024) csource.close() except Exception as e: if isinstance(e, IOError) and (e.errno == errno.ENOENT): LOG.warn("got ENOENT for (%s, %s), trying with trailing /", self.prefix, path) self.prefix = self.prefix + '/' else: # this raised exception, but it was sneaky to do it # so just ignore it. LOG.debug("trailing / check on (%s, %s) resulted in %s", self.prefix, path, e) return self._cs(self.prefix + path, mirrors=mirrors) class ObjectStoreMirrorReader(MirrorReader): def __init__(self, objectstore, policy=util.policy_read_signed): super(ObjectStoreMirrorReader, self).__init__(policy=policy) self.objectstore = objectstore def source(self, path): return self.objectstore.source(path) class BasicMirrorWriter(MirrorWriter): def __init__(self, config=None): super(BasicMirrorWriter, self).__init__() if config is None: config = {} self.config = config def load_products(self, path=None, content_id=None): super(BasicMirrorWriter, self).load_products(path, content_id) def sync_index(self, reader, path=None, src=None, content=None): (src, content) = _get_data_content(path, src, content, reader) util.expand_tree(src) check_tree_paths(src) itree = src.get('index') for content_id, index_entry in itree.items(): if not self.filter_index_entry(index_entry, src, (content_id,)): continue epath = index_entry.get('path', None) mycs = None if epath: if index_entry.get('format') in ("index:1.0", "products:1.0"): self.sync(reader, path=epath) mycs = reader.source(epath) self.insert_index_entry(index_entry, src, (content_id,), mycs) self.insert_index(path, src, content) def sync_products(self, reader, path=None, src=None, content=None): (src, content) = _get_data_content(path, src, content, reader) util.expand_tree(src) check_tree_paths(src) content_id = src['content_id'] target = self.load_products(path, content_id) if not target: target = util.stringitems(src) util.expand_tree(target) stree = src.get('products', {}) if 'products' not in target: target['products'] = {} tproducts = target['products'] filtered_products = [] prodname = None for prodname, product in stree.items(): if not self.filter_product(product, src, target, (prodname,)): 
filtered_products.append(prodname) continue if prodname not in tproducts: tproducts[prodname] = util.stringitems(product) tproduct = tproducts[prodname] if 'versions' not in tproduct: tproduct['versions'] = {} src_filtered_items = [] def _filter(itemkey): ret = self.filter_version(product['versions'][itemkey], src, target, (prodname, itemkey)) if not ret: src_filtered_items.append(itemkey) return ret (to_add, to_remove) = util.resolve_work( src=list(product.get('versions', {}).keys()), target=list(tproduct.get('versions', {}).keys()), maxnum=self.config.get('max_items'), keep=self.config.get('keep_items'), itemfilter=_filter) LOG.info("%s/%s: to_add=%s to_remove=%s", content_id, prodname, to_add, to_remove) tversions = tproduct['versions'] skipped_versions = [] for vername in to_add: version = product['versions'][vername] if vername not in tversions: tversions[vername] = util.stringitems(version) added_items = [] for itemname, item in version.get('items', {}).items(): pgree = (prodname, vername, itemname) if not self.filter_item(item, src, target, pgree): continue added_items.append(itemname) ipath = item.get('path', None) ipath_cs = None if ipath: ipath_cs = reader.source(ipath) if reader else None self.insert_item(item, src, target, pgree, ipath_cs) if len(added_items): # do not insert versions that had all items filtered self.insert_version(version, src, target, (prodname, vername)) else: skipped_versions.append(vername) for vername in skipped_versions: if vername in tproduct['versions']: del tproduct['versions'][vername] if self.config.get('delete_filtered_items', False): tkeys = tproduct.get('versions', {}).keys() for v in src_filtered_items: if v not in to_remove and v in tkeys: to_remove.append(v) LOG.info("After deletions %s/%s: to_add=%s to_remove=%s", content_id, prodname, to_add, to_remove) for vername in to_remove: tversion = tversions[vername] for itemname in list(tversion.get('items', {}).keys()): self.remove_item(tversion['items'][itemname], src, target, (prodname, vername, itemname)) self.remove_version(tversion, src, target, (prodname, vername)) del tversions[vername] self.insert_product(tproduct, src, target, (prodname,)) ## FIXME: below will remove products if they're in target ## (result of load_products) but not in the source products. ## that could accidentally delete a lot. 
## del_products = [] if self.config.get('delete_products', False): del_products.extend([p for p in list(tproducts.keys()) if p not in stree]) if self.config.get('delete_filtered_products', False): del_products.extend([p for p in filtered_products if p not in stree]) for prodname in del_products: ## FIXME: we remove a product here, but unless that acts ## recursively, nothing will remove the items in that product self.remove_product(tproducts[prodname], src, target, (prodname,)) del tproducts[prodname] self.insert_products(path, target, content) # ObjectStoreMirrorWriter stores data in /.data/ class ObjectStoreMirrorWriter(BasicMirrorWriter): def __init__(self, config, objectstore): super(ObjectStoreMirrorWriter, self).__init__(config=config) self.store = objectstore def products_data_path(self, content_id): return ".data/%s" % content_id def _reference_count_data_path(self): return ".data/references.json" def _load_rc_dict(self): try: raw = self.source(self._reference_count_data_path()).read() return json.load(io.StringIO(raw.decode('utf-8'))) except IOError as e: if e.errno == errno.ENOENT: return {} raise def _persist_rc_dict(self, rc): source = cs.MemoryContentSource(content=json.dumps(rc)) self.store.insert(self._reference_count_data_path(), source) def _build_rc_id(self, src, pedigree): return '/'.join([src['content_id']] + list(pedigree)) def _inc_rc(self, path, src, pedigree): rc = self._load_rc_dict() id_ = self._build_rc_id(src, pedigree) if path not in rc: rc[path] = [id_] else: rc[path].append(id_) self._persist_rc_dict(rc) def _dec_rc(self, path, src, pedigree): rc = self._load_rc_dict() id_ = self._build_rc_id(src, pedigree) entry = rc.get(path, None) ok_to_delete = False if entry is not None: if len(entry) == 1: del rc[path] ok_to_delete = True else: rc[path] = list(filter(lambda x: x != id_, rc[path])) self._persist_rc_dict(rc) return ok_to_delete def load_products(self, path=None, content_id=None): if content_id: try: dpath = self.products_data_path(content_id) return util.load_content(self.source(dpath).read()) except IOError as e: if e.errno != errno.ENOENT: raise if path: try: return util.load_content(self.source(path).read()) except IOError as e: if e.errno != errno.ENOENT: raise return {} raise TypeError("unable to load_products with no path") def source(self, path): return self.store.source(path) def insert_item(self, data, src, target, pedigree, contentsource): util.products_set(target, data, pedigree) if 'path' not in data: return if not self.config.get('item_download', True): return LOG.debug("inserting %s to %s", contentsource.url, data['path']) self.store.insert(data['path'], contentsource, checksums=util.item_checksums(data), mutable=False, size=data.get('size')) self._inc_rc(data['path'], src, pedigree) def insert_index_entry(self, data, src, pedigree, contentsource): epath = data.get('path', None) if not epath: return self.store.insert(epath, contentsource, checksums=util.item_checksums(data)) def insert_products(self, path, target, content): dpath = self.products_data_path(target['content_id']) self.store.insert_content(dpath, util.dump_data(target)) if not path: return if not content: content = util.dump_data(target) self.store.insert_content(path, content) def insert_index(self, path, src, content): if not path: return if not content: content = util.dump_data(src) self.store.insert_content(path, content) def remove_item(self, data, src, target, pedigree): util.products_del(target, pedigree) if 'path' not in data: return if self._dec_rc(data['path'], src, 
pedigree): self.store.remove(data['path']) class ObjectFilterMirror(ObjectStoreMirrorWriter): def __init__(self, *args, **kwargs): super(ObjectFilterMirror, self).__init__(*args, **kwargs) self.filters = self.config.get('filters', []) def filter_item(self, data, src, target, pedigree): return filters.filter_item(self.filters, data, src, pedigree) class DryRunMirrorWriter(ObjectFilterMirror): def __init__(self, *args, **kwargs): super(DryRunMirrorWriter, self).__init__(*args, **kwargs) self.downloading = [] self.removing = [] # All insert/remove operations are noops. def noop(*args): pass insert_index = noop insert_index_entry = noop insert_products = noop insert_product = noop insert_version = noop remove_product = noop remove_version = noop def insert_item(self, data, src, target, pedigree, contentsource): data = util.products_exdata(src, pedigree) if 'size' in data and 'path' in data: self.downloading.append( (pedigree, data['path'], int(data['size']))) def remove_item(self, data, src, target, pedigree): data = util.products_exdata(src, pedigree) if 'size' in data and 'path' in data: self.removing.append( (pedigree, data['path'], int(data['size']))) @property def size(self): downloading = sum([size for _, _, size in self.downloading]) removing = sum([size for _, _, size in self.removing]) return int(downloading - removing) def _get_data_content(path, data, content, reader): if content is None and path: _, content = reader.read(path) if isinstance(content, bytes): content = content.decode('utf-8') if data is None and content: data = util.load_content(content) if not data: raise ValueError("Data could not be loaded. " "Path or content is required") return (data, content) def check_tree_paths(tree, fmt=None): if fmt is None: fmt = tree.get('format') if fmt == "products:1.0": def check_path(item, tree, pedigree): _pylint = (tree, pedigree) util.assert_safe_path(item.get('path')) util.walk_products(tree, cb_item=check_path) elif fmt == "index:1.0": index = tree.get('index') for content_id in index: util.assert_safe_path(index[content_id].get('path')) # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/simplestreams/mirrors/command_hook.py0000644000000000000000000002364112314577450023305 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . 
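# Example usage (illustrative sketch, not part of this distribution; the
# hook script paths and mirror URL below are placeholders). A
# CommandHookMirror delegates each mirror operation to external commands
# named in its config, and is driven through the inherited sync() method:
#
#   from simplestreams.mirrors import UrlMirrorReader
#   from simplestreams.mirrors.command_hook import CommandHookMirror
#
#   cfg = {'load_products': '/usr/local/bin/list-products',
#          'insert_item': '/usr/local/bin/store-item',
#          'product_load_output_format': 'serial_list'}
#   mirror = CommandHookMirror(cfg)
#   reader = UrlMirrorReader('http://cloud-images.ubuntu.com/releases/')
#   mirror.sync(reader, 'streams/v1/index.sjson')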
import simplestreams.mirrors as mirrors import simplestreams.util as util import os import errno import signal import subprocess import tempfile REQUIRED_FIELDS = ("load_products",) HOOK_NAMES = ( "filter_index_entry", "filter_item", "filter_product", "filter_version", "insert_index", "insert_index_entry", "insert_item", "insert_product", "insert_products", "insert_version", "load_products", "remove_item", "remove_product", "remove_version", ) DEFAULT_HOOK_NAME = "hook" ENV_HOOK_NAME = "HOOK" ENV_FIELDS_NAME = "FIELDS" class CommandHookMirror(mirrors.BasicMirrorWriter): """ CommandHookMirror: invoke commands to implement a SimpleStreamMirror Available command hooks: load_products: invoked to list items in the products in a given content_id. See product_load_output_format. filter_index_entry, filter_item, filter_product, filter_version: invoked to determine if the named entity should be operated on; exit 0 for "yes", 1 for "no". insert_index, insert_index_entry, insert_item, insert_product, insert_products, insert_version: invoked to insert the given entity. remove_product, remove_version, remove_item: invoked to remove the given entity. Other Configuration: product_load_output_format: one of [serial_list, json] serial_list: the default. Output should be whitespace delimited lines of product_name and version. json: output should be a json formatted dictionary shaped like products:1.0 content. Environments / Variables: When a hook is invoked, data about the relevant entity is made available in the environment. In all cases: * a special 'FIELDS' key is available which is a space delimited list of keys * a special 'HOOK' field is available that specifies which hook is being called. For an item in a products:1.0 file that has a 'path' item, the item will be downloaded and a 'path_local' field inserted into the metadata which will contain the path to the local file. If the configuration setting 'item_skip_download' is set to True, then 'path_url' will be set instead to a URL where the item can be found.
""" def __init__(self, config): if isinstance(config, str): config = util.load_content(config) check_config(config) super(CommandHookMirror, self).__init__(config=config) def load_products(self, path=None, content_id=None): (_rc, output) = self.call_hook('load_products', data={'content_id': content_id}, capture=True) fmt = self.config.get("product_load_output_format", "serial_list") loaded = load_product_output(output=output, content_id=content_id, fmt=fmt) return loaded def filter_index_entry(self, data, src, pedigree): mdata = util.stringitems(src) mdata['content_id'] = pedigree[0] mdata.update(util.stringitems(data)) (ret, _output) = self.call_hook('filter_index_entry', data=mdata, rcs=[0, 1]) return ret == 0 def filter_product(self, data, src, target, pedigree): return self._call_filter('filter_product', src, pedigree) def filter_version(self, data, src, target, pedigree): return self._call_filter('filter_version', src, pedigree) def filter_item(self, data, src, target, pedigree): return self._call_filter('filter_item', src, pedigree) def _call_filter(self, name, src, pedigree): data = util.products_exdata(src, pedigree) (ret, _output) = self.call_hook(name, data=data, rcs=[0, 1]) return ret == 0 def insert_index(self, path, src, content): return self.call_hook('insert_index', data=src, content=content, extra={'path': path}) def insert_products(self, path, target, content): return self.call_hook('insert_products', data=target, content=content, extra={'path': path}) def insert_product(self, data, src, target, pedigree): return self.call_hook('insert_product', data=util.products_exdata(src, pedigree)) def insert_version(self, data, src, target, pedigree): return self.call_hook('insert_version', data=util.products_exdata(src, pedigree)) def insert_item(self, data, src, target, pedigree, contentsource): mdata = util.products_exdata(src, pedigree) tmp_path = None tmp_del = None extra = {} if 'path' in data: extra.update({'item_url': contentsource.url}) if not self.config.get('item_skip_download', False): try: (tmp_path, tmp_del) = util.get_local_copy(contentsource) extra['path_local'] = tmp_path finally: contentsource.close() try: ret = self.call_hook('insert_item', data=mdata, extra=extra) finally: if tmp_del and os.path.exists(tmp_path): os.unlink(tmp_path) return ret def remove_product(self, data, src, target, pedigree): return self.call_hook('remove_product', data=util.products_exdata(src, pedigree)) def remove_version(self, data, src, target, pedigree): return self.call_hook('remove_version', data=util.products_exdata(src, pedigree)) def remove_item(self, data, src, target, pedigree): return self.call_hook('remove_item', data=util.products_exdata(target, pedigree)) def call_hook(self, hookname, data, capture=False, rcs=None, extra=None, content=None): command = self.config.get(hookname, self.config.get(DEFAULT_HOOK_NAME)) if not command: # return successful execution with no output return (0, '') if isinstance(command, str): command = ['sh', '-c', command] fdata = util.stringitems(data) content_file = None if content is not None: (tfd, content_file) = tempfile.mkstemp() tfile = os.fdopen(tfd, "w") tfile.write(content) tfile.close() fdata['content_file_path'] = content_file if extra: fdata.update(extra) fdata['HOOK'] = hookname try: return call_hook(command=command, data=fdata, unset=self.config.get('unset_value', None), capture=capture, rcs=rcs) finally: if content_file: os.unlink(content_file) def call_hook(command, data, unset=None, capture=False, rcs=None): env = os.environ.copy() 
data = data.copy() data[ENV_FIELDS_NAME] = ' '.join([k for k in data if k != ENV_HOOK_NAME]) mcommand = render(command, data, unset=unset) env.update(data) return run_command(mcommand, env=env, capture=capture, rcs=rcs) def render(inputs, data, unset=None): fdata = data.copy() outputs = [] for i in inputs: while True: try: outputs.append(i % fdata) break except KeyError as err: if unset is None: raise for key in err.args: fdata[key] = unset return outputs def check_config(config): missing = [] for f in REQUIRED_FIELDS: if f not in config and config.get(DEFAULT_HOOK_NAME) is None: missing.append(f) if missing: raise TypeError("Missing required config entries for %s" % missing) def load_product_output(output, content_id, fmt="serial_list"): # parse command output and return if fmt == "serial_list": # "line" format just is a list of serials that are present working = {'content_id': content_id, 'products': {}} for line in output.splitlines(): (product_id, version) = line.split(None, 1) if product_id not in working['products']: working['products'][product_id] = {'versions': {}} working['products'][product_id]['versions'][version] = {} return working elif fmt == "json": return util.load_content(output) return def run_command(cmd, env=None, capture=False, rcs=None): if not rcs: rcs = [0] if not capture: stdout = None else: stdout = subprocess.PIPE sp = subprocess.Popen(cmd, env=env, stdout=stdout, shell=False) (out, _err) = sp.communicate() rc = sp.returncode # pylint: disable=E1101 if rc == 0x80 | signal.SIGPIPE: exc = IOError("Child Received SIGPIPE: %s" % str(cmd)) exc.errno = errno.EPIPE raise exc if rc not in rcs: raise subprocess.CalledProcessError(rc, cmd) if out is None: out = '' elif isinstance(out, bytes): out = out.decode('utf-8') return (rc, out) # vi: ts=4 expandtab syntax=python simplestreams-0.1.0~bzr341/simplestreams/mirrors/glance.py0000644000000000000000000002344612314577450022103 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . 
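# Example usage (illustrative sketch; the region, content_id and mirror URL
# are placeholders). GlanceMirror reads keystone credentials from the OS_*
# environment variables and uploads each accepted item into glance:
#
#   from simplestreams.mirrors import UrlMirrorReader
#   from simplestreams.mirrors.glance import GlanceMirror
#
#   config = {'content_id': 'com.example.sync:downloads',
#             'cloud_name': 'examplecloud', 'max_items': 1}
#   mirror = GlanceMirror(config, region='RegionOne')
#   reader = UrlMirrorReader('http://cloud-images.ubuntu.com/releases/')
#   mirror.sync(reader, 'streams/v1/index.sjson')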
import simplestreams.mirrors as mirrors import simplestreams.util as util import simplestreams.openstack as openstack from simplestreams.log import LOG import copy import errno import glanceclient import os def get_glanceclient(version='1', **kwargs): pt = ('endpoint', 'token', 'insecure', 'cacert') kskw = {k: kwargs.get(k) for k in pt if k in kwargs} return glanceclient.Client(version, **kskw) def empty_iid_products(content_id): return {'content_id': content_id, 'products': {}, 'datatype': 'image-ids', 'format': 'products:1.0'} # glance mirror 'image-downloads' content into glance # if provided an object store, it will produce a 'image-ids' mirror class GlanceMirror(mirrors.BasicMirrorWriter): def __init__(self, config, objectstore=None, region=None, name_prefix=None): super(GlanceMirror, self).__init__(config=config) self.loaded_content = {} self.store = objectstore self.keystone_creds = openstack.load_keystone_creds() self.name_prefix = name_prefix or "" if region is not None: self.keystone_creds['region_name'] = region conn_info = openstack.get_service_conn_info('image', **self.keystone_creds) self.gclient = get_glanceclient(**conn_info) self.tenant_id = conn_info['tenant_id'] self.region = self.keystone_creds.get('region_name', 'nullregion') self.cloudname = config.get("cloud_name", 'nullcloud') self.crsn = '-'.join((self.cloudname, self.region,)) self.auth_url = self.keystone_creds['auth_url'] self.content_id = config.get("content_id") self.modify_hook = config.get("modify_hook") if not self.content_id: raise TypeError("content_id is required") def _cidpath(self, content_id): return "streams/v1/%s.json" % content_id def load_products(self, path=None, content_id=None): my_cid = self.content_id # glance is the definitive store. Any data loaded from the store # is secondary. 
store_t = None if self.store: try: path = self._cidpath(my_cid) store_t = util.load_content(self.store.source(path).read()) except IOError as e: if e.errno != errno.ENOENT: raise if not store_t: store_t = empty_iid_products(my_cid) glance_t = empty_iid_products(my_cid) images = self.gclient.images.list() for image in images: image = image.to_dict() if image['owner'] != self.tenant_id: continue props = image['properties'] if props.get('content_id') != my_cid: continue source_content_id = props.get('source_content_id') product = props.get('product_name') version = props.get('version_name') item = props.get('item_name') if not (version and product and item and source_content_id): LOG.warn("%s missing required fields" % image['id']) continue # get data from the datastore for this item, if it exists # and then update that with glance data (just in case different) try: item_data = util.products_exdata(store_t, (product, version, item,), include_top=False, insert_fieldnames=False) except KeyError: item_data = {} item_data.update({'name': image['name'], 'id': image['id']}) if 'owner_id' not in item_data: item_data['owner_id'] = self.tenant_id util.products_set(glance_t, item_data, (product, version, item,)) for product in glance_t['products']: glance_t['products'][product]['region'] = self.region glance_t['products'][product]['endpoint'] = self.auth_url return glance_t def filter_item(self, data, src, target, pedigree): flat = util.products_exdata(src, pedigree, include_top=False) return (flat.get('ftype') in ('disk1.img', 'disk.img') and flat.get('arch') in ('x86_64', 'amd64', 'i386')) def insert_item(self, data, src, target, pedigree, contentsource): flat = util.products_exdata(src, pedigree, include_top=False) tmp_path = None tmp_del = None name = flat.get('pubname', flat.get('name')) if not name.endswith(flat['item_name']): name += "-%s" % (flat['item_name']) t_item = flat.copy() if 'path' in t_item: del t_item['path'] props = {'content_id': target['content_id'], 'source_content_id': src['content_id']} for n in ('product_name', 'version_name', 'item_name'): props[n] = flat[n] del t_item[n] arch = flat.get('arch') if arch == "amd64": arch = "x86_64" if arch: props['architecture'] = arch fullname = self.name_prefix + name create_kwargs = { 'name': fullname, 'properties': props, 'disk_format': 'qcow2', 'container_format': 'bare', 'is_public': True, } if 'size' in data: create_kwargs['size'] = data.get('size') if 'md5' in data: create_kwargs['checksum'] = data.get('md5') try: try: (tmp_path, tmp_del) = util.get_local_copy(contentsource) if self.modify_hook: (newsize, newmd5) = call_hook(item=t_item, path=tmp_path, cmd=self.modify_hook) create_kwargs['checksum'] = newmd5 create_kwargs['size'] = newsize t_item['md5'] = newmd5 t_item['size'] = newsize finally: contentsource.close() create_kwargs['data'] = open(tmp_path, 'rb') ret = self.gclient.images.create(**create_kwargs) t_item['id'] = ret.id print("created %s: %s" % (ret.id, fullname)) finally: if tmp_del and os.path.exists(tmp_path): os.unlink(tmp_path) t_item['region'] = self.region t_item['endpoint'] = self.auth_url t_item['owner_id'] = self.tenant_id t_item['name'] = fullname util.products_set(target, t_item, pedigree) def remove_item(self, data, src, target, pedigree): util.products_del(target, pedigree) if 'id' in data: print("removing %s: %s" % (data['id'], data['name'])) self.gclient.images.delete(data['id']) def filter_index_entry(self, data, src, pedigree): return data.get('datatype') in ("image-downloads", None) def 
insert_products(self, path, target, content): if not self.store: return tree = copy.deepcopy(target) util.products_prune(tree) # stop these items from copying up when we call condense sticky = ['ftype', 'md5', 'sha256', 'size', 'name', 'id'] util.products_condense(tree, sticky=sticky) tsnow = util.timestamp() tree['updated'] = tsnow dpath = self._cidpath(tree['content_id']) LOG.info("writing data: %s", dpath) self.store.insert_content(dpath, util.dump_data(tree)) # now insert or update an index ipath = "streams/v1/index.json" try: index = util.load_content(self.store.source(ipath).read()) except IOError as exc: if exc.errno != errno.ENOENT: raise index = {"index": {}, 'format': 'index:1.0', 'updated': util.timestamp()} index['index'][tree['content_id']] = { 'updated': tsnow, 'datatype': 'image-ids', 'clouds': [{'region': self.region, 'endpoint': self.auth_url}], 'cloudname': self.cloudname, 'path': dpath, 'products': list(tree['products'].keys()), 'format': tree['format'], } LOG.info("writing data: %s", ipath) self.store.insert_content(ipath, util.dump_data(index)) def _checksum_file(fobj, read_size=util.READ_SIZE, checksums=None): if checksums is None: checksums = {'md5': None} cksum = util.checksummer(checksums=checksums) while True: buf = fobj.read(read_size) cksum.update(buf) if len(buf) != read_size: break return cksum.hexdigest() def call_hook(item, path, cmd): env = os.environ.copy() env.update(item) env['IMAGE_PATH'] = path env['FIELDS'] = ' '.join(item.keys()) + ' IMAGE_PATH' util.subp(cmd, env=env, capture=False) with open(path, "rb") as fp: md5 = _checksum_file(fp, checksums={'md5': None}) return (os.path.getsize(path), md5) # vi: ts=4 expandtab syntax=python simplestreams-0.1.0~bzr341/simplestreams/objectstores/__init__.py0000644000000000000000000001604512314577450023417 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . 
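# Example usage (illustrative sketch; the prefix directory is a
# placeholder). FileStore, defined below, writes objects beneath a
# directory prefix and hands back a ContentSource for reading them:
#
#   from simplestreams.objectstores import FileStore
#
#   store = FileStore('/srv/mirror')
#   store.insert_content('streams/v1/test.json', '{"format": "index:1.0"}')
#   data = store.source('streams/v1/test.json').read()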
import errno import hashlib import os import simplestreams.contentsource as cs import simplestreams.util as util from simplestreams.log import LOG READ_BUFFER_SIZE = 1024 * 10 class ObjectStore(object): read_size = READ_BUFFER_SIZE def insert(self, path, reader, checksums=None, mutable=True, size=None): #store content from reader.read() into path, expecting result checksum raise NotImplementedError() def insert_content(self, path, content, checksums=None, mutable=True): if not isinstance(content, bytes): content = content.encode('utf-8') self.insert(path=path, reader=cs.MemoryContentSource(content=content), checksums=checksums, mutable=mutable) def remove(self, path): #remove path from store raise NotImplementedError() def source(self, path): # return a ContentSource for the provided path raise NotImplementedError() def exists_with_checksum(self, path, checksums=None): return has_valid_checksum(path=path, reader=self.source, checksums=checksums, read_size=self.read_size) class MemoryObjectStore(ObjectStore): def __init__(self, data=None): super(MemoryObjectStore, self).__init__() if data is None: data = {} self.data = data def insert(self, path, reader, checksums=None, mutable=True, size=None): self.data[path] = reader.read() def remove(self, path): #remove path from store del self.data[path] def source(self, path): try: url = "%s://%s" % (self.__class__, path) return cs.MemoryContentSource(content=self.data[path], url=url) except KeyError: raise IOError(errno.ENOENT, '%s not found' % path) class FileStore(ObjectStore): def __init__(self, prefix, complete_callback=None): """ complete_callback is called periodically to notify users when a file is being inserted. It takes three arguments: the path that is inserted, the number of bytes downloaded, and the number of total bytes. """ self.prefix = prefix self.complete_callback = complete_callback def insert(self, path, reader, checksums=None, mutable=True, size=None, sparse=False): zeros = None if sparse is True: zeros = '\0' * self.read_size wpath = self._fullpath(path) if os.path.isfile(wpath): if not mutable: # if the file exists, and not mutable, return return if has_valid_checksum(path=path, reader=self.source, checksums=checksums, read_size=self.read_size): return cksum = util.checksummer(checksums) out_d = os.path.dirname(wpath) partfile = os.path.join(out_d, "%s.part" % os.path.basename(wpath)) util.mkdir_p(out_d) orig_part_size = 0 if os.path.exists(partfile): try: orig_part_size = os.path.getsize(partfile) reader.set_start_pos(orig_part_size) LOG.debug("resuming partial (%s) download of '%s' from '%s'", orig_part_size, path, partfile) with open(partfile, "rb") as fp: while True: buf = fp.read(self.read_size) cksum.update(buf) if len(buf) != self.read_size: break except NotImplementedError: # continuing not supported, just delete and retry orig_part_size = 0 os.unlink(partfile) with open(partfile, "ab") as wfp: while True: buf = reader.read(self.read_size) buflen = len(buf) if (buflen != self.read_size and zeros is not None and zeros[0:buflen] == buf): wfp.seek(wfp.tell() + buflen) elif buf == zeros: wfp.seek(wfp.tell() + buflen) else: wfp.write(buf) cksum.update(buf) if size is not None: if self.complete_callback: self.complete_callback(path, wfp.tell(), size) if wfp.tell() > size: # file is too big, so the checksum won't match; we # might as well stop downloading. 
break if buflen != self.read_size: break if zeros is not None: wfp.truncate(wfp.tell()) if not cksum.check(): os.unlink(partfile) if orig_part_size: LOG.warn("resumed download of '%s' had bad checksum.", path) msg = "unexpected checksum '%s' on %s (found: %s expected: %s)" raise Exception(msg % (cksum.algorithm, path, cksum.hexdigest(), cksum.expected)) os.rename(partfile, wpath) def remove(self, path): try: os.unlink(self._fullpath(path)) except OSError as e: if e.errno != errno.ENOENT: raise cur_d = os.path.dirname(path) prev_d = None while cur_d and cur_d != prev_d: try: os.rmdir(cur_d) except OSError as e: if e.errno not in (errno.ENOENT, errno.ENOTEMPTY): raise prev_d = cur_d cur_d = os.path.dirname(path) def source(self, path): return cs.UrlContentSource(url=self._fullpath(path)) def _fullpath(self, path): return os.path.join(self.prefix, path) def has_valid_checksum(path, reader, checksums=None, read_size=READ_BUFFER_SIZE): if checksums is None: return False cksum = util.checksummer(checksums) try: with reader(path) as rfp: while True: buf = rfp.read(read_size) cksum.update(buf) if len(buf) != read_size: break return cksum.check() except Exception: return False # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/simplestreams/objectstores/s3.py0000644000000000000000000000632312314577450022203 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . 
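# Editor's sketch (illustrative only, not part of the original package): the
# FileStore.insert() resume path defined in __init__.py keeps an interrupted
# download as '<name>.part', asks the source to continue via set_start_pos(),
# and re-feeds the already-downloaded bytes to the checksummer; a source that
# cannot seek raises NotImplementedError, and the partial file is discarded
# and refetched. The prefix and content below are hypothetical.
def _sketch_filestore_resume():
    import simplestreams.objectstores as objectstores
    import simplestreams.contentsource as cs
    import simplestreams.util as util
    store = objectstores.FileStore("/tmp/sstream-example")
    util.mkdir_p("/tmp/sstream-example/files")
    # Simulate an interrupted transfer: the first 5 bytes are already on disk.
    with open("/tmp/sstream-example/files/img.part", "wb") as fp:
        fp.write(b"hello")
    # insert() will either resume from byte 5 or, if this reader cannot seek,
    # unlink the .part file and fetch everything again; on success the .part
    # file is renamed to the final name.
    store.insert("files/img", cs.MemoryContentSource(content=b"hello world"))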
import boto.exception import boto.s3 import boto.s3.connection from contextlib import closing import errno import tempfile import simplestreams.objectstores as objectstores import simplestreams.contentsource as cs class S3ObjectStore(objectstores.ObjectStore): _bucket = None _connection = None def __init__(self, prefix): # expect 's3://bucket/path_prefix' self.prefix = prefix if prefix.startswith("s3://"): path = prefix[5:] else: path = prefix (self.bucketname, self.path_prefix) = path.split("/", 1) @property def _conn(self): if not self._connection: self._connection = boto.s3.connection.S3Connection() return self._connection @property def bucket(self): if not self._bucket: self._bucket = self._conn.get_bucket(self.bucketname) return self._bucket def insert(self, path, reader, checksums=None, mutable=True, size=None): #store content from reader.read() into path, expecting result checksum try: tfile = tempfile.TemporaryFile() while True: buf = reader.read(self.read_size) tfile.write(buf) if len(buf) != self.read_size: break tfile.seek(0) with closing(self.bucket.new_key(self.path_prefix + path)) as key: key.set_contents_from_file(tfile) finally: tfile.close() def insert_content(self, path, content, checksums=None, mutable=True): with closing(self.bucket.new_key(self.path_prefix + path)) as key: key.set_contents_from_string(content) def remove(self, path): #remove path from store self.bucket.delete_key(self.path_prefix + path) def source(self, path): # essentially return an 'open(path, r)' key = self.bucket.get_key(self.path_prefix + path) if not key: myerr = IOError("Unable to open %s" % path) myerr.errno = errno.ENOENT raise myerr return cs.FdContentSource(fd=key, url=self.path_prefix + path) def exists_with_checksum(self, path, checksums=None): key = self.bucket.get_key(self.path_prefix + path) if key is None: return False if checksums and 'md5' in checksums: return checksums['md5'] == key.etag.replace('"', "") return False # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/simplestreams/objectstores/swift.py0000644000000000000000000001221612314577450023010 0ustar 00000000000000# Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see <http://www.gnu.org/licenses/>. import simplestreams.objectstores as objectstores import simplestreams.contentsource as cs import simplestreams.openstack as openstack import errno import hashlib from swiftclient import Connection, ClientException def get_swiftclient(**kwargs): # nmap has entries that need name changes from a 'get_service_conn_info' # to a swift Connection name.
# pt has names that pass straight through nmap = {'endpoint': 'preauthurl', 'token': 'preauthtoken'} pt = ('insecure', 'cacert') connargs = {v: kwargs.get(k) for k, v in nmap.items() if k in kwargs} connargs.update({k: kwargs.get(k) for k in pt if k in kwargs}) return Connection(**connargs) class SwiftContentSource(cs.IteratorContentSource): def is_enoent(self, exc): return is_enoent(exc) class SwiftObjectStore(objectstores.ObjectStore): def __init__(self, prefix, region=None): # expect 'swift://bucket/path_prefix' self.prefix = prefix if prefix.startswith("swift://"): path = prefix[8:] else: path = prefix (self.container, self.path_prefix) = path.split("/", 1) super(SwiftObjectStore, self).__init__() self.keystone_creds = openstack.load_keystone_creds() if region is not None: self.keystone_creds['region_name'] = region conn_info = openstack.get_service_conn_info('object-store', **self.keystone_creds) self.swiftclient = get_swiftclient(**conn_info) # http://docs.openstack.org/developer/swift/misc.html#acls self.swiftclient.put_container(self.container, headers={'X-Container-Read': '.r:*,.rlistings'}) def insert(self, path, reader, checksums=None, mutable=True, size=None): #store content from reader.read() into path, expecting result checksum self._insert(path=path, contents=reader, checksums=checksums, mutable=mutable) def insert_content(self, path, content, checksums=None, mutable=True): self._insert(path=path, contents=content, checksums=checksums, mutable=mutable) def remove(self, path): self.swiftclient.delete_object(container=self.container, obj=self.path_prefix + path) def source(self, path): def itgen(): (_headers, iterator) = self.swiftclient.get_object( container=self.container, obj=self.path_prefix + path, resp_chunk_size=self.read_size) return iterator return SwiftContentSource(itgen=itgen, url=self.prefix + path) def exists_with_checksum(self, path, checksums=None): return headers_match_checksums(self._head_path(path), checksums) def _head_path(self, path): try: headers = self.swiftclient.head_object(container=self.container, obj=self.path_prefix + path) except Exception as exc: if is_enoent(exc): return {} raise return headers def _insert(self, path, contents, checksums=None, mutable=True, size=None): # content is a ContentSource or a string headers = self._head_path(path) if headers: if not mutable: return if headers_match_checksums(headers, checksums): return insargs = {'container': self.container, 'obj': self.path_prefix + path, 'contents': contents} if size is not None and isinstance(contents, str): size = len(contents) if size is not None: insargs['content_length'] = size if checksums and checksums.get('md5'): insargs['etag'] = checksums.get('md5') elif isinstance(contents, str): insargs['etag'] = hashlib.md5(contents).hexdigest() self.swiftclient.put_object(**insargs) def headers_match_checksums(headers, checksums): if not (headers and checksums): return False if ('md5' in checksums and headers.get('etag') == checksums.get('md5')): return True return False def is_enoent(exc): return ((isinstance(exc, IOError) and exc.errno == errno.ENOENT) or (isinstance(exc, ClientException) and exc.http_status == 404)) # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/tests/__init__.py0000644000000000000000000000003612314577450017154 0ustar 00000000000000#ts=4 expandtab syntax=python simplestreams-0.1.0~bzr341/tests/testutil.py0000644000000000000000000000120712314577450017273 0ustar 00000000000000import os from simplestreams import objectstores from simplestreams import mirrors 
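# Editor's sketch (illustrative only, not part of the original tests):
# get_mirror_reader(), defined below, wraps one of the trees under examples/
# in an ObjectStoreMirrorReader; unless signed=True is requested, a
# pass-through policy is supplied so the example data can be read without
# GPG checks. Usage mirroring the unit tests (e.g. test_signed_data):
def _sketch_read_example_index():
    reader = get_mirror_reader("foocloud")
    return reader.read_json("streams/v1/index.sjson")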
EXAMPLES_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "examples")) def get_mirror_reader(name, docdir=None, signed=False): if docdir is None: docdir = EXAMPLES_DIR src_d = os.path.join(EXAMPLES_DIR, name) sstore = objectstores.FileStore(src_d) def policy(content, path): # pylint: disable=W0613 return content kwargs = {} if signed else {"policy": policy} return mirrors.ObjectStoreMirrorReader(sstore, **kwargs) # vi: ts=4 expandtab syntax=python simplestreams-0.1.0~bzr341/tests/unittests/0000755000000000000000000000000012314577450017106 5ustar 00000000000000simplestreams-0.1.0~bzr341/tests/unittests/__init__.py0000644000000000000000000000000012314577450021205 0ustar 00000000000000simplestreams-0.1.0~bzr341/tests/unittests/test_command_hook_mirror.py0000644000000000000000000000401212314577450024544 0ustar 00000000000000from unittest import TestCase import simplestreams.mirrors.command_hook as chm from tests.testutil import get_mirror_reader class TestCommandHookMirror(TestCase): """Test of CommandHookMirror.""" def setUp(self): self._run_commands = [] def test_init_without_load_stream_fails(self): self.assertRaises(TypeError, chm.CommandHookMirror, {}) def test_init_with_load_products_works(self): _mirror = chm.CommandHookMirror({'load_products': 'true'}) def test_stream_load_empty(self): src = get_mirror_reader("foocloud") target = chm.CommandHookMirror({'load_products': ['true']}) oruncmd = chm.run_command try: chm.run_command = self._run_command target.sync(src, "streams/v1/index.json") finally: chm.run_command = oruncmd # the 'load_products' should be called once for each content # in the stream. self.assertEqual(self._run_commands, [['true'], ['true']]) def test_stream_insert_product(self): src = get_mirror_reader("foocloud") target = chm.CommandHookMirror( {'load_products': ['load-products'], 'insert_products': ['insert-products']}) oruncmd = chm.run_command try: chm.run_command = self._run_command target.sync(src, "streams/v1/index.json") finally: chm.run_command = oruncmd # the 'load_products' should be called once for each content # in the stream. 
same for 'insert-products' self.assertEqual(len([f for f in self._run_commands if f == ['load-products']]), 2) self.assertEqual(len([f for f in self._run_commands if f == ['insert-products']]), 2) def _run_command(self, cmd, env=None, capture=False, rcs=None): _pylint = (env, capture, rcs) self._run_commands.append(cmd) rc = 0 output = '' return (rc, output) # vi: ts=4 expandtab syntax=python simplestreams-0.1.0~bzr341/tests/unittests/test_contentsource.py0000644000000000000000000000611712314577450023417 0ustar 00000000000000import random import shutil import tempfile from os.path import join from simplestreams import objectstores from simplestreams import contentsource from subprocess import Popen, PIPE from unittest import TestCase from nose.tools import raises class RandomPortServer(object): def __init__(self, path): self.path = path self.process = None self.port = None def __enter__(self): for _ in range(10): port = random.randrange(40000, 65000) p = Popen(['python', '-u', '-m', 'SimpleHTTPServer', str(port)], cwd=self.path, stdout=PIPE) # wait for the HTTP server to start up while True: line = p.stdout.readline() # pylint: disable=E1101 if b'Serving HTTP' in line: self.port = port self.process = p return self def __exit__(self, _type, value, tb): _pylint = value, tb self.process.kill() # pylint: disable=E1101 class TestResume(TestCase): def setUp(self): self.target = tempfile.mkdtemp() self.source = tempfile.mkdtemp() with open(join(self.target, 'foo.part'), 'wb') as f: f.write(b'hello') with open(join(self.source, 'foo'), 'wb') as f: f.write(b'hello world\n') def tearDown(self): shutil.rmtree(self.target) shutil.rmtree(self.source) def test_binopen_seek(self): tcs = objectstores.FileStore(self.target) scs = contentsource.UrlContentSource('file://%s/foo' % self.source) tcs.insert('foo', scs) with open(join(self.target, 'foo'), 'rb') as f: contents = f.read() assert contents == b'hello world\n', contents def test_url_seek(self): with RandomPortServer(self.source) as server: tcs = objectstores.FileStore(self.target) loc = 'http://localhost:%d/foo' % server.port scs = contentsource.UrlContentSource(loc) tcs.insert('foo', scs) with open(join(self.target, 'foo'), 'rb') as f: contents = f.read() # Unfortunately, SimpleHTTPServer doesn't support the Range # header, so we get two 'hello's. 
assert contents == b'hellohello world\n', contents @raises(Exception) def test_post_open_set_start_pos(self): cs = contentsource.UrlContentSource('file://%s/foo' % self.source) cs.open() cs.set_start_pos(1) def test_percent_callback(self): data = {'dld': 0} def handler(path, downloaded, total): _pylint = path, total data['dld'] = downloaded with RandomPortServer(self.source) as server: tcs = objectstores.FileStore(self.target, complete_callback=handler) loc = 'http://localhost:%d/foo' % server.port scs = contentsource.UrlContentSource(loc) tcs.insert('foo', scs, size=len('hellohello world')) assert data['dld'] > 0 # just make sure it was called simplestreams-0.1.0~bzr341/tests/unittests/test_mirrorwriters.py0000644000000000000000000000066212314577450023455 0ustar 00000000000000from tests.testutil import get_mirror_reader from simplestreams.mirrors import DryRunMirrorWriter from simplestreams.objectstores import MemoryObjectStore def test_DryRunMirrorWriter_foocloud_no_filters(): src = get_mirror_reader("foocloud") config = {} objectstore = MemoryObjectStore(None) target = DryRunMirrorWriter(config, objectstore) target.sync(src, "streams/v1/index.json") assert target.size == 886 simplestreams-0.1.0~bzr341/tests/unittests/test_resolvework.py0000644000000000000000000001044212314577450023102 0ustar 00000000000000from unittest import TestCase from simplestreams.util import resolve_work from simplestreams.objectstores import MemoryObjectStore from simplestreams.mirrors import ObjectStoreMirrorWriter from simplestreams.filters import filter_item, ItemFilter from tests.testutil import get_mirror_reader class TestStreamResolveWork(TestCase): def tryit(self, src, target, maxnum=None, keep=False, itemfilter=None, add=None, remove=None): if add is None: add = [] if remove is None: remove = [] (r_add, r_remove) = resolve_work(src, target, maxnum=maxnum, keep=keep, itemfilter=itemfilter) self.assertEqual(r_add, add) self.assertEqual(r_remove, remove) def test_keep_with_max_none_is_exception(self): self.assertRaises(TypeError, resolve_work, [1], [2], None, True) def test_full_replace(self): src = [10, 9, 8] target = [7, 6, 5] self.tryit(src=src, target=target, add=src, remove=[5, 6, 7]) def test_only_new_with_max(self): self.tryit(src=[10, 9, 8], target=[7, 6, 5], add=[10, 9], remove=[5, 6, 7], maxnum=2) def test_only_new_with_keep(self): self.tryit(src=[10, 9, 8], target=[7, 6, 5], add=[10, 9, 8], remove=[5, 6], maxnum=4, keep=True) def test_only_remove(self): self.tryit(src=[3], target=[3, 2, 1], add=[], remove=[1, 2]) def test_only_remove_with_keep(self): self.tryit(src=[3], target=[3, 2, 1], add=[], remove=[], maxnum=3, keep=True) def test_only_remove_with_max(self): self.tryit(src=[3], target=[3, 2, 1], add=[], remove=[1, 2], maxnum=2) def test_only_remove_with_no_max(self): self.tryit(src=[3], target=[3, 2, 1], add=[], remove=[1, 2], maxnum=None) def test_null_remote_without_keep(self): self.tryit(src=[], target=[3, 2, 1], add=[], remove=[1, 2, 3]) def test_null_remote_with_keep(self): self.tryit(src=[], target=[3, 2, 1], maxnum=3, keep=True, add=[], remove=[]) def test_null_remote_without_keep_with_maxnum(self): self.tryit(src=[], target=[3, 2, 1], maxnum=3, keep=False, add=[], remove=[1, 2, 3]) def test_max_forces_remove(self): self.tryit(src=[2, 1], target=[2, 1], maxnum=1, keep=False, add=[], remove=[1]) def test_nothing_needed_with_max(self): self.tryit(src=[1], target=[1], maxnum=1, keep=False, add=[], remove=[]) def test_filtered_items_not_present(self): self.tryit(src=[1, 2, 3, 4, 5], 
target=[1], maxnum=None, keep=False, itemfilter=lambda a: a < 3, add=[2], remove=[]) def test_max_and_target_has_newest(self): self.tryit(src=[1, 2, 3, 4], target=[4], maxnum=1, keep=False, add=[], remove=[]) def test_unordered_target_input(self): self.tryit(src=['20121026.1', '20120328', '20121001'], target=['20121001', '20120328', '20121026.1'], maxnum=2, keep=False, add=[], remove=['20120328']) def test_reduced_max(self): self.tryit(src=[9, 5, 8, 4, 7, 3, 6, 2, 1], target=[9, 8, 7, 6, 5], maxnum=4, keep=False, add=[], remove=[5]) def test_foocloud_multiple_paths_remove(self): config = {'delete_filtered_items': True} memory = ObjectStoreMirrorWriter(config, MemoryObjectStore(None)) foocloud = get_mirror_reader("foocloud") memory.sync(foocloud, "streams/v1/index.json") # We sync'd, now we'll sync everything that doesn't have the samepaths # tag. samepaths reuses some paths, and so if we try and delete # anything here that would be wrong. filters = [ItemFilter("version_name!=samepaths")] def no_samepaths(data, src, _target, pedigree): return filter_item(filters, data, src, pedigree) def dont_remove(*_args): # This shouldn't be called, because we are smart and do "reference # counting". assert False memory.filter_version = no_samepaths memory.store.remove = dont_remove memory.sync(foocloud, "streams/v1/index.json") # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/tests/unittests/test_signed_data.py0000644000000000000000000000225212314577450022762 0ustar 00000000000000import shutil import subprocess import tempfile from nose.tools import raises from os.path import join from simplestreams import mirrors from simplestreams import objectstores from simplestreams.util import SignatureMissingException from tests.testutil import get_mirror_reader, EXAMPLES_DIR def _tmp_reader(): sstore = objectstores.FileStore(tempfile.gettempdir()) return mirrors.ObjectStoreMirrorReader(sstore) @raises(subprocess.CalledProcessError) def test_read_bad_data(): good = join(EXAMPLES_DIR, "foocloud", "streams", "v1", "index.sjson") bad = join(tempfile.gettempdir(), "index.sjson") shutil.copy(good, bad) with open(bad, 'r+') as f: lines = f.readlines() f.truncate() f.seek(0) for line in lines: f.write(line.replace('foovendor', 'attacker')) _tmp_reader().read_json("index.sjson") @raises(SignatureMissingException) def test_read_unsigned(): # empty files aren't signed open(join(tempfile.gettempdir(), 'index.json'), 'w').close() _tmp_reader().read_json("index.json") def test_read_signed(): reader = get_mirror_reader("foocloud") reader.read_json("streams/v1/index.sjson") simplestreams-0.1.0~bzr341/tests/unittests/test_util.py0000644000000000000000000001624212314577450021501 0ustar 00000000000000# pylint: disable=C0301 from simplestreams import util from copy import deepcopy from unittest import TestCase class TestProductsSet(TestCase): def test_product_exists(self): tree = {'products': {'P1': {"F1": "V1"}}} util.products_set(tree, {'F2': 'V2'}, ('P1',)) self.assertEqual(tree, {'products': {'P1': {'F2': 'V2'}}}) def test_product_no_exists(self): tree = {'products': {'A': 'B'}} util.products_set(tree, {'F1': 'V1'}, ('P1',)) self.assertEqual(tree, {'products': {'A': 'B', 'P1': {'F1': 'V1'}}}) def test_product_no_products_tree(self): tree = {} util.products_set(tree, {'F1': 'V1'}, ('P1',)) self.assertEqual(tree, {'products': {'P1': {'F1': 'V1'}}}) def test_version_exists(self): tree = {'products': {'P1': {'versions': {'FOO': {'1': 'one'}}}}} util.products_set(tree, {'2': 'two'}, ('P1', 'FOO')) self.assertEqual(tree, {'products': 
{'P1': {'versions': {'FOO': {'2': 'two'}}}}}) def test_version_no_exists(self): tree = {'products': {'P1': {'versions': {'BAR': {'1': 'one'}}}}} util.products_set(tree, {'2': 'two'}, ('P1', 'FOO')) d = {'products': {'P1': {'versions': {'BAR': {'1': 'one'}, 'FOO': {'2': 'two'}}}}} self.assertEqual(tree, d) def test_item_exists(self): items = {'item1': {'f1': '1'}} tree = {'products': {'P1': {'versions': {'VBAR': {'1': 'one', 'items': items}}}}} mnew = {'f2': 'two'} util.products_set(tree, mnew, ('P1', 'VBAR', 'item1',)) expvers = {'VBAR': {'1': 'one', 'items': {'item1': mnew}}} self.assertEqual(tree, {'products': {'P1': {'versions': expvers}}}) def test_item_no_exists(self): items = {'item1': {'f1': '1'}} tree = {'products': {'P1': { 'versions': {'V1': {'VF1': 'VV1', 'items': items}} }}} util.products_set(tree, {'f2': '2'}, ('P1', 'V1', 'item2',)) expvers = {'V1': {'VF1': 'VV1', 'items': {'item1': {'f1': '1'}, 'item2': {'f2': '2'}}}} self.assertEqual(tree, {'products': {'P1': {'versions': expvers}}}) class TestProductsDel(TestCase): def test_product_exists(self): tree = {'products': {'P1': {"F1": "V1"}}} util.products_del(tree, ('P1',)) self.assertEqual(tree, {'products': {}}) def test_product_no_exists(self): ptree = {'P1': {'F1': 'V1'}} tree = {'products': deepcopy(ptree)} util.products_del(tree, ('P2',)) self.assertEqual(tree, {'products': ptree}) def test_version_exists(self): otree = {'products': { 'P1': {"F1": "V1"}, 'P2': {'versions': {'VER1': {'X1': 'X2'}}} }} tree = deepcopy(otree) util.products_del(tree, ('P2', 'VER1')) del otree['products']['P2']['versions']['VER1'] self.assertEqual(tree, otree) def test_version_no_exists(self): otree = {'products': { 'P1': {"F1": "V1"}, 'P2': {'versions': {'VER1': {'X1': 'X2'}}} }} tree = deepcopy(otree) util.products_del(tree, ('P2', 'VER2')) self.assertEqual(tree, otree) def test_item_exists(self): otree = {'products': { 'P1': {"F1": "V1"}, 'P2': {'versions': {'VER1': {'X1': 'X2', 'items': {'ITEM1': {'IF1': 'IV2'}}}}} }} tree = deepcopy(otree) del otree['products']['P2']['versions']['VER1']['items']['ITEM1'] util.products_del(tree, ('P2', 'VER1', 'ITEM1')) self.assertEqual(tree, otree) def test_item_no_exists(self): otree = {'products': { 'P1': {"F1": "V1"}, 'P2': {'versions': {'VER1': {'X1': 'X2', 'items': {'ITEM1': {'IF1': 'IV2'}}}}} }} tree = deepcopy(otree) util.products_del(tree, ('P2', 'VER1', 'ITEM2')) self.assertEqual(tree, otree) class TestProductsPrune(TestCase): def test_products_empty(self): tree = {'products': {}} util.products_prune(tree) self.assertEqual(tree, {}) def test_products_not_empty(self): tree = {'products': {'fooproduct': {'a': 'b'}}} util.products_prune(tree) self.assertEqual(tree, {}) def test_has_item(self): otree = {'products': {'P1': {'versions': {'V1': {'items': {'I1': 'I'}}}}}} tree = deepcopy(otree) util.products_prune(tree) self.assertEqual(tree, otree) def test_deletes_one_version_leaves_one(self): versions = {'V1': {'items': {}}, 'V2': {'items': {'I1': 'I'}}} otree = {'products': {'P1': {'versions': versions}}} tree = deepcopy(otree) util.products_prune(tree) del otree['products']['P1']['versions']['V1'] self.assertEqual(tree, otree) class TestProductsCondense(TestCase): def test_condense_1(self): tree = {'products': {'P1': {'versions': {'1': {'A': 'B'}, '2': {'A': 'B'}}}}} exp = {'products': {'P1': {'versions': {'1': {}, '2': {}}, 'A': 'B'}}} util.products_condense(tree) self.assertEqual(tree, exp) def test_condense_different_arch(self): tree = {'products': {'P1': {'versions': {'1': {'items': {'thing1': 
{'arch': 'amd64'}, 'thing2': {'arch': 'amd64'}}}, '2': {'items': {'thing3': {'arch': 'i3867'}}}}}}} exp = {'products': {'P1': {'versions': {'1': {'arch': 'amd64', 'items': {'thing1': {}, 'thing2': {}}}, '2': {'arch': 'i3867', 'items': {'thing3': {}}}}}}} util.products_condense(tree) self.assertEqual(tree, exp) def test_repeats_removed(self): tree = {'products': {'P1': {'A': 'B', 'versions': {'1': {'A': 'B'}, '2': {'A': 'B'}}}}} exp = {'products': {'P1': {'versions': {'1': {}, '2': {}}, 'A': 'B'}}} util.products_condense(tree) self.assertEqual(tree, exp) def test_nonrepeats_stay(self): tree = {'products': {'P1': {'A': 'C', 'versions': {'1': {'A': 'B'}, '2': {'A': 'B'}}}}} exp = {'products': {'P1': {'A': 'C', 'versions': {'1': {'A': 'B'}, '2': {'A': 'B'}}}}} util.products_condense(tree) self.assertEqual(tree, exp) # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/tests/unittests/tests_filestore.py0000644000000000000000000000465312314577450022706 0ustar 00000000000000import shutil import tempfile import os from simplestreams import objectstores from simplestreams import mirrors from tests.testutil import get_mirror_reader from unittest import TestCase FOOCLOUD_FILE = ("files/release-20121026.1/" "foovendor-6.1-server-cloudimg-amd64.tar.gz") class TestResumePartDownload(TestCase): def setUp(self): self.target = tempfile.mkdtemp() def tearDown(self): shutil.rmtree(self.target) def test_mirror_resume(self): # test mirror resuming from filestore smirror = get_mirror_reader("foocloud") # as long as this is less than size of file, its valid part_size = 10 # create a valid .part file tfile = os.path.join(self.target, FOOCLOUD_FILE) os.makedirs(os.path.dirname(tfile)) with open(tfile + ".part", "wb") as fw: with smirror.source(FOOCLOUD_FILE) as fr: fw.write(fr.read(part_size)) target_objstore = objectstores.FileStore(self.target) tmirror = mirrors.ObjectStoreMirrorWriter(config=None, objectstore=target_objstore) tmirror.sync(smirror, "streams/v1/index.json") # the part file should have been cleaned up. 
If this fails, then # likely the part file wasn't used, and this test is no longer valid self.assertFalse(os.path.exists(tfile + ".part")) def test_corrupted_mirror_resume(self): # test corrupted .part file is caught smirror = get_mirror_reader("foocloud") # create a corrupt .part file tfile = os.path.join(self.target, FOOCLOUD_FILE) os.makedirs(os.path.dirname(tfile)) with open(tfile + ".part", "w") as fw: # just write some invalid data fw.write("--bogus--") target_objstore = objectstores.FileStore(self.target) tmirror = mirrors.ObjectStoreMirrorWriter(config=None, objectstore=target_objstore) self.assertRaisesRegexp(Exception, r".*%s.*" % FOOCLOUD_FILE, tmirror.sync, smirror, "streams/v1/index.json") # now the .part file should be removed, and trying again should succeed self.assertFalse(os.path.exists(tfile + ".part")) tmirror.sync(smirror, "streams/v1/index.json") self.assertFalse(os.path.exists(tfile + ".part")) simplestreams-0.1.0~bzr341/tools/build-deb0000755000000000000000000000357612314577450016632 0ustar 00000000000000#!/bin/sh set -e sourcename="simplestreams" TEMP_D="" UNCOMMITTED=${UNCOMMITTED:-0} fail() { echo "$@" 1>&2; exit 1; } cleanup() { [ -z "$TEMP_D" ] || rm -Rf "$TEMP_D" } if [ "$1" = "-h" -o "$1" = "--help" ]; then cat <&2; exit 1; } cleanup() { [ -z "$TEMP_D" ] || rm -Rf "$TEMP_D" } export_uncommitted="" if [ "${UNCOMMITTED:-0}" != "0" ]; then export_uncommitted="--uncommitted" fi [ "$1" = "-h" -o "$1" = "--help" ] && { Usage; exit 0; } TEMP_D=$(mktemp -d) trap cleanup EXIT case "${1:-HEAD}" in tag:*) version="${1#tag:}";; HEAD) revno="$(bzr revno)"; revargs="-r $revno";; [0-9]*) revno="$1" ; revargs="-r $1";; esac output="$2" if [ -z "$version" ]; then bzr cat $revargs debian/changelog.trunk > "$TEMP_D/clog" || fail "failed to extract debian/change.log.trunk at $revargs" clogver_o=$(sed -n '1s,.*(\([^)]*\)).*,\1,p' $TEMP_D/clog) clogver_upstream=${clogver_o%%-*} mmm=${clogver_o%%~*} version="$mmm~bzr$revno" fi if [ -z "$output" ]; then output="$sourcename-$version.tar.gz" fi bzr export ${export_uncommitted} \ --format=tgz --root="$sourcename-${version}" $revargs $output echo "wrote $output" simplestreams-0.1.0~bzr341/tools/gen-example-key0000755000000000000000000000347612314577450017772 0ustar 00000000000000#!/bin/sh set -f TEMP_D="" cleanup() { [ -z "${TEMP_D}" ] || rm -Rf "${TEMP_D}" } Usage() { cat <&2; } out_pub=${1} out_sec=${2} [ "$1" = "-h" -o "$1" = "--help" ] && { Usage; exit 0; } [ $# -eq 2 ] || { Usage 1>&2; fail "expect 2 args"; } [ "${out_pub#/}" = "${out_pub}" -a "${out_pub#./}" = "${out_pub}" ] && out_pub="./${out_pub}" [ "${out_sec#/}" = "${out_sec}" -a "${out_sec#./}" = "${out_sec}" ] && out_sec="./${out_sec}" error "writing to ${out_pub} and ${out_sec}" TEMP_D=$(mktemp -d "${TEMPDIR:-/tmp}/${0##*/}.XXXXXX") trap cleanup EXIT # so your local gpg will not be modified export HOME="${TEMP_D}" export GNUPGHOME="${TEMP_D}/gnupg" ( umask 077 && mkdir $GNUPGHOME ) bfile="${TEMP_D}/batch" tpub="${TEMP_D}/out.pub" tsec="${TEMP_D}/out.sec" cat > "$bfile" <&1) || fail "failed to initialize gpg dir: $out" out=$(gpg --batch --gen-key "$bfile" 2>&1) || fail "failed to generate key in batch mode:$out" topts="--no-default-keyring --secret-keyring ${tsec} --keyring ${tpub}" gpg $topts --armor --export-secret-keys > "${TEMP_D}/secret" || fail "failed to export secret key to armor" gpg $topts --armor --export > "${TEMP_D}/public" || fail "failed to export public key to armor" cp "${TEMP_D}/public" "${out_pub}" && cp "${TEMP_D}/secret" "${out_sec}" || fail "failed to 
copy output" gpg $out_pub simplestreams-0.1.0~bzr341/tools/gpg-trust-pubkey0000755000000000000000000000155312314577450020225 0ustar 00000000000000#!/bin/sh Usage() { cat <&2; } fail() { [ $# -eq 0 ] || error "$@"; exit 2; } [ "$1" = "-h" -o "$1" = "--help" ] && { Usage; exit; } [ $# -eq 1 ] || { Usage 1>&2; fail "must give only one arg"; } [ -f "$1" ] || fail "$1: not a file" pubkey="$1" fp=$(gpg --quiet --with-fingerprint --with-colons "$pubkey" 2>/dev/null | awk -F: '$1 == "fpr" {print $10}') || fail "failed to read fingerprint of $pubkey" out=$(gpg --import "$pubkey" 2>&1) || { error "import of pubkey failed:"; error "$out"; fail; } out=$(echo "${fp}:6:" | gpg --import-ownertrust 2>&1) || { error "failed import-ownertrust for $fp"; fail "$out"; } echo "imported $pubkey. fingerprint=$fp" gpg --quiet --list-key "$fp" 2>/dev/null simplestreams-0.1.0~bzr341/tools/hook-check-downloads0000755000000000000000000000705612314577450021003 0ustar 00000000000000#!/bin/sh # Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . RC_FILTER_INCLUDE=0 RC_FILTER_EXCLUDE=1 RC_FAIL=2 BAD_URL="BAD_URL:" TMPF="" error() { echo "$@" 1>&2; } debug() { error "$@"; } fail() { echo "FAIL:" "$@" 1>&2; exit $RC_FAIL; } badurl() { echo "$BAD_URL" "$@"; exit 0; } cleanup() { [ -z "$TMPF" ] || rm -f "$TMPF"; } do_check() { local self="${0}" local ret="" count="" local print_only=false if [ "$1" = "--print-only" ]; then print_only=true export _PRINT_ONLY=1 shift fi TMPF=$(mktemp ${TMPDIR:-/tmp}/${0##*/}.XXXXXX) || fail "failed to make tempfile" trap cleanup EXIT ( sstream-sync ${2:+"--path=${2}"} "$1" \ "--hook-load-products=${self}" "--hook-filter-index-entry=${self}" \ "--hook-insert-item=${self}" --item-skip-download; echo $? ) | tee "$TMPF" ret=$(tail -n -1 "$TMPF") [ $ret -eq 0 ] || fail "odd failure [$ret]" count=$(grep -c "^$BAD_URL" "$TMPF") if [ "$count" != "0" ]; then fail "found '$count' bad urls" fi exit 0 } Usage() { cat </dev/null;; */*) [ -f "$1" ];; *) error "unknown protocol: ${item_url}"; return 2;; esac } case "$HOOK" in filter_index_entry) # skip streams or groups are not download if [ "$datatype" = "image-downloads" ]; then debug "== including $content_id: $datatype ==" rc=$RC_FILTER_INCLUDE else debug "== skipping $content_id: $datatype ==" rc=$RC_FILTER_EXCLUDE; fi exit $rc ;; load_products) debug "== load_products: $content_id (no-op) ==" exit 0;; insert_item) info="${content_id}/${product_name}/${version_name}/${item_name}" if [ -z "$item_url" ]; then fail "${info}: empty item url!" 
fi if [ "${_PRINT_ONLY:-0}" != "0" ]; then debug "$info ${item_url}" else if staturl "${item_url}"; then debug "$info: ${item_url}" else badurl "$info: ${item_url}" fi fi exit 0;; *) if [ -n "$HOOK" ]; then fail "unsupported hook '$HOOK'" else [ "$1" = "-h" -o "$1" = "--help" ] && { Usage; exit 0; } [ "$#" -eq 0 -o "$#" -gt 2 ] && { Usage 1>&2; exit 1; } do_check "$@" fi ;; esac # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/tools/hook-debug0000755000000000000000000000336012314577450017016 0ustar 00000000000000#!/bin/bash # Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . set -f if [ "$1" = "--help" -o "$1" = "usage" ]; then cat <&2 case "$HOOK" in filter_*) [ "$OP" = "keep" ] && exit 0; [ "$OP" = "skip" ] && exit 1; exit 2 ;; esac exit 0 simplestreams-0.1.0~bzr341/tools/hook-glance0000755000000000000000000001261512314577450017164 0ustar 00000000000000#!/bin/bash # Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . set -f RC_FILTER_INCLUDE=0 RC_FILTER_EXCLUDE=1 RC_FAIL=2 VERBOSITY=${VERBOSITY:-0} error() { echo "$@" 1>&2; } fail() { [ $# -eq 0 ] || error "$@"; exit "$RC_FAIL"; } debug() { [ "${VERBOSITY}" -lt "$1" ] && return shift error "$@" } Usage() { cat <&2; fail "HOOK not available in environment"; } # we only operate on case "$HOOK" in filter_item|filter_index_entry|filter_product|\ insert_item|load_products) "$HOOK";; filter_*) return "${RC_FILTER_INCLUDE}";; *) noop;; esac } main "$@" exit $? # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/tools/hook-image-id0000755000000000000000000001035112314577450017402 0ustar 00000000000000#!/bin/bash # Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . 
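# Editor's note (illustrative, not part of the original tool): hook-image-id
# is self-referential -- invoked with HOOK set in the environment it acts as
# an sstream-sync hook; invoked normally it runs sstream-sync with every
# --hook-* option pointing back at this script, printing one line per
# matching image id in the chosen --fmt. A hypothetical invocation (mirror
# URL, path, and key=value criteria are examples only):
#
#   tools/hook-image-id --fmt '${id}' \
#       --path streams/v1/index.sjson \
#       http://cloud-images.ubuntu.com/releases/ \
#       release=trusty arch=amd64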
set -f RC_FILTER_INCLUDE=0 RC_FILTER_EXCLUDE=1 RC_FAIL=2 VERBOSITY=${VERBOSITY:-0} DEFAULT_OUTPUT_FORMAT='${product_name} ${version_name} ${item_name} ${id}' error() { echo "$@" 1>&2; } fail() { [ $# -eq 0 ] || error "$@"; exit "$RC_FAIL"; } debug() { [ "${VERBOSITY}" -lt "$1" ] && return shift error "$@" } Usage() { cat <&2; return 1; } pt=( ) while [ $# -ne 0 ]; do cur="$1"; next="$2"; case "$cur" in -h|--help) Usage ; return 0;; -v|--verbose) pt[${#pt[@]}]="$cur";; --max) pt[${#pt[@]}]="--max=${next}"; shift;; -p|--path) path=$next; shift;; -f|--fmt) fmt=$next; shift;; --) shift; break;; esac shift; done url="$1" shift [ -n "$url" ] || { Usage 1>&2; error "Must provide url"; return 1; } [ -n "$path" ] && pt[${#pt[@]}]="--path=$path" pt[${#pt[@]}]="$url" export _OUTPUT_FORMAT="$fmt" _CRITERIA="${*}" sstream-sync \ "--hook-load-products=${self}" "--hook-filter-index-entry=${self}" \ "--hook-insert-item=${self}" "--hook-filter-item=${self}" "${pt[@]}" } main() { # we only operate on case "$HOOK" in filter_item) is_excluded && return "${RC_FILTER_EXCLUDE}" return "${RC_FILTER_INCLUDE}" ;; filter_index_entry) [ "$format" = "products:1.0" ] && [ "$datatype" = "image-ids" ] && return "${RC_FILTER_INCLUDE}" return "$RC_FILTER_EXCLUDE" ;; insert_item) local k="" out="" if [ "$_OUTPUT_FORMAT" = "debug" ]; then for k in ${FIELDS}; do out="${out} ${k}=${!k}" done else eval out="\"${_OUTPUT_FORMAT}\"" fi echo "$out" ;; filter_*) return "${RC_FILTER_INCLUDE}";; *) if [ -n "$HOOK" ]; then noop else [ "$1" = "--help" -o "$1" = "-h" -o "$1" = "usage" ] && { Usage; return 0; } call_sync "$@" fi esac } main "$@" exit $? # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/tools/js2signed0000755000000000000000000000305312314577450016661 0ustar 00000000000000#!/usr/bin/python # Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . import os import os.path import sys import toolutil def status_cb(fname): sys.stderr.write("%s\n" % fname) def main(): for path in sys.argv[1:]: if os.path.isfile(path): if not path.endswith(".json"): sys.stderr.write("file must end with .json\n") sys.exit(1) toolutil.signjson_file(path) elif os.path.isdir(path): for root, _dirs, files in os.walk(path): for f in [f for f in files if f.endswith(".json")]: toolutil.signjson_file(os.path.join(root, f), status_cb=status_cb) else: sys.stderr.write("input must be file or dir\n") sys.exit(1) if __name__ == '__main__': main() simplestreams-0.1.0~bzr341/tools/make-test-data0000755000000000000000000003471612314577450017604 0ustar 00000000000000#!/usr/bin/python # Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. 
# # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . import argparse import json import os import os.path import sys from simplestreams import util try: # this is just python2 or python3 compatible prepping for get_url_len import urllib.request url_request = urllib.request.Request url_open = urllib.request.urlopen except ImportError as e: import urllib2 url_request = urllib2.Request url_open = urllib2.urlopen import toolutil # could support reading from other mirrors # for example: # http://cloud-images-archive.ubuntu.com/ # file:///srv/ec2-images # BASE_URLS = ("http://cloud-images.ubuntu.com/",) FAKE_DATA = { 'root.tar.gz': { 'size': 10240, 'md5': '1276481102f218c981e0324180bafd9f', 'sha256': '84ff92691f909a05b224e1c56abb4864f01b4f8e3c854e4bb4c7baf1d3f6d652'}, 'tar.gz': { 'size': 11264, 'md5': '820a81e0916bac82838fd7e74ab29b15', 'sha256': '5309e677c79cffae49a65728c61b436d3cdc2a2bab4c81bf0038415f74a56880'}, 'disk1.img': { 'size': 12288, 'md5': '4072783b8efb99a9e5817067d68f61c6', 'sha256': 'f3cc103136423a57975750907ebc1d367e2985ac6338976d4d5a439f50323f4a'}, 'uefi1.img': { 'size': 12421, 'md5': 'd41d8cd98f00b204e9800998ecf8427e', 'sha256': '8ca9c39f2200d299b011f5018c9d27a5a70f5a6b4c24f2fe06a94bc0e8c1213f'}, } UBUNTU_RDNS = "com.ubuntu.cloud" REAL_DATA = os.environ.get("REAL_DATA", False) if REAL_DATA and REAL_DATA != "0": REAL_DATA = True else: REAL_DATA = False FILE_DATA = {} def get_cache_data(path, field): dirname = os.path.dirname(path) bname = os.path.basename(path) return FILE_DATA.get(dirname, {}).get(bname, {}).get(field) def store_cache_data(path, field, value): dirname = os.path.dirname(path) bname = os.path.basename(path) if dirname not in FILE_DATA: FILE_DATA[dirname] = {} if bname not in FILE_DATA[dirname]: FILE_DATA[dirname][bname] = {} FILE_DATA[dirname][bname][field] = value def save_cache(): if FILE_DATA: hashcache = FILE_DATA['filename'] with open(hashcache, "w") as hfp: hfp.write(json.dumps(FILE_DATA, indent=1)) def get_cloud_images_file_hash(path): md5 = get_cache_data(path, 'md5') sha256 = get_cache_data(path, 'sha256') if md5 and sha256: return {'md5': md5, 'sha256': sha256} found = {} dirname = os.path.dirname(path) for cksum in ("md5", "sha256"): content = None for burl in BASE_URLS: dir_url = burl + dirname try: url = dir_url + "/%sSUMS" % cksum.upper() sys.stderr.write("reading %s\n" % url) content = util.read_url(url).decode("utf-8") break except Exception as error: pass if not content: raise error for line in content.splitlines(): (hexsum, fname) = line.split() if fname.startswith("*"): fname = fname[1:] found[cksum] = hexsum store_cache_data(dirname + "/" + fname, cksum, hexsum) md5 = get_cache_data(path, 'md5') sha256 = get_cache_data(path, 'sha256') save_cache() return {'md5': md5, 'sha256': sha256} def get_url_len(url): if url.startswith("file:///"): path = url[len("file://"):] return os.stat(path).st_size if os.path.exists(url): return os.stat(url).st_size # http://stackoverflow.com/questions/4421170/python-head-request-with-urllib2 sys.stderr.write("getting size for %s\n" % url) request = url_request(url) request.get_method = lambda : 'HEAD' response = url_open(request) return int(response.headers.get('content-length', 0)) def 
get_cloud_images_file_size(path): size = get_cache_data(path, 'size') if size: return size for burl in BASE_URLS: try: size = int(get_url_len(burl + path)) break except Exception as error: pass if not size: raise error store_cache_data(path, 'size', size) save_cache() return size def create_fake_file(prefix, item): fpath = os.path.join(prefix, item['path']) path = item['path'] data = FAKE_DATA[item['ftype']] util.mkdir_p(os.path.dirname(fpath)) print("creating %s" % fpath) with open(fpath, "w") as fp: fp.truncate(data['size']) item.update(data) for cksum in util.CHECKSUMS: if cksum in item and cksum not in data: del item[cksum] return def dl_load_query(path): tree = {} for rline in toolutil.load_query_download(path): (stream, rel, build, label, serial, arch, filepath, fname) = rline if stream not in tree: tree[stream] = {'products': {}} products = tree[stream]['products'] version = toolutil.REL2VER[rel]['version'] prodname_rdns = UBUNTU_RDNS if stream != "released": prodname_rdns += "." + stream prodname = ':'.join([prodname_rdns, build, version, arch]) if prodname not in products: products[prodname] = { "release": rel, "version": version, "arch": arch, "versions": {} } product = products[prodname] if serial not in product['versions']: product['versions'][serial] = {'items': {}, "label": label} name = pubname(label, rel, arch, serial, build) product['versions'][serial]['pubname'] = name items = product['versions'][serial]['items'] # ftype: finding the unique extension is non-trivial # take basename of the filename, and remove up to "-?" # so ubuntu-12.04-server-cloudimg-armhf.tar.gz becomes # 'tar.gz' and 'ubuntu-12.04-server-cloudimg-armhf-disk1.img' # becomes 'disk1.img' dash_arch = "-" + arch ftype = filepath[filepath.rindex(dash_arch) + len(dash_arch) + 1:] items[ftype] = { 'path': filepath, 'ftype': ftype } return tree def pubname(label, rel, arch, serial, build='server'): version = toolutil.REL2VER[rel]['version'] if label == "daily": rv_label = rel + "-daily" elif label == "release": rv_label = "%s-%s" % (rel, version) elif label.startswith("beta"): rv_label = "%s-%s-%s" % (rel, version, label) else: rv_label = "%s-%s" % (rel, label) return "ubuntu-%s-%s-%s-%s" % (rv_label, arch, build, serial) def ec2_load_query(path): tree = {} dmap = { "north": "nn", "northeast": "ne", "east": "ee", "southeast": "se", "south": "ss", "southwest": "sw", "west": "ww", "northwest": "nw", } itmap = { 'pv': {'instance': "pi", "ebs": "pe"}, 'hvm': {'instance': "hi", "ebs": "he"} } for rline in toolutil.load_query_ec2(path): (stream, rel, build, label, serial, store, arch, region, iid, _kern, _rmd, vtype) = rline if stream not in tree: tree[stream] = {'products': {}} products = tree[stream]['products'] version = toolutil.REL2VER[rel]['version'] prodname_rdns = UBUNTU_RDNS if stream != "released": prodname_rdns += "."
+ stream prodname = ':'.join([prodname_rdns, build, version, arch]) if prodname not in products: products[prodname] = { "release": rel, "version": version, "arch": arch, "versions": {} } product = products[prodname] if serial not in product['versions']: product['versions'][serial] = {'items': {}, "label": label} items = product['versions'][serial]['items'] name = pubname(label, rel, arch, serial, build) product['versions'][serial]['pubname'] = name if store == "instance-store": store = 'instance' if vtype == "paravirtual": vtype = "pv" # create the item key: # - 2 letter country code (us) # - 2 letter direction ('nn' for north, 'nw' for northwest) # - 1 digit number # - 1 char for virt type # - 1 char for root-store type (cc, direction, num) = region.split("-") ikey = cc + dmap[direction] + num + itmap[vtype][store] items[ikey] = { 'id': iid, 'root_store': store, 'virt': vtype, 'crsn': region, } return tree def printitem(item, exdata): full = exdata.copy() full.update(item) print(full) def create_image_data(query_tree, out_d, streamdir): license = 'http://www.canonical.com/intellectual-property-policy' hashcache = os.path.join(query_tree, "FILE_DATA_CACHE") FILE_DATA['filename'] = hashcache if os.path.isfile(hashcache): FILE_DATA.update(json.loads(open(hashcache, "r").read())) ts = util.timestamp() tree = dl_load_query(query_tree) def update_hashes(item, tree, pedigree): item.update(get_cloud_images_file_hash(item['path'])) def update_sizes(item, tree, pedigree): item.update({'size': get_cloud_images_file_size(item['path'])}) cid_fmt = "com.ubuntu.cloud:%s:download" for stream in tree: def create_file(item, tree, pedigree): create_fake_file(os.path.join(out_d, stream), item) cid = cid_fmt % stream if REAL_DATA: util.walk_products(tree[stream], cb_item=update_hashes) util.walk_products(tree[stream], cb_item=update_sizes) else: util.walk_products(tree[stream], cb_item=create_file) tree[stream]['format'] = "products:1.0" tree[stream]['updated'] = ts tree[stream]['content_id'] = cid tree[stream]['datatype'] = 'image-downloads' tree[stream]['license'] = license outfile = os.path.join(out_d, stream, streamdir, cid + ".json") util.mkdir_p(os.path.dirname(outfile)) with open(outfile, "w") as fp: sys.stderr.write("writing %s\n" % outfile) fp.write(json.dumps(tree[stream], indent=1) + "\n") # save hashes data save_cache() return tree def create_aws_data(query_tree, out_d, streamdir): tree = ec2_load_query(query_tree) ts = util.timestamp() cid_fmt = "com.ubuntu.cloud:%s:aws" for stream in tree: cid = cid_fmt % stream # now add the '_alias' data regions = set() def findregions(item, tree, pedigree): regions.add(item['crsn']) util.walk_products(tree[stream], cb_item=findregions) tree[stream]['_aliases'] = {'crsn': {}} for region in regions: tree[stream]['_aliases']['crsn'][region] = { 'endpoint': 'https://ec2.%s.amazonaws.com' % region, 'region': region} tree[stream]['format'] = "products:1.0" tree[stream]['datatype'] = "image-ids" tree[stream]['updated'] = ts tree[stream]['content_id'] = cid outfile = os.path.join(out_d, stream, streamdir, cid + ".json") util.mkdir_p(os.path.dirname(outfile)) with open(outfile, "w") as fp: sys.stderr.write("writing %s\n" % outfile) fp.write(json.dumps(tree[stream], indent=1) + "\n") return tree def main(): parser = argparse.ArgumentParser(description="create example content tree") parser.add_argument("query_tree", metavar='query_tree', help=('read in content from /query tree. 
Hint: ' + 'make exdata-query')) parser.add_argument("out_d", metavar='out_d', help=('create content under output_dir')) parser.add_argument('--sign', action='store_true', default=False, help='sign all generated files') args = parser.parse_args() streamdir = "streams/v1" dltree = create_image_data(args.query_tree, args.out_d, streamdir) aws_tree = create_aws_data(args.query_tree, args.out_d, streamdir) for streamname in aws_tree: index = {"index": {}, 'format': 'index:1.0', 'updated': util.timestamp()} clouds = list(aws_tree[streamname]['_aliases']['crsn'].values()) index['index'][aws_tree[streamname]['content_id']] = { 'updated': aws_tree[streamname]['updated'], 'datatype': aws_tree[streamname]['datatype'], 'clouds': clouds, 'cloudname': "aws", 'path': '/'.join((streamdir, "%s.json" % aws_tree[streamname]['content_id'],)), 'products': list(aws_tree[streamname]['products'].keys()), 'format': aws_tree[streamname]['format'], } index['index'][dltree[streamname]['content_id']] = { 'updated': dltree[streamname]['updated'], 'datatype': dltree[streamname]['datatype'], 'path': '/'.join((streamdir, "%s.json" % dltree[streamname]['content_id'],)), 'products': list(dltree[streamname]['products'].keys()), 'format': dltree[streamname]['format'] } outfile = os.path.join(args.out_d, streamname, streamdir, 'index.json') util.mkdir_p(os.path.dirname(outfile)) with open(outfile, "w") as fp: sys.stderr.write("writing %s\n" % outfile) fp.write(json.dumps(index, indent=1) + "\n") if args.sign: def printstatus(name): sys.stderr.write("signing %s\n" % name) for root, dirs, files in os.walk(args.out_d): for f in [f for f in files if f.endswith(".json")]: toolutil.signjson_file(os.path.join(root, f), status_cb=printstatus) return if __name__ == '__main__': sys.exit(main()) # vi: ts=4 expandtab simplestreams-0.1.0~bzr341/tools/run-pep80000755000000000000000000000040712314577450016447 0ustar 00000000000000#!/bin/bash if [ $# -eq 0 ]; then files=( $(find * -name "*.py" -type f) $(for f in bin/*; do [ "${f%.*}" = "$f" ] && echo "$f"; done) ) else files=( "$@" ); fi cmd=( pep8 "${files[@]}" ) echo -e "\nRunning pep8:" echo "${cmd[@]}" "${cmd[@]}" simplestreams-0.1.0~bzr341/tools/run-pylint0000755000000000000000000000066412314577450017117 0ustar 00000000000000#!/bin/bash if [ $# -eq 0 ]; then files=( $(find * -name "*.py" -type f) $(for f in bin/*; do [ "${f%.*}" = "$f" ] && echo "$f"; done) ) else files=( "$@" ); fi RC_FILE="pylintrc" if [ ! -f $RC_FILE ]; then RC_FILE="../pylintrc" fi cmd=( pylint --rcfile=$RC_FILE --disable=R --disable=I --dummy-variables-rgx="_" "${files[@]}" ) echo -e "\nRunning pylint:" echo "${cmd[@]}" "${cmd[@]}" simplestreams-0.1.0~bzr341/tools/sstream-mirror-glance0000755000000000000000000001220312314577450021203 0ustar 00000000000000#!/usr/bin/python # Copyright (C) 2013 Canonical Ltd. # # Author: Scott Moser # # Simplestreams is free software: you can redistribute it and/or modify it # under the terms of the GNU Affero General Public License as published by # the Free Software Foundation, either version 3 of the License, or (at your # option) any later version. # # Simplestreams is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY # or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public # License for more details. # # You should have received a copy of the GNU Affero General Public License # along with Simplestreams. If not, see . 
# # this is python2 as openstack dependencies (swiftclient, keystoneclient, # glanceclient) are not python3. # import argparse import os.path import sys from simplestreams import objectstores from simplestreams.objectstores import swift from simplestreams import log from simplestreams import mirrors from simplestreams import openstack from simplestreams import util from simplestreams.mirrors import glance def error(msg): sys.stderr.write(msg) def main(): parser = argparse.ArgumentParser() parser.add_argument('--keep', action='store_true', default=False, help='keep items in target up to MAX items ' 'even after they have fallen out of the source') parser.add_argument('--max', type=int, default=None, help='store at most MAX items in the target') parser.add_argument('--region', action='append', default=None, dest='regions', help='operate on specified region ' '[useable multiple times]') parser.add_argument('--mirror', action='append', default=[], dest="mirrors", help='additional mirrors to find referenced files') parser.add_argument('--output-dir', metavar="DIR", default=False, help='write image data to storage in dir') parser.add_argument('--output-swift', metavar="prefix", default=False, help='write image data to swift under prefix') parser.add_argument('--name-prefix', metavar="prefix", default=None, help='prefix for each published image name') parser.add_argument('--cloud-name', metavar="name", default=None, required=True, help='unique name for this cloud') parser.add_argument('--modify-hook', metavar="cmd", default=None, required=False, help='invoke cmd on each image prior to upload') parser.add_argument('--content-id', metavar="name", default=None, required=True, help='content-id to use for published data.' ' may contain "%%(region)s"') parser.add_argument('--verbose', '-v', action='count', default=0) parser.add_argument('--log-file', default=sys.stderr, type=argparse.FileType('w')) parser.add_argument('--keyring', action='store', default=None, help='The keyring for gpg --keyring') parser.add_argument('source_mirror') parser.add_argument('path', nargs='?', default="streams/v1/index.sjson") args = parser.parse_args() modify_hook = None if args.modify_hook: modify_hook = args.modify_hook.split() mirror_config = {'max_items': args.max, 'keep_items': args.keep, 'cloud_name': args.cloud_name, 'modify_hook': modify_hook} def policy(content, path): # pylint: disable=W0613 if args.path.endswith('sjson'): return util.read_signed(content, keyring=args.keyring) else: return content smirror = mirrors.UrlMirrorReader(args.source_mirror, mirrors=args.mirrors, policy=policy) if args.output_dir and args.output_swift: error("--output-dir and --output-swift are mutually exclusive\n") sys.exit(1) level = (log.ERROR, log.INFO, log.DEBUG)[min(args.verbose, 2)] log.basicConfig(stream=args.log_file, level=level) regions = args.regions if regions is None: regions = openstack.get_regions(services=['image']) for region in regions: if args.output_dir: outd = os.path.join(args.output_dir, region) tstore = objectstores.FileStore(outd) elif args.output_swift: tstore = swift.SwiftObjectStore(args.output_swift, region=region) else: sys.stderr.write("not writing data anywhere\n") tstore = None mirror_config['content_id'] = args.content_id % {'region': region} tmirror = glance.GlanceMirror(config=mirror_config, objectstore=tstore, region=region, name_prefix=args.name_prefix) tmirror.sync(smirror, args.path) if __name__ == '__main__': main() # vi: ts=4 expandtab syntax=python 
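# Editor's sketch (illustrative only, not part of the original tool): the
# per-region loop above builds one GlanceMirror per region, expanding
# '%(region)s' inside --content-id so each region publishes a distinct
# stream. The content-id and region names below are hypothetical.
def _sketch_content_id_per_region():
    content_id = "com.example.cloud:%(region)s:download"
    regions = ["region-a", "region-b"]
    return [content_id % {'region': r} for r in regions]
    # -> ['com.example.cloud:region-a:download',
    #     'com.example.cloud:region-b:download']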
simplestreams-0.1.0~bzr341/tools/tab2streams0000755000000000000000000001110712314577450017217 0ustar 00000000000000#!/usr/bin/python3
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams.  If not, see <http://www.gnu.org/licenses/>.

import argparse
import json
import os
import sys

from simplestreams import util

import toolutil


def tab2items(content):
    # tab content is:
    #   content-id product_name version_name img_name [key=value [key=value]]
    # return a list with each item containing:
    #   (content_id, product_name, version_name, item_name, {data})
    items = []
    for line in content.splitlines():
        fields = line.split('\t')
        content_id, prodname, vername, itemname = fields[0:4]

        kvdata = {}
        if len(fields) > 4:
            for field in fields[4:]:
                # split on the first '=' only, so values may contain '='
                key, value = field.split("=", 1)
                if key == "size":
                    kvdata[key] = int(value)
                else:
                    kvdata[key] = value

        items.append((content_id, prodname, vername, itemname, kvdata,))

    return items


def items2content_trees(itemslist, exdata):
    # input is a list with each item having:
    #   (content_id, product_name, version_name, item_name, {data})
    ctrees = {}
    for (content_id, prodname, vername, itemname, data) in itemslist:
        if content_id not in ctrees:
            ctrees[content_id] = {'content_id': content_id,
                                  'format': 'products:1.0', 'products': {}}
            ctrees[content_id].update(exdata)

        ctree = ctrees[content_id]

        if prodname not in ctree['products']:
            ctree['products'][prodname] = {'versions': {}}

        prodtree = ctree['products'][prodname]
        if vername not in prodtree['versions']:
            prodtree['versions'][vername] = {'items': {}}

        vertree = prodtree['versions'][vername]

        if itemname in vertree['items']:
            raise ValueError("%s: already existed" %
                             str([content_id, prodname, vername, itemname]))

        vertree['items'][itemname] = data
    return ctrees


def main():
    parser = argparse.ArgumentParser(
        description="create content tree from tab data")
    parser.add_argument("input", metavar='file',
                        help=('source tab-delimited data'))
    parser.add_argument("out_d", metavar='out_d',
                        help=('create content under output_dir'))
    parser.add_argument('--sign', action='store_true', default=False,
                        help='sign all generated files')

    args = parser.parse_args()

    if args.input == "-":
        tabinput = sys.stdin.read()
    else:
        with open(args.input, "r") as fp:
            tabinput = fp.read()

    streamdir = "streams/v1"

    items = tab2items(tabinput)
    data = {'updated': util.timestamp(), 'datatype': 'image-downloads'}
    trees = items2content_trees(items, data)

    index = {"index": {}, 'format': 'index:1.0', 'updated': data['updated']}
    to_write = [("%s/%s" % (streamdir, 'index.json'), index,)]

    not_copied_up = ['content_id']
    for content_id in trees:
        util.products_condense(trees[content_id])
        content = trees[content_id]
        index['index'][content_id] = {
            'path': "%s/%s.json" % (streamdir, content_id),
            'products': list(content['products'].keys()),
        }
        for k in util.stringitems(content):
            if k not in not_copied_up:
                index['index'][content_id][k] = content[k]

        to_write.append((index['index'][content_id]['path'], content,))

    for (outfile, data) in to_write:
        filef = os.path.join(args.out_d, outfile)
        util.mkdir_p(os.path.dirname(filef))
        with open(filef, "w") as fp:
            sys.stderr.write("writing %s\n" % filef)
            fp.write(json.dumps(data, indent=1) + "\n")

        if args.sign:
            sys.stderr.write("signing %s\n" % filef)
            toolutil.signjson_file(filef)

    return


if __name__ == '__main__':
    sys.exit(main())

# vi: ts=4 expandtab
simplestreams-0.1.0~bzr341/tools/tenv0000755000000000000000000000072112314577450015744 0ustar 00000000000000#!/bin/sh
# trunkenv. prep the environment to run from this trunk,
# then execute the given program.

[ "${TENV_SETUP:-0}" != "0" ] && exec "$@"

if [ -z "$TOPDIR" ]; then
   mydir=${0%/*}
   startd="$PWD"
   cd "${mydir}/.."
   topdir=${PWD}
   cd "$startd"
else
   topdir=$TOPDIR
fi

export GNUPGHOME="$topdir/gnupg"
export PYTHONPATH="$topdir${PYTHONPATH:+:${PYTHONPATH}}"
export PATH="$topdir/tools:$topdir/bin:$PATH"
export TENV_SETUP=1

exec "$@"
simplestreams-0.1.0~bzr341/tools/toolutil.py0000644000000000000000000001510312314577450017267 0ustar 00000000000000#!/usr/bin/python3
# Copyright (C) 2013 Canonical Ltd.
#
# Author: Scott Moser
#
# Simplestreams is free software: you can redistribute it and/or modify it
# under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# Simplestreams is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public
# License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Simplestreams.  If not, see <http://www.gnu.org/licenses/>.

import os
import os.path

from simplestreams import util

REL2VER = {
    "hardy": {'version': "8.04", 'devname': "Hardy Heron"},
    "lucid": {'version': "10.04", 'devname': "Lucid Lynx"},
    "oneiric": {'version': "11.10", 'devname': "Oneiric Ocelot"},
    "precise": {'version': "12.04", 'devname': "Precise Pangolin"},
    "quantal": {'version': "12.10", 'devname': "Quantal Quetzal"},
    "raring": {'version': "13.04", 'devname': "Raring Ringtail"},
    "saucy": {'version': "13.10", 'devname': "Saucy Salamander"},
    "trusty": {'version': "14.04", 'devname': "Trusty Tahr"},
}

RELEASES = [k for k in REL2VER if k != "hardy"]
# trailing comma: this must be a tuple, not a bare string, so that
# membership tests match whole build names.
BUILDS = ("server",)
NUM_DAILIES = 4


def is_expected(repl, fields):
    rel = fields[0]
    serial = fields[3]
    arch = fields[4]
    if repl == "-root.tar.gz":
        if rel in ("lucid", "oneiric"):
            # lucid and oneiric do not have -root.tar.gz
            return False
        if rel == "precise" and serial <= "20120202":
            # precise got -root.tar.gz after alpha2
            return False

    if repl == "-disk1.img":
        if rel == "lucid":
            return False
        if rel == "oneiric" and serial <= "20110802.2":
            # oneiric got -disk1.img after alpha3
            return False

    if repl == "-uefi1.img":
        # uefi images were released with trusty, and for amd64 only right now
        if arch != "amd64":
            return False
        if rel < "trusty":
            return False

    if arch == "ppc64el":
        if rel < "trusty" or serial <= "20140122":
            return False
        if repl not in (".tar.gz", "-root.tar.gz"):
            return False

    # if some data in /query is not truly available, fill up this array
    # to skip it.  ex: export BROKEN="precise/20121212.1 quantal/20130128.1"
    broken = os.environ.get("BROKEN", "").split(" ")
    if "%s/%s" % (rel, serial) in broken:
        print("Known broken: %s/%s" % (rel, serial))
        return False

    return True


def load_query_download(path, builds=None, rels=None):
    if builds is None:
        builds = BUILDS
    if rels is None:
        rels = RELEASES

    suffixes = (".tar.gz", "-root.tar.gz", "-disk1.img", "-uefi1.img")
    streams = [f[0:-len(".latest.txt")] for f in os.listdir(path)
               if f.endswith("latest.txt")]

    results = []
    for stream in streams:
        dl_files = []
        latest_f = "%s/%s.latest.txt" % (path, stream)

        # get the builds and releases
        with open(latest_f, "r") as fp:
            for line in fp.readlines():
                (rel, build, _stream, _serial) = line.split("\t")
                if ((len(builds) and build not in builds) or
                        (len(rels) and rel not in rels)):
                    continue
                dl_files.append("%s/%s/%s/%s-dl.txt" %
                                (path, rel, build, stream))

        field_path = 5
        field_name = 6

        # stream/build/release/arch
        for dl_file in dl_files:
            with open(dl_file, "r") as fp:
                olines = fp.readlines()

            # download files in /query only contain the '.tar.gz' (uec
            # tarball) file, so we have to make up the other entries.
            lines = []
            for oline in olines:
                for repl in suffixes:
                    fields = oline.rstrip().split("\t")
                    if not is_expected(repl, fields):
                        continue

                    new_path = fields[field_path].replace(".tar.gz", repl)
                    fields[field_path] = new_path
                    fields[field_name] += repl
                    lines.append("\t".join(fields) + "\n")

            for line in lines:
                line = line.rstrip("\n\r") + "\tBOGUS"
                results.append([stream] + line.split("\t", 8)[0:7])

    return results


def load_query_ec2(path, builds=None, rels=None, max_dailies=NUM_DAILIES):
    if builds is None:
        builds = BUILDS
    if rels is None:
        rels = RELEASES

    streams = [f[0:-len(".latest.txt")] for f in os.listdir(path)
               if f.endswith("latest.txt")]

    results = []
    for stream in streams:
        id_files = []
        latest_f = "%s/%s.latest.txt" % (path, stream)

        # get the builds and releases
        with open(latest_f, "r") as fp:
            for line in fp.readlines():
                (rel, build, _stream, _serial) = line.split("\t")
                if ((len(builds) and build not in builds) or
                        (len(rels) and rel not in rels)):
                    continue
                id_files.append("%s/%s/%s/%s.txt" %
                                (path, rel, build, stream))

        for id_file in id_files:
            with open(id_file, "r") as fp:
                lines = reversed(fp.readlines())

            serials_seen = 0
            last_serial = None
            for line in lines:
                line = line.rstrip("\n\r") + "\tBOGUS"
                ret = [stream]
                ret.extend(line.split("\t", 11)[0:11])
                if stream == "daily":
                    serial = ret[4]
                    if serial != last_serial:
                        serials_seen += 1
                        last_serial = serial
                    if serials_seen > max_dailies:
                        break
                results.append(ret)

    return results


def signjson_file(fname, status_cb=None):
    # input fname should be .json
    # creates .json.gpg and .sjson
    content = ""
    with open(fname, "r") as fp:
        content = fp.read()
    (changed, scontent) = util.make_signed_content_paths(content)

    if status_cb:
        status_cb(fname)

    util.sign_file(fname, inline=False)
    if changed:
        util.sign_content(scontent, util.signed_fname(fname, inline=True),
                          inline=True)
    else:
        util.sign_file(fname, inline=True)

    return

# vi: ts=4 expandtab syntax=python
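#
# Example usage of signjson_file (illustrative; assumes GNUPGHOME has been
# prepared, e.g. by running under tools/tenv, and that the input file
# exists):
#
#   import toolutil
#   toolutil.signjson_file("streams/v1/index.json",
#                          status_cb=lambda name: print("signing", name))
#
# This should leave a detached "streams/v1/index.json.gpg" and an
# inline-signed "streams/v1/index.sjson" beside the input .json.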