debian/0000775000000000000000000000000012320517610007165 5ustar debian/source/0000775000000000000000000000000012320517610010465 5ustar debian/source/format0000664000000000000000000000001412272245730011702 0ustar 3.0 (quilt) debian/changelog0000664000000000000000000004256512320517603011055 0ustar cloud-utils (0.27-0ubuntu9) trusty; urgency=medium * fix regression bug in all uses of ubuntu-ec2-run (LP: #1303786). -- Scott Moser Mon, 07 Apr 2014 08:52:18 -0400 cloud-utils (0.27-0ubuntu8) trusty; urgency=medium * sync to trunk at revno 259 * fix mount-image-callback args --dev, --sys, or --proc (LP: #1302052) * ubuntu-ec2-run: know about more instance types * growpart: better --dry-run output for gpt disks -- Scott Moser Thu, 03 Apr 2014 12:30:28 -0400 cloud-utils (0.27-0ubuntu7) trusty; urgency=low * sync to trunk at revno 256 * ubuntu-cloudimg-query: allow 'arm64' as input. * ubuntu-cloudimg-query: add '--arch' flag -- Scott Moser Thu, 06 Feb 2014 15:33:47 +0200 cloud-utils (0.27-0ubuntu6) trusty; urgency=medium * sync to trunk at revno 254 * growpart: run partx only on block devices (not files) * ubuntu-cloudimg-query: allow 'ppc64el' as input. (LP: #1273769) * ubuntu-cloudimg-query, ubuntu-ec2-run: know about trusty -- Scott Moser Wed, 29 Jan 2014 13:44:03 -0500 cloud-utils (0.27-0ubuntu5) trusty; urgency=low * sync to trunk at revno 250 * cloud-localds: make quiet by default (increase verbosity with '-v') * ubuntu-cloudimg-query: do not fail on no ami id found if no ami id is * necessary for the output requested (ie, allow 'armhf' queries of url) * growpart: fix bug when growing partitions on disks > 2TB. (LP: #1259703) -- Scott Moser Wed, 11 Dec 2013 11:39:40 -0500 cloud-utils (0.27-0ubuntu4) saucy; urgency=low * Add proper Conflicts/Replaces for the package split to unbreak upgrades. (LP: #1217846) -- Martin Pitt Wed, 28 Aug 2013 13:40:31 +0200 cloud-utils (0.27-0ubuntu3) saucy; urgency=low * sync to trunk at revno 246 * cloud-localds: add man page [Thomas Bechtold] * cloud-localds: only use qemu-img convert if output format is not 'raw' * cloud-localds: add '--hostname' flag to specify local-hostname in meta-data. * cloud-publish-image: add '--architecture' when using 'register' * cloud-publish-image: improvements to '-v' (debugging) * cloud-publish-image: pass through --root-device-name * cloud-run-instances: dropped (obsolete, not recommended) * remove ubuntu-cloud-keyring (replaced in ubuntu by ubuntu-cloudimage-keyring) * mount-image-callback: add utility * split package into cloud-guest-utils and cloud-image-utils. * remove deprecated 'uec-*' commands: uec-publish-tarball, uec-publish-image, uec-run-instances, uec-resize-image. * fix lintian issue with debian/copyright -- Scott Moser Tue, 27 Aug 2013 16:23:58 -0400 cloud-utils (0.27-0ubuntu2) saucy; urgency=low * sync to trunk at revno 230 * ubuntu-cloudimg-query: change default release to 'precise' * growpart: fix some issues in error path reporting * growpart: capture output of 'partx --help' as older versions do not support that flag, and send output to stderr. * add 'vcs-run' utility for easily executing / bootstrapping from a version control system (hg, git, bzr) -- Scott Moser Wed, 19 Jun 2013 11:59:32 -0400 cloud-utils (0.27-0ubuntu1) raring; urgency=low * New upstream release. * package upstream release. 
* growpart: essential fix in variable quoting -- Scott Moser Wed, 27 Mar 2013 09:40:20 -0400 cloud-utils (0.26-0ubuntu3) raring; urgency=low * sync to trunk at revno 219 * growpart: support updating mounted partition with partx --update (LP: #1136936) -- Scott Moser Thu, 07 Mar 2013 15:45:10 -0500 cloud-utils (0.26-0ubuntu2) raring; urgency=low * debian/copyright: fix formatting * sync to trunk at revno 216 * support for GPT partitions in growpart via sgdisk (LP: #1087526) * depend on wget and ca-certificates for ubuntu-cloudimg-query (LP: #1062671) * fix sfdisk parsing (LP: #1007415) -- Scott Moser Mon, 04 Feb 2013 15:18:49 -0500 cloud-utils (0.26-0ubuntu1) quantal; urgency=low * New upstream release. * pull in upstream at 0.26 * remove client tools cloudimg-sync and ubuntu-cloudimg-query2 (LP: #1059781) * ubuntu-cloudimg-query: add 'serial' to output variables (LP: #974569) -- Scott Moser Mon, 01 Oct 2012 15:08:43 -0400 cloud-utils (0.25-0ubuntu7) quantal; urgency=low * Sync to upstream trunk at revision 195 . * adds cloud-localds utility (LP: #1036312) * growpart: bugfix for nbd and loop devices * awareness of hi1.4xlarge * ubuntu-ec2-run: fix issue with block-device-mappings -- Scott Moser Thu, 23 Aug 2012 00:45:35 -0400 cloud-utils (0.25-0ubuntu6) quantal; urgency=low * Sync to upstream trunk at revision 188 * adds cloudimg-sync and ubuntu-cloudimg-query2 [Ben Howard] * debian/control: run wrap-and-sort -- Scott Moser Thu, 12 Jul 2012 13:38:44 -0400 cloud-utils (0.25-0ubuntu5) precise; urgency=low * cloud-publish-tarball, cloud-publish-image: be more quiet when downloading images by using wget --progress=dot:mega * ubuntu-cloudimg-query, ubuntu-ec2-run: support m1.medium ec2 size and do not assume m1.small or c1.medium imply i386. -- Scott Moser Fri, 09 Mar 2012 17:02:53 -0500 cloud-utils (0.25-0ubuntu4) precise; urgency=low * growpart: invoke sfdisk with '--no-reread' to avoid udev race conditions (LP: #937352) -- Scott Moser Tue, 28 Feb 2012 14:40:51 -0500 cloud-utils (0.25-0ubuntu3) precise; urgency=low * growpart: allow output of failed sfdisk to get to user -- Scott Moser Wed, 22 Feb 2012 15:49:34 -0500 cloud-utils (0.25-0ubuntu2) precise; urgency=low * cloud-publish-image: fix issue if ramdisk=none this fixes cloud-publish-tarball for no-ramdisk tarballs -- Scott Moser Sun, 19 Feb 2012 15:18:26 -0500 cloud-utils (0.25-0ubuntu1) precise; urgency=low * New upstream release. * fixes for cloud-publish-ubuntu for clouds other than EC2 * better support for "loader" kernels in cloud-publish-image -- Scott Moser Thu, 16 Feb 2012 15:36:15 -0500 cloud-utils (0.24-0ubuntu1) precise; urgency=low * New upstream release * cloud-publish-tarball, cloud-publish-image now accept urls * cloud-publish-image supports older euca2ools or ec2 tools that do not have a '--name' flag in register * cloud-publish-ubuntu: new tool added for one command population of your cloud from images on cloud-images.ubuntu.com -- Scott Moser Thu, 27 Oct 2011 15:10:33 -0400 cloud-utils (0.23-0ubuntu7) oneiric; urgency=low * cloud-publish-image: do not fail if arch is other than i386 or x86_64. This is to allow 'arm'. 
(LP: #849093) -- Scott Moser Tue, 13 Sep 2011 16:29:24 -0400 cloud-utils (0.23-0ubuntu6) oneiric; urgency=low * cherry pick some fixes from trunk * ubuntu-ec2-run: add block-device-mapping arguments * cloud-publish-image: default to 'image' type rather than auto * ubuntu-cloudimg-query: hvm instances cannot run on t1.micro * ubuntu-ec2-run: add --help usage, and do not run an instance if no args are given -- Scott Moser Fri, 09 Sep 2011 14:33:46 -0700 cloud-utils (0.23-0ubuntu5) oneiric; urgency=low * sync with trunk at revision 142 * add symlink for legacy name uec-run-instances to cloud-run-instances * ec2metadata: * use 2009-04-04 version of api, which is present in openstack for the metadata url. * correctly provide prefix (keyname) for each item if dumping all metadata * cloud-run-instances: pass '--key' or '-k' to the underlying command * ubuntu-ec2-run: fix bug where --instance-type would be passed twice if the user set it -- Scott Moser Wed, 17 Aug 2011 15:39:12 -0500 cloud-utils (0.23-0ubuntu4) oneiric; urgency=low * sync with trunk at revision 136 * ubuntu-cloudimg-query add additional format data options [Dustin Kirkland] * bin/ubuntu-ec2-run: - karmic is EOL, hardy still supported, note that we *should* use distro-info eventually - default to t1.micro instead of m1.small (least expensive, can do amd64) -- Scott Moser Thu, 11 Aug 2011 09:22:43 +0100 cloud-utils (0.23-0ubuntu3) oneiric; urgency=low * fix for ubuntu-ec2-run, which was broken in previous upload -- Scott Moser Thu, 04 Aug 2011 15:24:57 -0400 cloud-utils (0.23-0ubuntu2) oneiric; urgency=low * sync with trunk at revision 127 * fix to cloud-publish-image when interacting with older eucalyptus * bring in 2 new utilities for getting EC2 ami ids - ubuntu-ec2-query : command line utility for querying http://cloud-images.ubuntu.com/query - ubuntu-ec2-run : lightweight wrapper around ec2-run-instances that utilizes ubuntu-ec2-query -- Scott Moser Thu, 04 Aug 2011 14:53:07 -0400 cloud-utils (0.23-0ubuntu1) oneiric; urgency=low * New upstream release (first upstream release separate from Ubuntu) * ec2metadata: add '--url' flag for specifying metadata service url * ec2metadata: update to use metadata api version 2011-01-01 This adds 'instance-action', 'mac', 'profile' * ec2metadata: use urllib2 and correctly identify HTTPErrors * rename 'uec' prefix to 'cloud' there are symlinks providing backwards compat that will issue warnings - uec-publish-tarball: renamed to cloud-publish-tarball - uec-publish-image: renamed to cloud-publish-image - uec-resize-image: renamed to resize-part-image * uec-query-builds: removed -- smoser Tue, 26 Jul 2011 22:15:46 -0400 cloud-utils (0.22ubuntu1) oneiric; urgency=low * uec-publish-tarball: accept x86_64 as valid arch input, and pass provided arch un-modified to uec-publish-image rather than always changing to 'i386' (LP: #779812) * uec-publish-image: improve searching for existing image. Handle searching for either the name or the manifest path, and only search images owned by self. * uec-publish-image: register as bucket/basename rather than basename -- Scott Moser Fri, 15 Jul 2011 12:14:21 -0400 cloud-utils (0.21ubuntu1) natty; urgency=low * make uec-publish-tarball read TMPDIR. Previously it read 'TEMPDIR' environment variable. Fall back to using that if TMPDIR is not set. 
* add utility 'growpart' for rewriting a partition table so that a given partition uses available space (LP: #725127) -- Scott Moser Fri, 25 Feb 2011 12:54:46 -0500 cloud-utils (0.20ubuntu1) natty; urgency=low * uec-publish-image: use --name in euca-register * uec-publish-image: fix for debug so '-v' will give some info (previously needed -vv) * ec2metadata: fix for ancestor-ami-ids retrieval (LP: #706651) * uec-run-instances: add '--attach-volume' -- Scott Moser Sat, 19 Feb 2011 01:17:35 -0500 cloud-utils (0.19ubuntu1) natty; urgency=low * uec-publish-image: fix using ec2-api-tools and ec2-ami-tools -- Scott Moser Thu, 13 Jan 2011 13:56:45 -0500 cloud-utils (0.18ubuntu1) natty; urgency=low * include write-mime-multipart into the packaging, with man page -- Scott Moser Tue, 11 Jan 2011 19:50:35 -0500 cloud-utils (0.17ubuntu1) natty; urgency=low * uec-publish-tarball: add --rename-[kernel,ramdisk,image] flags * add support for shorter syntax in uec-query-builds * fix ec2metadata under Eucalyptus (LP: #676144) * move write-mime-multipart from cloud-init to cloud-init-multipart in cloud-utils -- Scott Moser Tue, 11 Jan 2011 09:40:28 -0500 cloud-utils (0.16ubuntu1) maverick; urgency=low * uec-run-instances: fix multiple launchpad-ids. (LP: #621473) * uec-run-instances: depend on python-paramiko. (LP: #646823) -- Scott Moser Mon, 30 Aug 2010 13:56:19 -0400 cloud-utils (0.15ubuntu1) maverick; urgency=low * uec-publish-tarball: support --loader flag to use the loader image rather than linux kernel in uec tarballs -- Scott Moser Tue, 03 Aug 2010 16:12:47 -0400 cloud-utils (0.14-0ubuntu1) maverick; urgency=low * debian/control, debian/ssh-import.install, debian/ssh-import.manpages, ssh-import-lp-id, ssh-import-lp-id.1: purge ssh-import* from this source package, now pushed to openssh-server 1:5.5p1-4ubuntu3 -- Dustin Kirkland Thu, 22 Jul 2010 15:25:41 +0200 cloud-utils (0.13ubuntu1) maverick; urgency=low [ Dustin Kirkland ] * ssh-import-lp-id: handle multi-line ssh public keys in Launchpad, LP: #596938; thanks to Jos Boumans for the elegant snippet of perl that fixes this [ Clint Byrum ] * uec-run-instances: rewritten command with much larger scope -- Scott Moser Thu, 24 Jun 2010 20:40:09 -0400 cloud-utils (0.12ubuntu1) maverick; urgency=low * ec2metadata: bring in ec2metadata (LP: #547019) -- Scott Moser Thu, 10 Jun 2010 17:00:22 -0400 cloud-utils (0.11-0ubuntu1) lucid; urgency=low * uec-query-builds: do not throw IndexError on no builds available, or Exception on bad usage, LP: #559236 * uec-publish-image: remove error trailing '%s' in error message, LP: #559244 * uec-publish-tarball: send stdout through on failure of uec-publish-image, LP: #559244 -- Scott Moser Fri, 09 Apr 2010 10:03:29 -0400 cloud-utils (0.10-0ubuntu1) lucid; urgency=low [ Scott Moser ] * ssh-import-lp-id: allow dss keys * uec-publish-tarball: add -q/--quiet flag * uec-publish-image: - remove trailing slash on bucket input which caused failed register - remove trailing tab in output - on error, make sure user sees command output - add -B/--device-block-mapping pass through to euca-bundle-image * uec-resize-image: make quiet by default, add --verbose,-v * uec-query-builds: support querying 'latest-ec2' [ Dustin Kirkland ] * debian/install, debian/manpages, uec-run-instances, uec-run-instances.1: add a wrapper for euca-run-instances that can easily/cleanly inject ssh keys from Launchpad.net, LP: #524101 -- Dustin Kirkland Thu, 25 Mar 2010 21:53:59 -0700 cloud-utils (0.9-0ubuntu1) lucid; urgency=low * ssh-import-lp-id: - ensure 
that authorized_keys gets created with the right permissions if it does not yet exist, LP: #531144 - drop the sort -u, as this is actually incorrect behavior (rearranging the order of an existing authorized_keys file, even if to prune duplicate entries, is wrong); this does mean that duplicate entries might creep into the file, but the behavior is the same as ssh-copy-id in this sense, LP: #531145 -- Dustin Kirkland Tue, 02 Mar 2010 23:53:05 -0600 cloud-utils (0.8-0ubuntu1) lucid; urgency=low * uec-publish-image: return to using symbolic link for renaming (LP: #522292 is fixed) * uec-publish-tarball: fail before extracting tarball if environment is not set up for euca2ools (LP: #526504) -- Scott Moser Mon, 01 Mar 2010 12:02:53 -0500 cloud-utils (0.7-0ubuntu1) lucid; urgency=low * ssh-import-lp-id: ensure that $HOME is set properly, LP: #528029; add a usage statement -- Dustin Kirkland Thu, 25 Feb 2010 16:19:35 -0600 cloud-utils (0.6-0ubuntu1) lucid; urgency=low * debian/control, debian/ssh-import.install, debian/ssh-import.manpages, ssh-import-lp-id, ssh-import-lp-id.1: - add a utility and a binary package for conveniently importing public ssh keys from Launchpad by a LP user id, LP: #524226 -- Dustin Kirkland Tue, 23 Feb 2010 20:32:40 -0600 cloud-utils (0.5-0ubuntu1) lucid; urgency=low [ Scott Moser ] * uec-publish-image: use hard link instead of soft, work around euca2ools bug (LP: #522292) * uec-publish-image: remove temp dir if --working-dir is given * uec-publish-image: add --kernel-file, --ramdisk-file flags -- Dustin Kirkland Thu, 18 Feb 2010 22:58:33 -0600 cloud-utils (0.4-0ubuntu1) lucid; urgency=low * Fix package versioning -- Dustin Kirkland < kirkland@ubuntu.com> Thu, 18 Feb 2010 15:04:39 -0600 cloud-utils (0.3ubuntu1) lucid; urgency=low [ Dustin Kirkland ] * Makefile, debian/install: ditch the Makefile in favor of a debhelper install * uec-publish-image, uec-publish-tarball, uec-resize-image: add GPLv3 headers to all scripts * uec-publish-image: add a note about bashisms * debian/copyright, uec-query-builds: clean up trailing whitespace * debian/control: - improve the description - depend on python, python-yaml * debian/manpages, uec-publish-tarball.1, uec-query-builds.1, uec-resize-image.1, uec-publish-image.1: add a first cut at manpages -- Dustin Kirkland Wed, 17 Feb 2010 21:00:56 -0600 cloud-utils (0.2ubuntu1) lucid; urgency=low * add uec-query-builds for querying uec build data from uec-images.u.c -- Scott Moser Wed, 17 Feb 2010 17:34:27 -0500 cloud-utils (0.2) lucid; urgency=low * fix version number to represent native package -- Scott Moser Mon, 08 Feb 2010 10:18:12 -0500 cloud-utils (0.1ubuntu1) lucid; urgency=low * Initial release. -- Scott Moser Fri, 05 Feb 2010 18:37:57 -0500 debian/watch0000664000000000000000000000017312272245730010226 0ustar version=3 https://launchpad.net/cloud-utils/+download/ https://launchpad.net/cloud-utils/.*/cloud-utils-([\d\.]+)\.tar\.gz debian/copyright0000664000000000000000000000171712272245730011135 0ustar Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: cloud-utils Source: https://code.launchpad.net/cloud-utils Upstream-Contact: Scott Moser Copyright: 2010-2013, Canonical Ltd. 2013, Hewlett-Packard Development Company, L.P. License: GPL-3 This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License version 3, as published by the Free Software Foundation. . 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. . You should have received a copy of the GNU General Public License along with this program. If not, see . . The complete text of the GPL version 3 can be seen in /usr/share/common-licenses/GPL-3. debian/rules0000775000000000000000000000005512272245730010254 0ustar #!/usr/bin/make -f %: dh $@ --with=python2 debian/cloud-image-utils.install0000664000000000000000000000061712272245730014114 0ustar usr/bin/cloud-localds usr/bin/cloud-publish-image usr/bin/cloud-publish-tarball usr/bin/cloud-publish-ubuntu usr/bin/mount-image-callback usr/bin/resize-part-image usr/bin/ubuntu-cloudimg-query usr/bin/ubuntu-ec2-run usr/bin/write-mime-multipart usr/share/man/*/cloud-publish-image.* usr/share/man/*/cloud-publish-tarball.* usr/share/man/*/resize-part-image.* usr/share/man/*/write-mime-multipart.* debian/README.source0000664000000000000000000000056012272245730011354 0ustar This is the cloud-utils ubuntu packaging branch. it comes from: bzr branch lp:ubuntu/cloud-utils Please only make ubuntu specific changes here. Other changes should be made to the upstream, and then cherry-picked back. Upstream is: bzr branch lp:cloud-utils To create the debian/patches/sync-to-trunk.patch, do the following: ./update-sync-to-main ../trunk debian/patches/0000775000000000000000000000000012320517610010614 5ustar debian/patches/sync-to-trunk.patch0000664000000000000000000024270412320517270014405 0ustar Patch created with './debian/update-sync-to-main ../trunk' ------------------------------------------------------------ revno: 260 fixes bug: https://launchpad.net/bugs/1303786 committer: Scott Moser branch nick: trunk timestamp: Mon 2014-04-07 08:43:08 -0400 message: ubuntu-ec2-run: fix regression in normal usage path ------------------------------------------------------------ revno: 259 fixes bug: https://launchpad.net/bugs/1302052 committer: Scott Moser branch nick: trunk timestamp: Thu 2014-04-03 12:20:23 -0400 message: mount-image-callback: fix '--proc', '--sys', and '--dev' These didn't work as expected due to bad shell syntax. ------------------------------------------------------------ revno: 258 committer: Scott Moser branch nick: trunk timestamp: Thu 2014-03-13 10:46:55 -0400 message: ubuntu-ec2-run: know about more instance types ------------------------------------------------------------ revno: 257 committer: Scott Moser branch nick: trunk timestamp: Wed 2014-02-26 11:02:41 -0500 message: growpart: better --dry-run output for gpt disks provide the sgdisk command line that would be used. 
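Note on revno 257 above: under --dry-run the gpt path now prints the sgdisk command line it would execute instead of running it; the mechanism is the "would-run" label threaded through the rq/rqe helpers in the growpart hunk further down in this patch. A minimal standalone sketch of that pattern follows, with illustrative function and variable names only (this is not the growpart code itself):

    # Sketch: print the command (quoting args with spaces) and skip execution
    # when dry-running; otherwise run it. Names here are illustrative.
    would_run_or_run() {
        dry_run="$1"; shift
        cmd=""
        for x in "$@"; do
            case "$x" in
                *" "*) x="'$x'";;
            esac
            cmd="${cmd:+$cmd }$x"
        done
        if [ "$dry_run" -ne 0 ]; then
            echo "would-run: $cmd"
            return 0
        fi
        "$@"
    }
    # example: would_run_or_run 1 sgdisk --move-second-header --delete=1 /dev/sdX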
------------------------------------------------------------ revno: 256 committer: Scott Moser branch nick: trunk timestamp: Thu 2014-02-06 15:31:58 +0200 message: update changelog ------------------------------------------------------------ revno: 255 committer: Scott Moser branch nick: trunk timestamp: Thu 2014-02-06 15:19:49 +0200 message: ubuntu-cloudimg-query: allow 'arm64', add '--arch' ------------------------------------------------------------ revno: 254 committer: Scott Moser branch nick: trunk timestamp: Wed 2014-01-29 13:03:59 -0500 message: ubuntu-cloudimg-query, ubuntu-ec2-run: know about trusty ------------------------------------------------------------ revno: 253 fixes bug: https://launchpad.net/bugs/1273769 committer: Scott Moser branch nick: trunk timestamp: Tue 2014-01-28 12:13:12 -0500 message: ubuntu-cloudimg-query: allow 'ppc64el' as input. (LP: #1273769) We have ppc64el images now. Long term, this program needs to be replaced with some sstream based query. ------------------------------------------------------------ revno: 252 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-12-11 14:57:19 -0500 message: growpart: run partx only on block devices (not files) ------------------------------------------------------------ revno: 251 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-12-11 14:56:55 -0500 message: debian/changelog: update for last commit ------------------------------------------------------------ revno: 250 fixes bug: https://launchpad.net/bugs/1259703 committer: Scott Moser branch nick: trunk timestamp: Tue 2013-12-10 16:38:03 -0500 message: fix growpart issue growing partitions on disks greater than 2TB. sfdisk seems to not realize that it cannot grow a MBR partition greater than 2TB. When it tries to do so it ends up doing something else, which is not what you want. the fix here is just to catch the case where this occurs and explicitly specify the end at 2TB. 
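The 2TB ceiling described in revno 250 follows from MBR storing partition start and size as 32-bit counts of 512-byte sectors; the growpart hunk below caps max_end at mbr_max_512="4294967296" for exactly this reason. A quick illustrative arithmetic check:

    # MBR addresses partitions as 32-bit counts of 512-byte sectors, so the
    # largest addressable offset is 2^32 sectors.
    sectors=4294967296                # 2^32, matches mbr_max_512 in the hunk below
    bytes=$((sectors * 512))          # 2199023255552 bytes
    echo "MBR limit: $sectors sectors * 512 = $bytes bytes (2 TiB)"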
------------------------------------------------------------ revno: 249 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-10-30 09:28:14 -0400 message: allow 'armhf' input to ubuntu-cloudimg-query ------------------------------------------------------------ revno: 248 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-10-30 08:54:03 -0400 message: ubuntu-cloudimg-query: do not fail on no ami id found if no ami id is necessary for the output requested (ie, allow 'armhf' queries of url) ------------------------------------------------------------ revno: 247 committer: Scott Moser branch nick: trunk timestamp: Mon 2013-10-28 14:59:29 -0400 message: cloud-localds: make quiet by default (increase verbosity with '-v') ------------------------------------------------------------ revno: 246 committer: Scott Moser branch nick: trunk timestamp: Tue 2013-08-27 15:49:22 -0400 message: debian/rules: simplify ------------------------------------------------------------ revno: 245 committer: Scott Moser branch nick: trunk timestamp: Tue 2013-08-27 15:45:58 -0400 message: debian/copyright: fix warnings ------------------------------------------------------------ revno: 244 committer: Scott Moser branch nick: trunk timestamp: Tue 2013-08-27 15:37:56 -0400 message: mount-image-callback: add utility ------------------------------------------------------------ revno: 243 committer: Scott Moser branch nick: trunk timestamp: Tue 2013-08-27 14:36:03 -0400 message: remove some obsolete things (cloud-run-instances, cloud-keyring) * cloud-run-instances: dropped (obsolete, not recommended) * remove ubuntu-cloud-keyring (replaced in ubuntu by ubuntu-cloudimage-keyring) ------------------------------------------------------------ revno: 242 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-08-21 13:35:05 -0400 message: update changelogs ------------------------------------------------------------ revno: 241 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-08-21 13:32:30 -0400 message: vcs-run: add examples in usage, fix repo 'lp:foo' lp:foo didn't have a '/' in it, so it would end up doing: bzr branch lp:foo lp:foo ------------------------------------------------------------ revno: 240 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-08-21 10:13:02 -0400 message: cloud-publish-image: pass '--architecture' to register command revno 234 seems incorrect. The flag added to 'ec2-register' (or euca-register) was '--arch' but it should have been --architecture. I've verified EC2PRE="xc2 " cloud-publish-image -vv i386 my.img smoser-us-east-1 and EC2PRE="euca-" cloud-publish-image -vv i386 my.img smoser-us-east-1 worked correctly. ------------------------------------------------------------ revno: 239 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-08-21 09:57:12 -0400 message: update debian/ChangeLog ------------------------------------------------------------ revno: 238 committer: Scott Moser branch nick: trunk timestamp: Thu 2013-08-15 15:42:37 -0400 message: cloud-localds: add --hostname flag this allows specifying of '--hostname' to set the 'local-hostname' key in the metadata. Also: - improves writing of json formated metadata - removes 'errorp' and 'failp' (previously unused) methods ------------------------------------------------------------ revno: 237 committer: Scott Moser branch nick: trunk timestamp: Thu 2013-08-15 14:43:28 -0400 message: only use 'qemu-img' if output is not 'raw' format. 
it should not be necessary to use qemu-img for raw output format. as the truncated image should be raw already. This means that we don't need qemu-img by default. ------------------------------------------------------------ revno: 236 [merge] fixes bug: https://launchpad.net/bugs/1206038 author: Thomas Bechtold committer: Scott Moser branch nick: trunk timestamp: Mon 2013-07-29 09:46:50 -0400 message: cloud-localds: add man page ------------------------------------------------------------ revno: 235 committer: Scott Moser branch nick: trunk timestamp: Fri 2013-06-21 10:30:05 -0400 message: cloud-publish-image: accept --root-device-name parameter accept --root-device-name and pass it through to ${EC2PRE}register ------------------------------------------------------------ revno: 234 committer: Scott Moser branch nick: trunk timestamp: Fri 2013-06-21 10:12:06 -0400 message: cloud-publish-image: put --architecture=arch in ${EC2PRE}-register if you don't pass --architecture to euca-register, then it will now register with 'i386' (the default) even if you bundled with --arch. ec2-bundle-image still (1.6.6.0-0ubuntu1) needs '--arch' passed or it will prompt the user, so we still pass it there. ------------------------------------------------------------ revno: 233 committer: Scott Moser branch nick: trunk timestamp: Fri 2013-06-21 10:10:57 -0400 message: cloud-publish-image: show commands being run in debug output if user specifies '-vv', now they'll get commands that are being run. ------------------------------------------------------------ revno: 232 committer: Scott Moser branch nick: trunk timestamp: Fri 2013-06-21 10:09:27 -0400 message: cloud-publish-image: fix 'debug' verbosity issue the lowest value passed to debug was 1, but that wouldn't do anything. so, in order to see any debug output, the user would have to give '-vv' make '-v' give 'debug 1' output. ------------------------------------------------------------ revno: 231 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-06-19 14:59:25 -0400 message: vcs-run: --deps does not require an argument ------------------------------------------------------------ revno: 230 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-06-19 11:49:34 -0400 message: add debug statement ------------------------------------------------------------ revno: 229 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-06-19 11:47:19 -0400 message: vcs-run: handle error better ------------------------------------------------------------ revno: 228 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-06-19 11:45:56 -0400 message: use wget instead of curl ------------------------------------------------------------ revno: 227 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-06-19 11:40:59 -0400 message: update debian/changelog ------------------------------------------------------------ revno: 226 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-06-19 11:37:21 -0400 message: add 'vcs-run' command vcs-run is just a convenience command for downloading a version control'd repository and executing a command in it. 
------------------------------------------------------------ revno: 225 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-05-29 12:19:21 -0400 message: growpart: capture output of 'partx --help' to avoid stderr leakage partx --help on old versions of partx that do not support --help would write to stderr something like: | partx: unrecognized option '--help' | unknown option This just captures that annoyance. ------------------------------------------------------------ revno: 224 committer: Scott Moser branch nick: trunk timestamp: Wed 2013-05-22 10:13:19 -0400 message: fix some issues in reporting in the error path. A few changes here really just cleanups. * If RESTORE_HUMAN isn't set, do not attempt to cat it on failure of the restore function (this really should not happen). * fix bug where RESTORE_HUMAN was not being set at all in gpt path. And, we were not correctly catching failure of writing it. * shorten race conditions when MBR_BACKUP and RESTORE_HUMAN were set but not actually populated. ------------------------------------------------------------ revno: 223 committer: Scott Moser branch nick: trunk timestamp: Thu 2013-03-28 08:42:13 -0400 message: ubuntu-cloudimg-query: change default release to 'precise' ------------------------------------------------------------ revno: 222 committer: Scott Moser branch nick: trunk timestamp: Thu 2013-03-28 08:41:12 -0400 message: open 0.28, improve build-deb ------------------------------------------------------------ Use --include-merged or -n0 to see merged revisions. === modified file 'ChangeLog' --- old/ChangeLog 2013-03-27 13:10:52 +0000 +++ new/ChangeLog 2014-03-13 14:46:55 +0000 @@ -1,3 +1,33 @@ +0.28 + - ubuntu-cloudimg-query: change default release to 'precise' + - growpart: fix some issues in error path reporting + - growpart: capture output of 'partx --help' as older versions + do not support that flag, and send output to stderr. + - add 'vcs-run' utility for easily executing / bootstrapping + from a version control system (hg, git, bzr) + - cloud-localds: add man page [Thomas Bechtold] + - cloud-localds: only use qemu-img convert if output format is not 'raw' + - cloud-localds: add '--hostname' flag to specify local-hostname in + meta-data. + - cloud-publish-image: add '--architecture' when using 'register' + - cloud-publish-image: improvements to -v (debugging) + - cloud-publish-image: pass through --root-device-name + - cloud-run-instances: dropped (obsolete, not recommended) + - dropped installation of (obsolete) ubuntu cloud-image keyring. + See ubuntu package 'ubuntu-cloudimage-keyring' + - add mount-image-callback + - cloud-localds: make quiet by default (increase verbosity with '-v') + - ubuntu-cloudimg-query: do not fail on no ami id found if no ami id is + necessary for the output requested (ie, allow 'armhf' queries of url) + - growpart: fix bug when growing partitions on disks > 2TB. (LP: #1259703) + - growpart: run partx only on block devices (not files) + - ubuntu-cloudimg-query: allow 'ppc64el', 'arm64' as input. (LP: #1273769) + - ubuntu-cloudimg-query, ubuntu-ec2-run: know about trusty + - ubuntu-cloudimg-query: add '--arch' to specifically state the arch. + - growpart: better --dry-run output for gpt disks, providing sgdisk command + line that would be used. + - ubuntu-ec2-run: know about more instance types + 0.27 - cloud-publish-image: add '--hook-img' flag to cloud-publish-image and passthrough that flag from cloud-publish-ubuntu and cloud-publish-tarball. 
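The "partx --help" capture listed in the ChangeLog entry above folds stderr into the captured output and then checks both the exit status and the text for --update; the real change is in the growpart hunk later in this patch. A standalone sketch of the same probe, under the assumption that partx may be old or absent entirely:

    # Probe whether the installed partx supports --update without leaking
    # "unrecognized option" noise from old versions to the terminal.
    partx_has_update() {
        ret=0
        out=$(partx --help 2>&1) || ret=$?
        [ "$ret" -eq 0 ] || return 1      # old or missing partx: assume no --update
        echo "$out" | grep -q -- --update
    }
    if partx_has_update; then echo "partx supports --update"; fi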
=== modified file 'Makefile' --- old/Makefile 2012-10-01 18:40:29 +0000 +++ new/Makefile 2013-08-27 18:36:03 +0000 @@ -4,22 +4,17 @@ BINDIR = $(DESTDIR)/usr/bin MANDIR = $(DESTDIR)/usr/share/man/man1 DOCDIR = $(DESTDIR)/usr/share/doc/$(NAME) -KEYDIR = $(DESTDIR)/usr/share/keyrings binprogs := $(subst bin/,,$(wildcard bin/*)) manpages := $(subst man/,,$(wildcard man/*.1)) -build: ubuntu-cloudimg-keyring.gpg +build: echo manpages=$(manpages) install: - mkdir -p "$(BINDIR)" "$(DOCDIR)" "$(MANDIR)" "$(KEYDIR)" + mkdir -p "$(BINDIR)" "$(DOCDIR)" "$(MANDIR)" cd bin && install $(binprogs) "$(BINDIR)" cd man && install $(manpages) "$(MANDIR)/" --mode=0644 - install -m 0644 ubuntu-cloudimg-keyring.gpg $(KEYDIR) - -ubuntu-cloudimg-keyring.gpg: ubuntu-cloudimg-keyring.gpg.b64 - grep -v "^#" "$<" | base64 --decode > "$@" || { rm "$@"; exit 1; } clean: : === modified file 'bin/cloud-localds' --- old/bin/cloud-localds 2012-08-23 04:30:45 +0000 +++ new/bin/cloud-localds 2013-10-28 18:59:29 +0000 @@ -4,11 +4,11 @@ TEMP_D="" DEF_DISK_FORMAT="raw" DEF_FILESYSTEM="iso9660" +CR=" +" error() { echo "$@" 1>&2; } -errorp() { printf "$@" 1>&2; } fail() { [ $# -eq 0 ] || error "$@"; exit 1; } -failp() { [ $# -eq 0 ] || errorp "$@"; exit 1; } Usage() { cat < "${TEMP_D}/meta-data" + mdata="" + for kv in "instance-id:$instance_id" "local-hostname:$hostname" \ + "interfaces:${iface_data}" "dsmode:$dsmode"; do + key=${kv%%:*} + val=${kv#*:} + [ -n "$val" ] || continue + mdata="${mdata:+${mdata},${CR}}\"$key\": \"$val\"" + done + printf "{\n%s\n}\n" "$mdata" > "${TEMP_D}/meta-data" fi if [ "$userdata" = "-" ]; then @@ -143,11 +157,16 @@ esac [ "$output" = "-" ] && output="$TEMP_D/final" -qemu-img convert -f raw -O "$diskformat" "$img" "$output" || - fail "failed to convert to disk format $diskformat" +if [ "$diskformat" != "raw" ]; then + qemu-img convert -f raw -O "$diskformat" "$img" "$output" || + fail "failed to convert to disk format $diskformat" +else + cp "$img" "$output" || + fail "failed to copy image to $output" +fi [ "$output" != "$TEMP_D/final" ] || { cat "$output" && output="-"; } || fail "failed to write to -" -error "wrote ${output} with filesystem=$filesystem and diskformat=$diskformat" +debug 1 "wrote ${output} with filesystem=$filesystem and diskformat=$diskformat" # vi: ts=4 noexpandtab === modified file 'bin/cloud-publish-image' --- old/bin/cloud-publish-image 2012-12-03 15:11:53 +0000 +++ new/bin/cloud-publish-image 2013-08-21 14:13:02 +0000 @@ -64,6 +64,7 @@ specify 'none' for no ramdisk -R | --ramdisk-file f : bundle, upload, use file 'f' as ramdisk -B | --block-device-mapping m : specify block device mapping in bundle + --root-device-name r: pass '--root-device-name' in register EOF } @@ -79,7 +80,7 @@ debug() { local level=${1} shift; - [ "${level}" -ge "${VERBOSITY}" ] && return + [ "${level}" -gt "${VERBOSITY}" ] && return error "$(date):" "${@}" } run() { @@ -88,6 +89,8 @@ [ -e "${dir}/stamp.${pre}" ] && { debug 1 "skipping ${pre}"; return 0; } debug 1 "${msg}" + debug 2 "running:" "${@}" + echo "$@" > "${dir}/${pre}.cmd" "$@" > "${dir}/${pre}.stdout" 2> "${dir}/${pre}.stderr" && : > "${dir}/stamp.${pre}" && return 0 @@ -184,7 +187,7 @@ error "WARNING: '${0##*/}' is now to 'cloud${_n#uec}'. 
Please update your tools or docs" short_opts="B:h:k:K:l:no:r:R:t:vw:" -long_opts="add-launch:,allow-existing,block-device-mapping:,dry-run,help,hook-img:,kernel:,kernel-file:,name:,output:,image-to-raw,ramdisk:,ramdisk-file:,rename:,save-downloaded,type:,verbose,working-dir:" +long_opts="add-launch:,allow-existing,block-device-mapping:,dry-run,help,hook-img:,kernel:,kernel-file:,name:,output:,image-to-raw,ramdisk:,ramdisk-file:,rename:,root-device-name:,save-downloaded,type:,verbose,working-dir:" getopt_out=$(getopt --name "${0##*/}" \ --options "${short_opts}" --long "${long_opts}" -- "$@") && eval set -- "${getopt_out}" || @@ -210,6 +213,7 @@ image2raw=0 raw_image="" hook_img="" +rdname="" while [ $# -ne 0 ]; do cur=${1}; next=${2}; @@ -241,6 +245,7 @@ -R|--ramdisk-file) ramdisk_file=${next}; shift;; -n|--dry-run) dry_run=1;; --rename) rename=${next}; shift;; + --root-device-name) rdname=${next}; shift;; --save-downloaded) save_dl=1;; -t|--type) img_type=${next}; @@ -358,7 +363,7 @@ debug 1 "${EC2PRE}register seems not to support --name, not passing" name="" fi - + elif [ -z "${name}" -o "${name}" == "none" ]; then # if user passed in '--name=""' or '--name=none", do not pass --name name="" @@ -503,6 +508,10 @@ fail "failed to upload bundle to ${bucket}/${manifest}" junk="" img_id=""; + ex_register_args[${#ex_register_args[@]}]="--architecture=$arch" + [ -n "$rdname" ] && + ex_register_args[${#ex_register_args[@]}]="--root-device-name=$rdname" + run "${wdir}" "register" "register ${bucket}/${manifest}" \ ${EC2PRE}register ${name:+--name "${name}"} \ "${ex_register_args[@]}" "${bucket}/${manifest}" && === removed file 'bin/cloud-run-instances' --- old/bin/cloud-run-instances 2012-09-24 13:28:40 +0000 +++ new/bin/cloud-run-instances 1970-01-01 00:00:00 +0000 @@ -1,715 +0,0 @@ -#!/usr/bin/python -# -# Copyright (C) 2010 Canonical Ltd. -# -# Authors: Dustin Kirkland -# Scott Moser -# Clint Byrum -# Tom Ellis -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, version 3 of the License. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
- - -import os -import string -import sys -import signal -import re -import base64 -from optparse import OptionParser -from socket import getaddrinfo -import time -import logging -from paramiko import SSHClient, AutoAddPolicy, AuthenticationException -import paramiko -from subprocess import Popen, PIPE - -finished = "FINISHED" - -CC_IMPORT_SSH = """#cloud-config -runcmd: - - [ sudo, -Hu, ubuntu, sh, '-c', - "c=ssh-import-id; which $c >/dev/null || c=ssh-import-lp-id; $c $1", - "--", "%s" ] -""" - - -class SafeConnectException(Exception): - pass - - -class Instance(object): - pass - - -class TemporaryMissingHostKeyPolicy(AutoAddPolicy): - """ does not save to known_hosts, but does save the keys in an array """ - def __init__(self): - self._keys = [] - AutoAddPolicy.__init__(self) - - def missing_host_key(self, client, hostname, key): - self._keys.append(key) - - def getKeys(self): - return self._keys - - -class PermanentMissingHostKeyPolicy(TemporaryMissingHostKeyPolicy): - """ also has the behavor of the parent AutoAddPolicy """ - def missing_host_key(self, client, hostname, key): -#TemporaryMissingHostKeyPolicy.missing_host_key(self, client, hostname, key) - self._keys.append(key) - AutoAddPolicy.missing_host_key(self, client, hostname, key) - - -class ConsoleFingerprintScanner(object): - def __init__(self, instance_id, hostname, provider, options, sleeptime=30): - self.state = "working" - self.instance_id = instance_id - self.hostname = hostname - self.provider = provider - self.sleeptime = sleeptime - self.fingerprint = None - self.options = options - self.logger = logging.getLogger('console-scanner(%s)' % instance_id) - - def scan(self): - self.logger.debug('scraping fingerprints for instance_id = %s', - self.instance_id) - try: - while self.fingerprint is None: - console_data = self.get_console_output() - self.fingerprint = self.get_fingerprints_in_console_data( - console_data) - if self.fingerprint is not None: - self.fingerprint = (int(self.fingerprint[0]), - self.fingerprint[1], self.fingerprint[3]) - else: - self.logger.debug('sleeping %d seconds', - self.options.sleep_time) - time.sleep(self.options.sleep_time) - except None: - pass - return self.fingerprint - - def get_console_output(self): - cmd = '%s-get-console-output' % self.provider - args = [cmd] - args.append(self.instance_id) - - self.logger.debug('running %s', args) - rconsole = Popen(args, stdout=PIPE) - - ret = [] - try: - for line in rconsole.stdout: - ret.append(line.strip()) - finally: - cmdout = rconsole.wait() - - if bool(cmdout): - raise Exception('%s failed with return code = %d', cmd, cmdout) - - return ret - - def get_fingerprints_in_console_data(self, output): - # return an empty list on "no keys found" - # return a list of key fingerprint data on success - # where each key fingerprint data is an array like: - # (2048 c7:c8:1d:0f:d9:....0a:8a:fe localhost (RSA)) - begin_marker = "-----BEGIN SSH HOST KEY FINGERPRINTS----" - end_marker = "----END SSH HOST KEY FINGERPRINTS-----" - i = 0 - while i < len(output): - if output[i].find(begin_marker) > -1: - while i < len(output) and output[i].find(end_marker) == -1: - self.logger.debug(output[i].strip()) - toks = output[i].split(" ") - self.logger.debug(toks) - if len(toks) == 5: - # rip off "ec2:" - toks = toks[1:] - if len(toks) == 4 and toks[3] == "(RSA)": - self.logger.debug('found %s on line %d', toks, i) - return((toks)) - i = i + 1 - break - i = i + 1 - self.logger.debug( - 'did not find any fingerprints in output! 
(lines=%d)', i) - return None - - -class SshKeyScanner(object): - def __init__(self, instance_id, hostname, options, sleeptime=30): - self.state = "working" - self.instance_id = instance_id - self.hostname = hostname - self.sleeptime = sleeptime - self.fingerprint = None - self.keys = None - self.options = options - self.port = 22 - self.logger = logging.getLogger('ssh-key-scanner(%s)' % instance_id) - self.client = None - self.connected = False - - def scan(self): - self.logger.debug('getting fingerprints for %s', self.hostname) - try: - fingerprints = self.get_fingerprints_for_host() - self.logger.debug('fingerprints = %s', fingerprints) - if (len(fingerprints) > 0): - self.state = "finished" - self.fingerprint = fingerprints[0] - except None: - pass - return self.fingerprint - - def get_fingerprints_for_host(self): - # return an empty list on "no keys found" - # return a list of key fingerprint data on success - # where each key fingerprint data is an array like: - # (2048 c7:c8:1d:0f:d9:..:6f:0a:8a:fe localhost (RSA)) - - # use paramiko here - self.client = SSHClient() - client = self.client - client.set_log_channel('ssh-key-scanner(%s)' % self.instance_id) - - if self.options.known_hosts is not None: - policy = PermanentMissingHostKeyPolicy() - """ This step ensures we save the keys, otherwise that step will be - skipped in AutoAddPolicy.missing_host_key """ - for path in self.options.known_hosts: - if not os.path.isfile(path): - # if the file doesn't exist, then - # create it empty - fp = open(path, "w") - fp.close() - client.load_host_keys(path) - else: - policy = TemporaryMissingHostKeyPolicy() - client.set_missing_host_key_policy(policy) - - pkey = None - if self.options.privkey is not None: - # TODO support password protected key file - pkey = paramiko.RSAKey.from_private_key_file(self.options.privkey) - - retries = 0 - - allkeys = [] - - while 1: - try: - client.connect(self.hostname, self.port, - username=self.options.ssh_user, pkey=pkey) - self.connected = True - break - except AuthenticationException as (message): - self.logger.warning('auth failed (non fatal) %s', message) - break - except Exception as (e): - retries += 1 - if retries > 5: - raise Exception('gave up after retrying ssh %d times' % - retries) - self.logger.info(e) - self.logger.debug('retry #%d... 
sleeping %d seconds..', - retries, self.options.sleep_time) - time.sleep(self.options.sleep_time) - - rlist = [] - - allkeys.extend(policy.getKeys()) - allkeys.append(client.get_transport().get_remote_server_key()) - - for key in allkeys: - - if type(key) == paramiko.RSAKey or type(key) == paramiko.PKey: - keytype = '(RSA)' - elif type(key) == paramiko.DSSKey: - keytype = '(DSA)' - else: - raise Exception('Cannot handle type %s == %s' % - (type(key).__name__, key)) - - fp = key.get_fingerprint().encode("hex") - fp = ':'.join(re.findall('..', fp)) - rlist.append((key.get_bits(), fp, keytype)) - - return rlist - - def run_commands(self): - if (self.options.ssh_run_cmd is not None and - len(self.options.ssh_run_cmd)): - if not self.connected: - self.logger.critical('cannot run command, ssh did not connect') - sys.exit(1) - ecmd = ' '.join(self.options.ssh_run_cmd) - self.logger.debug('running %s', ecmd) - inouterr = self.client.exec_command(ecmd) - try: - for line in inouterr[1]: - print line, - except: - pass - try: - for line in inouterr[2]: - print >> sys.stderr(line) - except: - pass - - if self.connected: - self.client.close() - self.connected = False - - -def get_auto_instance_type(ami_id, provider): - cmd = '%s-describe-images' % provider - args = [cmd, ami_id] - logging.debug('running %s', args) - rimages = Popen(args, stdout=PIPE) - deftype = {'i386': 'm1.small', 'x86_64': 'm1.large'} - - try: - for line in rimages.stdout: - # Just in case there are %'s, don't confusee logging - # XXX print these out instead - logging.debug(line.replace('%', '%%').strip()) - parts = line.split("\t") - if parts[0] == 'IMAGE': - itype = parts[7] - if itype in deftype: - logging.info('auto instance type = %s', deftype[itype]) - return deftype[itype] - finally: - rcode = rimages.wait() - - logging.warning('ami not found, returning default m1.small') - return("m1.small") - - -def timeout_handler(signum, frame): - logging.critical('timeout reached, exiting') - sys.exit(1) - - -def handle_runargs(option, opt_str, value, parser): - delim = getattr(parser.values, "runargs_delim", None) - cur = getattr(parser.values, "runargs", []) - if cur is None: - cur = [] - cur.extend(value.split(delim)) - setattr(parser.values, "runargs", cur) - return - - -def main(): - parser = OptionParser( - usage="usage: %prog [options] ids|(-- raw args for provider scripts)") - parser.add_option("-t", "--instance-type", dest="inst_type", - help="instance type", metavar="TYPE", - default="auto") - parser.add_option("-k", "--key", dest="keypair_name", - help="keypair name", metavar="TYPE", - default="auto") - parser.add_option("-n", "--instance-count", dest="count", - help="instance count", metavar="TYPE", type="int", - default=1) - parser.add_option("", "--ssh-privkey", dest="privkey", - help="private key to connect with (ssh -i)", metavar="id_rsa", - default=None) - parser.add_option("", "--ssh-pubkey", dest="pubkey", - help="public key to insert into image)", metavar="id_rsa.pub", - default=None) - parser.add_option("", "--ssh-run-cmd", dest="ssh_run_cmd", - action="append", nargs=0, - help="run this command when ssh'ing", default=None) - parser.add_option("", "--ssh-user", dest="ssh_user", - help="connect with ssh as user", default=None) - parser.add_option("", "--associate-ip", dest="ip", - help="associate elastic IP with instance", metavar="IP_ADDR", - default=None) - parser.add_option("", "--attach-volume", dest="vol", - help="attach EBS volume with instance", metavar="VOLUME_ID", - default=None) - parser.add_option("", 
"--known-hosts", dest="known_hosts", action="append", - metavar="KnownHosts", default=None, - help="write host keys to specified known_hosts file. " - "Specify multiple times to read keys from multiple files " - "(only updates last one)") - parser.add_option("-l", "--launchpad-id", dest="launchpad_id", - action="append", metavar="lpid", default=None, - help="launchpad ids to pull SSH keys from " - "(multiple times adds to the list)") - parser.add_option("-i", "--instance-ids", dest="instance_ids", - action="store_true", default=False, - help="expect instance ids instead of ami ids," - "skips -run-instances") - parser.add_option("", "--all-instances", dest="all_instances", - action="store_true", default=False, - help="query all instances already defined " - "(running/pending/terminated/etc)") - parser.add_option("", "--run-args", dest="runargs", action="callback", - callback=handle_runargs, type="string", - help="pass option through to run-instances") - parser.add_option("", "--run-args-delim", dest="runargs_delim", - help="split run-args options with delimiter", - default=None) - parser.add_option("", "--verify-ssh", dest="verify_ssh", - action="store_true", - help="verify SSH keys against console output (implies --wait-for=ssh)", - default=False) - parser.add_option("", "--wait-for", dest="wait_for", - help="wait for one of: ssh , running", default=None) - parser.add_option("-p", "--provider", dest="provider", - help="either euca or ec2", default=None) - parser.add_option("-v", "--verbose", action="count", dest="loglevel", - help="increase logging level", default=3) - parser.add_option("-q", "--quiet", action="store_true", dest="quiet", - help="produce no output or error messages", default=False) - parser.add_option("", "--sleep-time", dest="sleep_time", - help="seconds to sleep between polling", default=2) - parser.add_option("", "--teardown", dest="teardown", action="store_true", - help="terminate instances at the end", default=False) - - (options, args) = parser.parse_args() - - if (os.path.basename(sys.argv[0]).startswith("uec") and - os.getenv("CLOUD_UTILS_WARN_UEC", "0") == "0"): - sys.stderr.write("WARNING: '%s' is now 'cloud-run-instances'. %s\n" % - (os.path.basename(sys.argv[0]), "Please update tools or docs")) - - if len(args) < 1 and not options.all_instances: - parser.error('you must pass at least one ami ID') - - # loglevel should be *reduced* every time -v is passed, - # see logging docs for more - if options.quiet: - sys.stderr = open('/dev/null', 'w') - sys.stdout = sys.stderr - else: - loglevel = 6 - options.loglevel - if loglevel < 1: - loglevel = 1 - # logging module levels are 0,10,20,30 ... 
- loglevel = loglevel * 10 - - logging.basicConfig(level=loglevel, - format="%(asctime)s %(name)s/%(levelname)s: %(message)s", - stream=sys.stderr) - - logging.debug("loglevel = %d", loglevel) - - provider = options.provider - if options.provider is None: - provider = os.getenv('EC2PRE', 'euca') - - if options.ssh_run_cmd == [()]: - options.ssh_run_cmd = args - - if options.known_hosts is None: - options.known_hosts = [os.path.expanduser('~/.ssh/known_hosts')] - - if options.known_hosts is not None and len(options.known_hosts): - path = None - for path in options.known_hosts: - if not os.access(path, os.R_OK): - logging.warning('known_hosts file %s is not readable!', path) - # paramiko writes to the last one - if not os.access(path, os.W_OK): - logging.critical('known_hosts file %s is not writable!', path) - - logging.debug("provider = %s", provider) - - logging.debug("instance type is %s", options.inst_type) - - if options.instance_ids or options.all_instances: - - if options.all_instances: - pending_instance_ids = [''] - else: - pending_instance_ids = args - - else: - - if len(args) < 1: - raise Exception('you must pass at least one AMI ID') - - ami_id = args[0] - del(args[0]) - - logging.debug("ami_id = %s", ami_id) - - if options.inst_type == "auto": - options.inst_type = get_auto_instance_type(ami_id, provider) - - pending_instance_ids = [] - - cmd = '%s-run-instances' % provider - - run_inst_args = [cmd] - - # these variables pass through to run-instances - run_inst_pt = { - "instance-count": options.count, - "instance-type": options.inst_type, - "key": options.keypair_name, - } - - for key, val in run_inst_pt.iteritems(): - if key is not None and key != "": - run_inst_args.append("--%s=%s" % (key, val)) - - if options.launchpad_id: - run_inst_args.append('--user-data') - run_inst_args.append(CC_IMPORT_SSH % - ' '.join(options.launchpad_id)) - - if options.runargs is not None: - run_inst_args.extend(options.runargs) - - run_inst_args.append(ami_id) - - # run-instances with pass through args - logging.debug("executing %s", run_inst_args) - logging.info("starting instances with ami_id = %s", ami_id) - - rinstances = Popen(run_inst_args, stdout=PIPE) - #INSTANCE i-32697259 ami-2d4aa444 pending\ - # 0 m1.small 2010-06-18T18:28:21+0000\ - # us-east-1b aki-754aa41c \ - # monitoring-disabled instance-store - try: - for line in rinstances.stdout: - # Just in case there are %'s, don't confusee logging - # XXX print these out instead - logging.debug(line.replace('%', '%%').strip()) - parts = line.split("\t") - if parts[0] == 'INSTANCE': - pending_instance_ids.append(parts[1]) - finally: - rcode = rinstances.wait() - - logging.debug("command returned %d", rcode) - logging.info("instances started: %s", pending_instance_ids) - - if bool(rcode): - raise Exception('%s failed' % cmd) - - if len(pending_instance_ids) < 1: - raise Exception('no instances were started!') - - cmd = '%s-describe-instances' % provider - - instances = [] - - timeout_date = time.time() + 600 - - signal.signal(signal.SIGALRM, timeout_handler) - signal.alarm(600) - - logging.debug("timeout at %s", time.ctime(timeout_date)) - - # We must wait for ssh to run commands - if options.verify_ssh and not options.wait_for == 'ssh': - logging.info('--verify-ssh implies --wait-for=ssh') - options.wait_for = 'ssh' - - if options.ssh_run_cmd and not options.wait_for == 'ssh': - logging.info('--ssh-run-cmd implies --wait-for=ssh') - options.wait_for = 'ssh' - - while len(pending_instance_ids): - new_pending_instance_ids = [] - 
describe_inst_args = [cmd] - - # remove '', confuses underlying commands - pids = [] - for iid in pending_instance_ids: - if len(iid): - pids.append(iid) - if len(pids): - describe_inst_args.extend(pending_instance_ids) - - logging.debug('running %s', describe_inst_args) - rdescribe = Popen(describe_inst_args, stdout=PIPE) - try: - for line in rdescribe.stdout: - logging.debug(line.replace('%', '%%').strip()) - parts = line.split("\t") - if parts[0] == 'INSTANCE': - iid = parts[1] - istatus = parts[5] - if istatus == 'terminated': - logging.debug('%s is terminated, ignoring...', iid) - elif istatus != 'running' and options.wait_for: - logging.warning('%s is %s', iid, istatus) - new_pending_instance_ids.append(iid) - elif istatus != 'running' and options.vol: - logging.warning('%s is %s', iid, istatus) - new_pending_instance_ids.append(iid) - else: - logging.info("%s %s", iid, istatus) - inst = Instance() - inst.id = iid - inst.hostname = parts[3] - inst.output = line - instances.append(inst) - finally: - rcode = rdescribe.wait() - - pending_instance_ids = new_pending_instance_ids - - logging.debug("command returned %d", rcode) - logging.debug("pending instances: %s", pending_instance_ids) - - if bool(rcode): - raise Exception('%s failed' % cmd) - - if len(pending_instance_ids): - logging.debug('sleeping %d seconds', options.sleep_time) - time.sleep(options.sleep_time) - - if options.ip: - ips = options.ip.split(',') - if len(ips) < len(instances): - logging.warning( - 'only %d ips given, some instances will not get an ip', - len(ips)) - elif len(ips) > len(instances): - logging.warning('%d ips given, some ips will not be associated', - len(ips)) - - rcmds = [] - ips.reverse() - for inst in instances: - cmd = '%s-associate-address' % provider - if len(ips) < 1: - break - ip = ips.pop() - aargs = [cmd, '-i', inst.id, ip] - logging.debug('running %s', aargs) - rassociate = Popen(aargs, stdout=PIPE) - rcmds.append(rassociate) - for rcmd in rcmds: - # dump stdin into the inst object - try: - for line in rcmd.stdout: - logging.info(line) - finally: - ret = rcmd.wait() - if bool(ret): - logging.debug('associate-ip returned %d', ret) - - if options.vol: - # as you can start multiple instances, support multiple vols like ips, - # instead of multiple volumes on one instance - vols = options.vol.split(',') - if len(vols) < len(instances): - logging.warning('only %d volumes given, some instances will not' - ' get a volume attached', len(vols)) - elif len(vols) > len(instances): - logging.warning( - '%d volumes given, some volumes will not be associated', - len(vols)) - - rcmds = [] - vols.reverse() - for inst in instances: - # instance needs to be 'running' not 'pending' before attaching - # volume, otherwise it fails - logging.info('waiting for instance to run') - cmd = '%s-attach-volume' % provider - if len(vols) < 1: - break - vol = vols.pop() - dev = '/dev/sdb' - args = [cmd, '-i', inst.id, '-d', dev, vol] - logging.debug('running %s', args) - logging.info("attaching volume with id = %s to instance id = %s", - vol, inst.id) - rattach = Popen(args, stdout=PIPE) - rcmds.append(rattach) - for rcmd in rcmds: - # dump stdin into the inst object - try: - for line in rcmd.stdout: - logging.info(line) - finally: - ret = rcmd.wait() - if bool(ret): - logging.debug('attach-volume returned %d', ret) - - if options.wait_for == 'ssh': - logging.info('waiting for ssh access') - for inst in instances: - pid = os.fork() - if pid == 0: - ssh_key_scan = SshKeyScanner(inst.id, inst.hostname, options) - 
ssh_fingerprint = ssh_key_scan.scan() - if options.verify_ssh: - # For ec2, it can take 3.5 minutes or more to get console - # output, do this last, and only if we have to. - cons_fp_scan = ConsoleFingerprintScanner(inst.id, - inst.hostname, provider, options) - console_fingerprint = cons_fp_scan.scan() - - if console_fingerprint == ssh_fingerprint: - logging.debug('fingerprint match made for iid = %s', - inst.id) - else: - fmt = 'fingerprints do not match for iid = %s' - raise Exception(fmt % inst.id) - ssh_key_scan.run_commands() - raise SystemExit - else: - logging.debug('child pid for %s is %d', inst.id, pid) - inst.child = pid - logging.info('Waiting for %d children', len(instances)) - final_instances = [] - - for inst in instances: - try: - (pid, status) = os.waitpid(inst.child, 0) - except: - logging.critical('%s - %d doesn\'t exist anymore?', inst.id, - pid) - logging.debug('%d returned status %d', pid, status) - if not bool(status): - final_instances.append(inst) - instances = final_instances - - """ If we reach here, all has happened in the expected manner so - we should produce the expected output which is instance-id\\tip\\n """ - - final_instance_ids = [] - for inst in instances: - final_instance_ids.append(inst.id) - - if options.teardown: - terminate = ['%s-terminate-instances' % provider] - terminate.extend(final_instance_ids) - logging.debug('running %s', terminate) - logging.info('terminating instances...') - rterm = Popen(terminate, stdout=sys.stderr, stderr=sys.stderr) - rterm.wait() - - -if __name__ == "__main__": - main() - -# vi: ts=4 expandtab === modified file 'bin/growpart' --- old/bin/growpart 2013-03-21 08:40:05 +0000 +++ new/bin/growpart 2014-02-26 16:02:41 +0000 @@ -63,9 +63,13 @@ if ${RESTORE_FUNC} ; then error "***** Appears to have gone OK ****" else - error "***** FAILED! or original partition table" \ - "looked like: ****" - cat "${RESTORE_HUMAN}" 1>&2 + error "***** FAILED! ******" + if [ -n "${RESTORE_HUMAN}" -a -f "${RESTORE_HUMAN}" ]; then + error "**** original table looked like: ****" + cat "${RESTORE_HUMAN}" 1>&2 + else + error "We seem to have not saved the partition table!" + fi fi fi [ -z "${TEMP_D}" -o ! -d "${TEMP_D}" ] || rm -Rf "${TEMP_D}" @@ -147,8 +151,8 @@ } mbr_resize() { - RESTORE_HUMAN="${TEMP_D}/recovery" - MBR_BACKUP="${TEMP_D}/orig.save" + local humanpt="${TEMP_D}/recovery" + local mbr_backup="${TEMP_D}/orig.save" local change_out=${TEMP_D}/change.out local dump_out=${TEMP_D}/dump.out @@ -156,6 +160,7 @@ local dump_mod=${TEMP_D}/dump.mod local tmp="${TEMP_D}/tmp.out" local err="${TEMP_D}/err.out" + local mbr_max_512="4294967296" local _devc cyl _w1 heads _w2 sectors _w3 tot dpart local pt_start pt_size pt_end max_end new_size change_info @@ -170,17 +175,23 @@ tot=$((${cyl}*${heads}*${sectors})) debug 1 "geometry is ${MBR_CHS}. total size=${tot}" + [ "$tot" -gt "$mbr_max_512" ] && + debug 1 "WARN: disk is larger than 2TB. additional space will go unused." + rqe sfd_dump sfdisk ${MBR_CHS} --unit=S --dump "${DISK}" \ >"${dump_out}" || fail "failed to dump sfdisk info for ${DISK}" + RESTORE_HUMAN="$dump_out" { echo "## sfdisk ${MBR_CHS} --unit=S --dump ${DISK}" cat "${dump_out}" - } >"${RESTORE_HUMAN}" + } >"$humanpt" + [ $? 
-eq 0 ] || fail "failed to save sfdisk -d output" + RESTORE_HUMAN="$humanpt" - debugcat 1 "${RESTORE_HUMAN}" + debugcat 1 "$humanpt" sed -e 's/,//g; s/start=/start /; s/size=/size /' "${dump_out}" \ >"${dump_mod}" || @@ -210,6 +221,10 @@ [ -n "${max_end}" ] || fail "failed to get max_end for partition ${PART}" + if [ "$max_end" -gt "$mbr_max_512" ]; then + max_end=$mbr_max_512; + fi + debug 1 "max_end=${max_end} tot=${tot} pt_end=${pt_end}" \ "pt_start=${pt_start} pt_size=${pt_size}" [ $((${pt_end})) -eq ${max_end} ] && @@ -237,8 +252,9 @@ exit 0 fi + MBR_BACKUP="${mbr_backup}" LANG=C sfdisk --no-reread "${DISK}" ${MBR_CHS} --force \ - -O "${MBR_BACKUP}" <"${new_out}" >"${change_out}" 2>&1 + -O "${mbr_backup}" <"${new_out}" >"${change_out}" 2>&1 ret=$? [ $ret -eq 0 ] || RESTORE_FUNC="mbr_restore" @@ -296,6 +312,7 @@ # used in case something goes wrong and human interaction is required # to revert any changes. rqe sgd_info sgdisk "--info=${PART}" --print "${DISK}" >"${pt_info}" || + fail "${dev}: failed to dump original sgdisk info" RESTORE_HUMAN="${pt_info}" debug 1 "$dev: original sgdisk info:" @@ -357,6 +374,8 @@ fail "${dev}: failed to parse sgdisk details" debug 1 "${dev}: code=${code} guid=${guid} name='${name}'" + local wouldrun="" + [ "$DRY_RUN" -ne 0 ] && wouldrun="would-run" # Calculate the new size of the partition new_size=$((${pt_max} - ${pt_start})) @@ -364,11 +383,8 @@ new="new: size=${new_size},end=${pt_max}" change_info="${dev}: start=${pt_start} ${old} ${new}" - # Dry run - [ "${DRY_RUN}" -ne 0 ] && change "${change_info}" - # Backup the current partition table, we're about to modify it - rq sgd_backup sgdisk "--backup=${GPT_BACKUP}" "${DISK}" || + rq sgd_backup $wouldrun sgdisk "--backup=${GPT_BACKUP}" "${DISK}" || fail "${dev}: failed to backup the partition table" # Modify the partition table. We do it all in one go (the order is @@ -379,16 +395,19 @@ # - set the partition code # - set the partition GUID # - set the partition name - rq sgdisk_mod sgdisk --move-second-header "--delete=${PART}" \ + rq sgdisk_mod $wouldrun sgdisk --move-second-header "--delete=${PART}" \ "--new=${PART}:${pt_start}:${pt_max}" \ "--typecode=${PART}:${code}" \ "--partition-guid=${PART}:${guid}" \ "--change-name=${PART}:${name}" "${DISK}" && - rq pt_update pt_update "$DISK" "$PART" || { + rq pt_update $wouldrun pt_update "$DISK" "$PART" || { RESTORE_FUNC=gpt_restore fail "${dev}: failed to repartition" } + # Dry run + [ "${DRY_RUN}" -ne 0 ] && change "${change_info}" + changed "${change_info}" } @@ -418,7 +437,20 @@ local label="$1" ret="" efile="" efile="$TEMP_D/$label.err" shift; - debug 2 "running[$label][$_capture]" "$@" + + local rlabel="running" + [ "$1" = "would-run" ] && rlabel="would-run" && shift + + local cmd="" x="" + for x in "$@"; do + [ "${x#* }" != "$x" -o "${x#* \"}" != "$x" ] && x="'$x'" + cmd="$cmd $x" + done + cmd=${cmd# } + + debug 2 "$rlabel[$label][$_capture]" "$cmd" + [ "$rlabel" = "would-run" ] && return 0 + if [ "${_capture}" = "erronly" ]; then "$@" 2>"$TEMP_D/$label.err" ret=$? @@ -448,10 +480,18 @@ fi if command -v partx >/dev/null 2>&1; then - partx --help | grep -q -- --update || { - reason="partx has no '--update' flag in usage." + local out="" ret=0 + out=$(partx --help 2>&1) + ret=$? + if [ $ret -eq 0 ]; then + echo "$out" | grep -q -- --update || { + reason="partx has no '--update' flag in usage." + found="off" + } + else + reason="'partx --help' returned $ret. assuming it is old." 
found="off" - } + fi else reason="no 'partx' command" found="off" @@ -498,6 +538,8 @@ if ! $update; then return 0 fi + # partx only works on block devices (do not run on file) + [ -b "$dev" ] || return 0 partx --update "$part" "$dev" } === added file 'bin/mount-image-callback' --- old/bin/mount-image-callback 1970-01-01 00:00:00 +0000 +++ new/bin/mount-image-callback 2014-04-03 16:20:23 +0000 @@ -0,0 +1,279 @@ +#!/bin/bash + +VERBOSITY=0 +TEMP_D="" +UMOUNT="" +QEMU_DISCONNECT="" + +error() { echo "$@" 1>&2; } + +Usage() { + cat <&2; [ $# -eq 0 ] || error "$@"; exit 1; } +cleanup() { + if [ -n "$UMOUNT" ]; then + umount_r "$UMOUNT" || + error "WARNING: unmounting filesystems failed!" + fi + if [ -n "$QEMU_DISCONNECT" ]; then + local out="" + out=$(qemu-nbd --disconnect "$QEMU_DISCONNECT" 2>&1) || { + error "warning: failed: qemu-nbd --disconnect $QEMU_DISCONNECT" + error "$out" + } + fi + [ -z "${TEMP_D}" -o ! -d "${TEMP_D}" ] || + rm --one-file-system -Rf "${TEMP_D}" || + error "removal of temp dir failed!" +} + +debug() { + local level="$1"; shift; + [ "${level}" -gt "${VERBOSITY}" ] && return + error "${@}" +} + +mount_callback_umount() { + local img_in="$1" dev="" out="" mp="" ret="" img="" ro="" + local opts="" bmounts="" system_resolvconf=false + + short_opts="dhpsv" + long_opts="dev,help,proc,read-only,sys,system-mounts,system-resolvconf,verbose" + getopt_out=$(getopt --name "${0##*/}" \ + --options "${short_opts}" --long "${long_opts}" -- "$@") && + eval set -- "${getopt_out}" || + { bad_Usage; return 1; } + + while [ $# -ne 0 ]; do + cur=${1}; next=${2}; + case "$cur" in + -d|--dev) bmounts="${bmounts:+${bmounts} }/dev";; + -h|--help) Usage ; exit 0;; + -p|--proc) bmounts="${bmounts:+${bmounts} }/proc";; + -s|--sys) bmounts="${bmounts:+${bmounts} }/sys";; + --system-mounts) bmounts="/dev /proc /sys";; + --system-resolvconf) system_resolvconf=true;; + -v|--verbose) VERBOSITY=$((${VERBOSITY}+1));; + --opts) opts="${opts} $next"; shift;; + --read-only) ro="ro";; + --) shift; break;; + esac + shift; + done + + [ $# -ge 2 ] || { bad_Usage "must provide image and cmd"; return 1; } + + [ -n "$ro" ] && $system_resolvconf && { + error "--read-only is incompatible with system-resolvconf"; + return 1; + } + + img_in="$1" + shift 1 + + img=$(readlink -f "$img_in") || + { error "failed to get full path to $img_in"; return 1; } + + [ "$(id -u)" = "0" ] || + { error "sorry, must be root"; return 1; } + + TEMP_D=$(mktemp -d "${TMPDIR:-/tmp}/${0##*/}.XXXXXX") || + { error "failed to make tempdir"; return 1; } + trap cleanup EXIT + + mp="${TEMP_D}/mp" + + mkdir "$mp" || return + + local cmd="" arg="" found=false + cmd=( ) + for arg in "$@"; do + if [ "${arg}" = "_MOUNTPOINT_" ]; then + debug 1 "replaced string _MOUNTPOINT_ in arguments arg ${#cmd[@]}" + arg=$mp + fi + cmd[${#cmd[@]}]="$arg" + done + + if [ "${cmd[0]##*/}" = "bash" -o "${cmd[0]##*/}" = "sh" ] && + [ ${#cmd[@]} -eq 0 ]; then + debug 1 "invoking shell ${cmd[0]}" + error "MOUNTPOINT=$mp" + fi + + local hasqemu=false + command -v "qemu-nbd" >/dev/null 2>&1 && hasqemu=true + + if out=$(set -f; mount -o loop${ro:+,$ro} $opts \ + "$img" "$mp" 2>&1); then + debug 1 "mounted simple filesystem image '$img_in'" + UMOUNT="$mp" + else + if ! $hasqemu; then + error "simple mount of '$img_in' failed." + error "if this not a raw image, or it is partitioned" + error "you must have qemu-nbd (apt-get install qemu-utils)" + error "mount failed with: $out" + return 1 + fi + fi + + if [ -z "$UMOUNT" ]; then + if [ ! -e /sys/block/nbd0 ] && ! 
grep -q nbd /proc/modules; then + debug 1 "trying to load nbd module" + modprobe nbd >/dev/null 2>&1 + udevadm settle >/dev/null 2>&1 + fi + [ -e /sys/block/nbd0 ] || { + error "no nbd kernel support, but simple mount failed" + return 1; + } + + local f nbd="" + for f in /sys/block/nbd*; do + [ -d "$f" -a ! -f "$f/pid" ] && nbd=${f##*/} && break + done + if [ -z "$nbd" ]; then + error "failed to find an nbd device" + return 1; + fi + nbd="/dev/$nbd" + + if ! qemu-nbd --connect "$nbd" "$img"; then + error "failed to qemu-nbd connect $img to $nbd" + return 1 + fi + QEMU_DISCONNECT="$nbd" + + local pfile="/sys/block/${nbd#/dev/}/pid" + if [ ! -f "$pfile" ]; then + debug 1 "waiting on pidfile for $nbd in $pfile" + local i=0 + while [ ! -f "$pfile" ] && i=$(($i+1)); do + if [ $i -eq 200 ]; then + error "giving up on pidfile $pfile for $nbd" + return 1 + fi + sleep .1 + debug 2 "." + done + fi + + debug 1 "connected $img_in to $nbd. now udev-settling" + udevadm settle >/dev/null 2>&1 + + local mdev="$nbd" + if [ -b "${nbd}p1" ]; then + mdev="${nbd}p1" + fi + if ( set -f; mount ${ro:+-o ${ro}} $opts "$mdev" "$mp" ) && + UMOUNT="$mp"; then + debug 1 "mounted $mdev via qemu-nbd $nbd" + else + local pid="" pfile="/sys/block/${nbd#/dev/}/pid" + { read pid < "$pfile" ; } >/dev/null 2>&1 + [ -n "$pid" -a ! -d "/proc/$pid" ] || + error "qemu-nbd process seems to have died. was '$pid'" + + qemu-nbd --disconnect "$nbd" && QEMU_DISCONNECT="" + error "failed to mount $mdev" + return 1 + fi + + fi + + local bindmp="" + for bindmp in $bmounts; do + [ -d "$mp${bindmp}" ] || mkdir "$mp${bindmp}" || + { error "failed mkdir $bindmp in mount"; return 1; } + mount --bind "$bindmp" "$mp/${bindmp}" || + { error "failed bind mount '$bindmp'"; return 1; } + done + + if ${system_resolvconf}; then + local rcf="$mp/etc/resolv.conf" + debug 1 "replacing /etc/resolvconf" + if [ -e "$rcf" -o -L "$rcf" ]; then + local trcf="$rcf.${0##*/}.$$" + rm -f "$trcf" && + mv "$rcf" "$trcf" && ORIG_RESOLVCONF="$trcf" || + { error "failed mv $rcf"; return 1; } + fi + cp "/etc/resolv.conf" "$rcf" || + { error "failed copy /etc/resolv.conf"; return 1; } + fi + + debug 1 "invoking: MOUNTPOINT=$mp" "${cmd[@]}" + MOUNTPOINT="$mp" "${cmd[@]}" + ret=$? + + if ${system_resolvconf}; then + local rcf="$mp/etc/resolv.conf" + cmp --quiet "/etc/resolv.conf" "$rcf" >/dev/null || + error "WARN: /etc/resolv.conf changed in image!" + rm "$rcf" && + { [ -z "$ORIG_RESOLVCONF" ] || mv "$ORIG_RESOLVCONF" "$rcf"; } || + { error "failed to restore /etc/resolv.conf"; return 1; } + fi + + debug 1 "cmd returned $ret. 
unmounting $mp" + umount_r "$mp" || { error "failed umount $img"; return 1; } + UMOUNT="" + rmdir "$mp" + + if [ -n "$QEMU_DISCONNECT" ]; then + local out="" + out=$(qemu-nbd --disconnect "$QEMU_DISCONNECT" 2>&1) && + QEMU_DISCONNECT="" || { + error "failed to disconnect $QEMU_DISCONNECT"; + error "$out" + return 1; + } + fi + return $ret +} + +mount_callback_umount "$@" + +# vi: ts=4 noexpandtab === modified file 'bin/ubuntu-cloudimg-query' --- old/bin/ubuntu-cloudimg-query 2013-02-04 16:32:56 +0000 +++ new/bin/ubuntu-cloudimg-query 2014-02-06 13:19:49 +0000 @@ -5,7 +5,8 @@ NAME="ubuntu-cloudimg-query" DOT_D="$HOME/.$NAME" CACHE_D="$HOME/.cache/$NAME" -KNOWN_RELEASES="hardy karmic lucid maverick natty oneiric precise quantal raring"; +KNOWN_RELEASES="hardy karmic lucid maverick natty oneiric precise quantal + raring trusty"; cachelife=86400 error() { echo "$@" 1>&2; } @@ -23,6 +24,7 @@ -o | --output FILE output to file rather than stdout -f | --format format change output to 'format'. default: '%{ami}\n' + --arch ARCH use the specified arch Examples: - get the latest ami matching default criteria for release 'n' @@ -122,7 +124,7 @@ } short_opts="f:ho:v" -long_opts="format:,help,no-cache,output:,verbose" +long_opts="arch:,format:,help,no-cache,output:,verbose" getopt_out=$(getopt --name "${0##*/}" \ --options "${short_opts}" --long "${long_opts}" -- "$@") && eval set -- "${getopt_out}" || @@ -134,7 +136,7 @@ burl="${UBUNTU_CLOUDIMG_QUERY_BASEURL:-https://cloud-images.ubuntu.com/query}" store="ebs" region_default="${EC2_REGION:-us-east-1}" -release="lucid" +release="precise" arch="amd64" stream="released" bname="server" @@ -150,6 +152,7 @@ while [ $# -ne 0 ]; do cur=${1}; next=${2}; case "$cur" in + --arch) arch="$next"; shift;; -h|--help) Usage ; exit 0;; -f|--format) format=${2}; shift;; -o|--output) output=${2}; shift;; @@ -169,7 +172,9 @@ rel*) stream="released";; daily) stream=${i};; server|desktop) bname=${i};; - i386|amd64|x86_64) arch=${i}; [ "${i}" = "x86_64" ] && arch="amd64";; + i386|amd64|x86_64|armhf|ppc64el|arm64) + arch=${i}; + [ "${i}" = "x86_64" ] && arch="amd64";; *-*-[0-9]) region=${i};; ebs) store="$i";; instance|instance-store) store="instance-store";; @@ -249,7 +254,13 @@ $6 == arch && $7 == region && $11 == ptype { print $8 }' \ "release=$release" "bname=${bname}" \ "store=$store" "arch=$arch" "region=$region" "ptype=$ptype" \ - "${ec2_curf}") && [ -n "$ami" ] || fail "failed to find ami" + "${ec2_curf}") + +if [ -z "$ami" ]; then + amifmt="%{ami}" + [ "$format" = "${format#*${amifmt}}" ] || + fail "no matching ami id found, but '%{ami}' in output format" +fi case "$arch:$store:$ptype" in *:hvm) itypes_all="${itypes_hvm}";; === modified file 'bin/ubuntu-ec2-run' --- old/bin/ubuntu-ec2-run 2013-02-04 16:33:18 +0000 +++ new/bin/ubuntu-ec2-run 2014-04-07 12:43:08 +0000 @@ -20,7 +20,7 @@ # along with this program. If not, see . 
KNOWN_RELEASES = ["lucid", "maverick", "natty", "oneiric", "precise", - "quantal", "raring"] + "quantal", "raring", "trusty"] USAGE = """ Usage: ubuntu-ec2-run [ options ] arguments @@ -76,6 +76,42 @@ "hvm", "paravirtual", "pv", ] +SSD = "ssd" +SPIN = "spin" + +# cleaned from http://aws.amazon.com/ec2/instance-types/ +# (vcpu, compute-units, mem, disknum, disksize, diskback) +SIZE_DATA = { + 'm3.medium': (1,3,3.75,1,4,SSD), + 'm3.large': (2,6.5,7.5,1,32,SSD), + 'm3.xlarge': (4,13,15,2,40,SSD), + 'm3.2xlarge': (8,26,30,2,80,SSD), + 'm1.small': (1,1,1.7,1,160,SPIN), + 'm1.medium': (1,2,3.75,1,410,SPIN), + 'm1.large': (2,4,7.5,2,420,SPIN), + 'm1.xlarge': (4,8,15,4,420,SPIN), + 'c3.large': (2,7,3.75,2,16,SSD), + 'c3.xlarge': (4,14,7.5,2,40,SSD), + 'c3.2xlarge': (8,28,15,2,80,SSD), + 'c3.4xlarge': (16,55,30,2,160,SSD), + 'c3.8xlarge': (32,108,60,2,320,SSD), + 'c1.medium': (2,5,1.7,1,350,SPIN), + 'c1.xlarge': (8,20,7,4,420,SPIN), + 'cc2.8xlarge': (32,88,60.5,4,840,SPIN), + 'g2.2xlarge': (8,26,15,1,60,SSD), + 'cg1.4xlarge': (16,33.5,22.5,2,840,SPIN), + 'm2.xlarge': (2,6.5,17.1,1,420,SPIN), + 'm2.2xlarge': (4,13,34.2,1,850,SPIN), + 'm2.4xlarge': (8,26,68.4,2,840,SPIN), + 'cr1.8xlarge': (32,88,244,2,120,SSD), + 'i2.xlarge': (4,14,30.5,1,800,SSD), + 'i2.2xlarge': (8,27,61,2,800,SSD), + 'i2.4xlarge': (16,53,122,4,800,SSD), + 'i2.8xlarge': (32,104,244,8,800,SSD), + 'hs1.8xlarge': (16,35,117,2,2048,SPIN), + 'hi1.4xlarge': (16,35,60.5,2,1024,SSD), + 't1.micro': (1,.1,0.615,0,0,None), +} def get_argopt(args, optnames): ret = None @@ -101,27 +137,13 @@ def get_block_device_mappings(itype): - # cleaned from http://aws.amazon.com/ec2/instance-types/ - # t1.micro 0 # m1.large 850 # cg1.4xlarge 1690 - # m1.small 160 # m2.2xlarge 850 # m1.xlarge 1690 - # c1.medium 350 # c1.xlarge 1690 # m2.4xlarge 1690 - # m1.medium 410 # cc1.4xlarge 1690 # hi1.4xlarge 2048 - # m2.xlarge 420 # cc1.4xlarge 1690 # cc2.8xlarge 3370 - # m3.xlarge 0 - # m3.2xlarge 0 bdmaps = [] - if (itype in ("t1.micro", "m1.small", "c1.medium") or - itype.startswith("m3.")): - pass # the first one is always attached. ephemeral0=sda2 - elif itype in ("m2.xlarge", "m1.medium"): - bdmaps = ["/dev/sdb=ephemeral0"] - elif (itype in ("m1.large", "m2.2xlarge", "hi1.4xlarge") or - itype.startswith("cg1.") or itype.startswith("cc1.")): - bdmaps = ["/dev/sdb=ephemeral0", "/dev/sdc=ephemeral1"] - elif (itype in ("m1.xlarge", "m2.4xlarge", "c1.xlarge") or - itype.startswith("cc2.8xlarge")): - bdmaps = ["sdb=ephemeral0", "sdc=ephemeral1", - "sdd=ephemeral2", "sde=ephemeral3"] + allmaps = ["/dev/sdb=ephemeral0", "/dev/sdc=ephemeral1", + "/dev/sdd=ephemeral2", "/dev/sde=ephemeral3"] + if itype in SIZE_DATA: + (vcpu, ec2, mem, disknum, disksize, diskback) = SIZE_DATA[itype] + bdmaps = allmaps[0:disknum] + args = [] for m in bdmaps: args.extend(("--block-device-mapping", m,)) === added file 'bin/vcs-run' --- old/bin/vcs-run 1970-01-01 00:00:00 +0000 +++ new/bin/vcs-run 2013-08-21 17:32:30 +0000 @@ -0,0 +1,282 @@ +#!/bin/bash +set -f + +VERBOSITY=0 +SUPPORTED_VCS="bzr hg git url" +RET_UNCLAIMED=3 +RET_SUCCESS=0 +RET_FAIL=1 +DEF_COMMAND="vcs_run" + +Usage() { + cat <&2; [ $# -eq 0 ] || error "$@"; return 1; } +error() { echo "$@" 1>&2; } +debug() { + local level=${1}; shift; + [ "${level}" -gt "${VERBOSITY}" ] && return + error "${@}" +} + +has_cmd() { + command -v "$1" >/dev/null 2>&1 +} + +get_cmd() { + # get_cmd(cmd, get_deps, packages) + # get command 'cmd' if necessary by installing 'packages' + # if 'get_deps' is false, then return error. 
+ local cmd="$1" deps="$2" + shift 2 + has_cmd "$1" && return 0 + $deps || { error "No cmd '$cmd', but nodeps specified"; return 1; } + apt_install "$@" +} + +apt_install() { + local cmd="" + cmd=( env DEBIAN_FRONTEND=noninteractive apt-get --quiet + --assume-yes install "$@" ) + [ "$(id -u)" = "0" ] || + cmd=( sudo "${cmd[@]}" ) + debug 1 "installing dependencies:" "${cmd[@]}" + "${cmd[@]}" +} + +vcsget_bzr() { + # deps type src target cmd + local deps="$1" rtype="$2" src="$3" target="$4" tmp="" + if [ "$rtype" = "auto" ]; then + case "$src" in + *.bzr|bzr:*|lp:*) :;; + *) + if ! [ -d "$src" -a -d "$src/.bzr" ]; then + return $RET_UNCLAIMED + fi + src=$(cd "$src" && pwd) || return $RET_FAIL + ;; + esac + fi + get_cmd bzr "$deps" bzr || return $RET_FAIL + if [ -z "$target" ]; then + case "$src" in + */*) tmp="${src##*/}";; + *:*) tmp="${src#*:}";; + *) tmp="$src" + esac + target="${tmp%.bzr}" + fi + local cmd="" q="--quiet" + [ $VERBOSITY -gt 1 ] && q="" + + if [ -d "$target/.bzr" ]; then + debug 1 "updating $target: bzr pull ${q:+$q }$src" + ( cd "$target" && bzr pull $q "$src" ) + else + debug 1 "branching to $target: bzr branch ${q:+$q }$src" + bzr branch $q "$src" "$target" + fi + [ $? -eq 0 ] || return $RET_FAIL + _RET="$target" + return 0 +} + +vcsget_git() { + # deps type src target cmd + local deps="$1" rtype="$2" src="$3" target="$4" tmp="" + if [ "$rtype" = "auto" ]; then + case "$src" in + *.git|git:*) :;; + *) + if ! [ -d "$src" -a -d "$src/.git" ]; then + return $RET_UNCLAIMED + fi + src=$(cd "$src" && pwd) || return $RET_FAIL + ;; + esac + fi + get_cmd git "$deps" git || return $RET_FAIL + if [ -z "$target" ]; then + tmp="${src##*/}" + target="${tmp%.git}" + fi + local q="--quiet" + [ $VERBOSITY -gt 1 ] && q="" + if [ -d "$target/.git" ]; then + debug 1 "updating $target: git pull ${q:+$q }${src}" + ( cd "$target" && git pull $q "$src" ) + else + debug 1 "cloning to $target: git clone ${q:+$q }$src $target" + git clone $q "$src" "$target" || return $RET_FAIL + fi + [ $? -eq 0 ] || return $RET_FAIL + _RET="$target" + return 0 +} + +vcsget_hg() { + # deps type src target cmd + local deps="$1" rtype="$2" src="$3" target="$4" tmp="" + if [ "$rtype" = "auto" ]; then + case "$src" in + *.hg|hg:*) :;; + *) return $RET_UNCLAIMED;; + esac + fi + get_cmd hg "$deps" mercurial || return $RET_FAIL + if [ -z "$target" ]; then + tmp="${src##*/}" + target="${tmp%.hg}" + fi + local quiet="--quiet" + [ $VERBOSITY -gt 1 ] && quiet="" + hg clone $quiet "$src" "$target" || return $RET_FAIL + _RET="$target" + return 0 +} + +vcsget_url() { + # deps type src target cmd + # if target is not specified, target directory is md5sum + # of the url. If cmd does not start with a /, then use it + # as the output filename. If it does start with a /, then + # store the url in DEF_COMMAND in this directory. 
+ local deps="$1" rtype="$2" src="$3" target="$4" cmd="$5" tmp="" + if [ "$rtype" = "auto" ]; then + case "$src" in + http://*|https://*) :;; + *) return $RET_UNCLAIMED;; + esac + fi + get_cmd wget "$deps" wget || return $RET_FAIL + if [ -z "$target" ]; then + target=$(echo "$src" | md5sum) + target=${target% -} + fi + + local cmdname="$cmd" + if [ "${cmd#/}" != "$cmd" ]; then + cmdname="./$DEF_COMMAND" + fi + + local quiet="--quiet" + [ $VERBOSITY -gt 1 ] && quiet="" + + mkdir -p "$target" || + { error "failed mkdir -p '$target'"; return $RET_FAIL; } + debug 1 "wget -O '$target/$cmdname' '$src'" + wget $quiet -O "$target/$cmdname" "$src" || { + error "failed wget -O '$target/$cmdname' '$src'" + return $RET_FAIL + } + + _RET="$target" + return 0 +} + +main() { + local short_opts="hDt:v" + local long_opts="help,deps,target:,vcs-type:,verbose" + local getopt_out=$(getopt --name "${0##*/}" \ + --options "${short_opts}" --long "${long_opts}" -- "$@") && + eval set -- "${getopt_out}" || + { bad_Usage; return; } + + local cur="" next="" target="" rtype="auto" tmp="" + local def_target="" deps="" getdeps=false arg0="" + + while [ $# -ne 0 ]; do + cur="$1"; next="$2"; + case "$cur" in + -h|--help) Usage ; exit 0;; + -D|--deps) getdeps=true;; + -t|--target) target=$next; shift;; + --vcs-type) rtype=$next; shift;; + -v|--verbose) VERBOSITY=$((${VERBOSITY}+1));; + --) shift; break;; + esac + shift; + done + + [ $# -gt 0 ] || { bad_Usage "must provide at least repo"; return; } + + src_repo="$1" + shift + [ -n "$src_repo" ] || { error "empty source repo?"; return 1; } + + if [ -n "$target" ]; then + tmp=$(dirname "${target}") + [ -d "$tmp" ] || mkdir -p "$tmp" || + { error "failed to create $tmp for '$target'"; return 1; } + fi + + if [ $# -eq 0 ]; then + set -- "$DEF_COMMAND" + fi + arg0="$1" + + local vcs vcslist="${SUPPORTED_VCS}" + [ "$rtype" = "auto" ] || vcslist="$rtype" + + local workd="" + for vcs in $vcslist; do + has_cmd "vcsget_$vcs" || + { error "unknown vcs type '$vcs'"; return 1; } + "vcsget_$vcs" "$getdeps" "$rtype" "$src_repo" "$target" "$arg0" + ret=$? + case "$ret" in + $RET_UNCLAIMED) :;; # not claimed + $RET_SUCCESS) workd="$_RET"; break;; + *) error "failed to get '$src_repo' of type '$vcs'"; + return $ret;; + esac + done + + [ -d "$workd" ] || + { error "unknown source repo '$src_repo'"; return 1; } + + cd "$workd" || + { error "failed to enter target dir '$workd'"; return 1; } + + if [ -f "./$1" ]; then + if [ ! -x "./$1" ]; then + debug 1 "adding execute to ./$1" + chmod ugo+x "./$1" || + { error "failed add execute to ./$1"; return 1; } + fi + tmp="./$1" + shift + set -- "$tmp" "$@" + elif ! 
has_cmd "$1"; then + error "command '$1' not available anywhere" + return 1 + fi + + debug 1 "executing command in $PWD:" "$@" + exec "$@" +} + +main "$@" +# vi: ts=4 noexpandtab === modified file 'debian/changelog' === modified file 'debian/control' === modified file 'debian/copyright' === modified file 'debian/rules' === added file 'man/cloud-localds.1' --- old/man/cloud-localds.1 1970-01-01 00:00:00 +0000 +++ new/man/cloud-localds.1 2013-07-29 08:21:17 +0000 @@ -0,0 +1,95 @@ +.\" cloud-localds (1) manual page +.\" Copyright (C) 2013 Thomas Bechtold +.\" License: GPL-3 +.\" + +.TH cloud-localds 1 "July 2013" cloud\-utils "cloud\-utils" +.SH NAME +cloud-localds \- create a disk for cloud-init to utilize nocloud +.SH SYNOPSIS +.B cloud-localds +[options] output user-data [meta-data] + +.SH DESCRIPTION +.B cloud-localds +creates a disk-image with user-data and/or meta-data for +.BR cloud-init (1). +user-data can contain everything which is supported by +.BR cloud-init (1) +. +.SH OPTIONS +.TP +.BR \-d ", " \-\-disk_format =\fIDISKFORMAT\fR +Disk format to output. See +.BR qemu-img (1) +for allowed disk formats. +Default is raw. + +.TP +.BR \-f ", " \-\-filesystem =\fIFORMAT\fR +Filesystem format. Allowed formats are vfat and iso. +Default is iso9660. + +.TP +.BR \-h ", " \-\-help +Show usage. + +.TP +.BR \-i ", " \-\-interfaces +Write network interfaces file into metadata. + +.TP +.BR \-m ", " \-\-dsmode =\fIMODE\fR +Add dsmode to the metadata. Allowed are local or net. +Default in +.BR cloud-init (1) +is net. + +.SH EXAMPLES +This example creates a disk image with user-data which can be used to start a cloud image which supports +.BR cloud-init (1). + +.IP "Create some user-data:" +.IP +.PP +.nf +.RS +cat > my-user-data < for Debian systems (but may be used by others). Permission is granted to copy, distribute and/or modify this document under the terms of the GNU General Public License, Version 3 published by the Free Software Foundation. 
=== modified file 'tools/build-deb' --- old/tools/build-deb 2012-07-17 02:33:31 +0000 +++ new/tools/build-deb 2013-03-28 12:41:12 +0000 @@ -1,6 +1,8 @@ #!/bin/sh +sourcename="cloud-utils" TEMP_D="" +UNCOMMITTED=${UNCOMMITTED:-0} fail() { echo "$@" 1>&2; exit 1; } cleanup() { @@ -10,7 +12,7 @@ if [ "$1" = "-h" -o "$1" = "--help" ]; then cat < -mQINBEqwKTUBEAC8V01JGfeYVVlwlcr0dmwF8n+We/lbxwArjR/gZlH7/MJEZnALQHUrDTpD3Skf -bsjQgeNt8eS3Jyzoc2r3t2nos4rXPH4kIzAvtqslz6Ns4ZYjoHVkVC2oV8vYbxER+3/lDjTWVII7 -omtDVvqH33QlqYZ8+bQbs21lZb2ROJIQCiH0YzaqYR0I2SEykBL873V0ygdyW/mCMwniXTLUyGAU -V4/28NOzw/6LGvJElJe4UqwQxl/aXtPIJjPka8LA8+nDi5/u6WEgDWgBhLEHvQG1BNdttm3WCjbu -4zS3mNfNBidTamZfOaMJUZVYxhOB5kNQqyR4eYqFK/U+305eLrZ05ocadsmcQWkHQVbgt+g4yyFN -l56N5AirkFjVtfArkUJfINGgJ7gkSeyqTJK24f33vsIpPwRQ5eFn7H4PwGc0Piym73YLJnlR94LN -EG0ceOJ7u1r+WuaesIj+lKIZsG/rRLf7besaMCCtPcimVgEAmBoIdpTpdP3aa54w/dvfSwW47mGY -14G5PBk/0MDy2Y5HOeXat3RXpGZZFh7zbwSQ93RhYH3bNPNd5lMu3ZRkYX19FWxoLCi5lx4K3flY -hiolZ5i4KxJCoGRobsKjm74Xv2QlvCXYyAk5BnAQCsu5hKZ1sOhQADCcKz1Zbg8JRc3vmelaJ/VF -vHTzs4hJTUvOowARAQABtDRVRUMgSW1hZ2UgQXV0b21hdGljIFNpZ25pbmcgS2V5IDxjZGltYWdl -QHVidW50dS5jb20+iQI3BBMBAgAhAhsDAh4BAheABQJKsColBQsJCAcDBRUKCQgLBRYCAwEAAAoJ -EBpdbEx9uHyBLicP/jXjfLhs255oT8gmvBXS9WDGSdpPiaMxd0CHEyHuT/XdWsoUUYXAPAti8Fyk -2K99mze+n4SLCRRJhxqYlcpVy2icc41/VkKI9d/pF4t54RM5TledYpKVV7xTgoUHZpuL2mWzaT61 -MzRAxUqqaU42/xSLxLt/noryPHo57IghJXbAcmgLhFT0fZmtDy9cD4IBvurZF6cRuMJXjxZmssnt -MHsFZl4PEC3oR/WgJA37OrjMVej9r+JA909vr5K/UO+P2gWYOH/2CnGDlaTu72wUrLf6QV5jMyKc -6+G7fw5bTJd9lE8Km2H+4z9e+t7IOv9oxojvESu27exD4LU7SjzZloYnmlTCsdHwgSJVnf+lqXoZ -eUNT9Tmku8VzwCoExTwo9exaJUHeO8ABkfsJVmry40ovzQAHh427+6NpxgkWErVocnm54LPIQucZ -YJrg08s/azRzCjlsYChsaWMvGlMZQo52MuLvETHVPtSggP7sLeIOlS+8tO1ykSJY65j8AHYBV6hb -9EOjWmqpx33GXn8AyCPiMs9/pmeOI0V6YMm6HCLAwZb+rRS6gcyt9dlWyLU0QLlpmwHSOVJMv2rn -NCUtz6pb8y/o9AN2Z48RpH9C9cfv4dAfbtYn7uTd+M3gk4xyURREg2xuDnraYFs6cZ60/bSy63Gx -Tyi/cCc0S57GgtOKsAIAAw== debian/patches/series0000664000000000000000000000002412320517270012027 0ustar sync-to-trunk.patch debian/update-sync-to-main0000775000000000000000000000340712272245730012724 0ustar #!/bin/sh fail() { [ $# -eq 0 ] || echo "$@" 1>&2; exit 1; } Usage() { cat < "${tmpd}/raw" ret=$? 
[ $ret -eq 0 -o $ret -eq 1 ] || fail "getting diff failed" echo "Patch created with '${0} ${*}'" > "${tmpd}/${pname}" filterdiff --exclude "*/debian/*" < "${tmpd}/raw" >> "${tmpd}/${pname}" || fail "failed to filterdiff" if quilt applied; then quilt pop -a || fail "failed to pop patches in quilt queue" fi if quilt series | grep ^${pname} ; then quilt delete -r "${pname}" || fail "failed patch ${pname}" fi quilt import "${tmpd}/${pname}" || fail "failed to import" quilt push "${pname}" || fail "failed to push to ${pname}" for f in $(quilt files); do if [ -e "$f" ]; then bzr add "$f" || fail "failed to add $f" fi done echo "refreshed ${pname}" echo "you may need to quilt push -a now if you had other changes" debian/cloud-guest-utils.install0000664000000000000000000000011712272245730014154 0ustar usr/bin/ec2metadata usr/bin/growpart usr/bin/vcs-run usr/share/man/*/growpart* debian/control0000664000000000000000000000273512272245730010606 0ustar Source: cloud-utils Section: admin Priority: extra Maintainer: Scott Moser Build-Depends: debhelper (>= 7), python-all (>= 2.6) XS-Python-Version: >= 2.6 Standards-Version: 3.9.4 Package: cloud-utils Priority: extra Section: admin Architecture: all Depends: cloud-guest-utils, cloud-image-utils, ${misc:Depends} Description: metapackage for installation of upstream cloud-utils source This meta-package will depend on all sub-packages built by the upstream cloud-utils source. Package: cloud-guest-utils Architecture: all Depends: e2fsprogs (>=1.4), util-linux (>= 2.17.2), ${misc:Depends}, ${python:Depends} Recommends: gdisk Conflicts: cloud-utils (<< 0.27-0ubuntu3) Replaces: cloud-utils (<< 0.27-0ubuntu3) Description: cloud guest utilities This package contains programs useful inside cloud instance. It contains 'growpart' for resizing a partition during boot. Package: cloud-image-utils Architecture: all Depends: ca-certificates, euca2ools, file, genisoimage, qemu-utils, wget, ${misc:Depends}, ${python:Depends} Conflicts: cloud-utils (<< 0.27-0ubuntu3) Replaces: cloud-utils (<< 0.27-0ubuntu3) Recommends: distro-info, python-distro-info Description: cloud image management utilities This package provides a useful set of utilities for managing cloud images. It contains tools to help in publishing and modifying cloud images, and querying data related to cloud-images. debian/compat0000664000000000000000000000000212272245730010372 0ustar 7
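As a usage illustration for the growpart program that the cloud-guest-utils description above refers to, here is a minimal sketch; the disk /dev/sda, partition number 1, and the ext4 resize step are assumptions for the example, not taken from the packaging.

# preview what would change, without rewriting the partition table
growpart --dry-run /dev/sda 1
# grow partition 1 to use the free space at the end of /dev/sda
growpart /dev/sda 1
# grow the filesystem to match (resize2fs comes from the e2fsprogs dependency listed above)
resize2fs /dev/sda1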