cloud-init-18.2-14-g6d48d265/.gitignore

build
cloud_init.egg-info
dist
*.pyc
__pycache__
.tox
.coverage
doc/rtd_html
parts
prime
stage
*.snap
*.cover

cloud-init-18.2-14-g6d48d265/.pylintrc

[MASTER]

# --go-faster, use multiple processes to speed up Pylint
jobs=4

[MESSAGES CONTROL]

# Errors and warnings with some filtered:
# W0105(pointless-string-statement)
# W0107(unnecessary-pass)
# W0201(attribute-defined-outside-init)
# W0212(protected-access)
# W0221(arguments-differ)
# W0222(signature-differs)
# W0223(abstract-method)
# W0231(super-init-not-called)
# W0311(bad-indentation)
# W0511(fixme)
# W0602(global-variable-not-assigned)
# W0603(global-statement)
# W0611(unused-import)
# W0612(unused-variable)
# W0613(unused-argument)
# W0621(redefined-outer-name)
# W0622(redefined-builtin)
# W0631(undefined-loop-variable)
# W0703(broad-except)
# W1401(anomalous-backslash-in-string)
disable=C, F, I, R, W0105, W0107, W0201, W0212, W0221, W0222, W0223, W0231, W0311, W0511, W0602, W0603, W0611, W0612, W0613, W0621, W0622, W0631, W0703, W1401

[REPORTS]

# Set the output format. Available formats are text, parseable, colorized, msvs
output-format=parseable

# Just the errors please, no full report
reports=no

[TYPECHECK]

# List of module names for which member attributes should not be checked
# (useful for modules/projects where namespaces are manipulated during runtime
# and thus existing member attributes cannot be deduced by static analysis).
# It supports qualified module names, as well as Unix pattern matching.
ignored-modules=
    http.client,
    httplib,
    pkg_resources,
    six.moves,
    # cloud_tests requirements.
    boto3,
    botocore,
    paramiko,
    pylxd,
    simplestreams

# List of class names for which member attributes should not be checked
# (useful for classes with dynamically set attributes). This supports the use
# of qualified names.
ignored-classes=optparse.Values,thread._local

# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=types,http.client,command_handlers,m_.*

cloud-init-18.2-14-g6d48d265/ChangeLog

18.2:
 - Hetzner: Exit early if dmi system-manufacturer is not Hetzner.
 - Add missing dependency on isc-dhcp-client to trunk ubuntu packaging.
   (LP: #1759307)
 - FreeBSD: resizefs module now able to handle zfs/zpool.
   [Dominic Schlegel] (LP: #1721243)
 - cc_puppet: Revert regression of puppet creating ssl and ssl_cert dirs
 - Enable IBMCloud datasource in settings.py.
 - IBMCloud: Initial IBM Cloud datasource.
 - tests: remove jsonschema from xenial tox environment.
 - tests: Fix newly added schema unit tests to skip if no jsonschema.
 - ec2: Adjust ec2 datasource after exception_cb change.
 - Reduce AzurePreprovisioning HTTP timeouts. [Douglas Jordan]
   (LP: #1752977)
 - Revert the logic of exception_cb in read_url.
   [Kurt Garloff] (LP: #1702160, #1298921)
 - ubuntu-advantage: Add new config module to support ubuntu-advantage-tools
 - Handle global dns entries in netplan (LP: #1750884)
 - Identify OpenTelekomCloud Xen as OpenStack DS. [Kurt Garloff]
   (LP: #1756471)
 - datasources: fix DataSource subclass get_hostname method signature
   (LP: #1757176)
 - OpenNebula: Update network to return v2 config rather than ENI.
   [Akihiko Ota]
 - Add Hetzner Cloud DataSource
 - net: recognize iscsi root cases without ip= on kernel command line.
   (LP: #1752391)
 - tests: fix flakes warning for unused variable
 - tests: patch leaked stderr messages from snap unit tests
 - cc_snap: Add new module to install and configure snapd and snap packages.
 - tests: Make pylint happy and fix python2.6 uses of assertRaisesRegex.
 - netplan: render bridge port-priority values (LP: #1735821)
 - util: Fix subp regression. Allow specifying subp command as a string.
   (LP: #1755965)
 - doc: fix all warnings issued by 'tox -e doc'
 - FreeBSD: Set hostname to FQDN. [Dominic Schlegel] (LP: #1753499)
 - tests: fix run_tree and bddeb
 - tests: Fix some warnings in tests that popped up with newer python.
 - set_hostname: When present in metadata, set it before network bringup.
   (LP: #1746455)
 - tests: Centralize and re-use skipTest based on json schema presence.
 - This commit fixes get_hostname on the AzureDataSource. [Douglas Jordan]
   (LP: #1754495)
 - shellify: raise TypeError on bad input.
 - Make salt minion module work on FreeBSD. [Dominic Schlegel]
   (LP: #1721503)
 - Simplify some comparisons. [Rémy Léone]
 - Change some list creation and population to literal. [Rémy Léone]
 - GCE: fix reading of user-data that is not base64 encoded. (LP: #1752711)
 - doc: fix chef install from apt packages example in RTD.
 - Implement puppet 4 support [Romanos Skiadas] (LP: #1446804)
 - subp: Fix subp usage with non-ascii characters when no system locale.
   (LP: #1751051)
 - salt: configure grains in grains file rather than in minion config.
   [Daniel Wallace]
18.1:
 - OVF: Fix VMware support for 64-bit platforms. [Sankar Tanguturi]
 - ds-identify: Fix searching for iso9660 OVF cdroms. (LP: #1749980)
 - SUSE: Fix groups used for ownership of cloud-init.log [Robert Schweikert]
 - ds-identify: check /writable/system-data/ for nocloud seed.
   (LP: #1747070)
 - tests: run nosetests in cloudinit/ directory, fix py26 fallout.
 - tools: run-centos: git clone rather than tar.
 - tests: add support for logs with lxd from snap and future lxd 3.
   (LP: #1745663)
 - EC2: Fix get_instance_id called against cached datasource pickle.
   (LP: #1748354)
 - cli: fix cloud-init status to report running when before result.json
   (LP: #1747965)
 - net: accept network-config in netplan format for renaming interfaces
   (LP: #1709715)
 - Fix ssh keys validation in ssh_util [Tatiana Kholkina]
 - docs: Update RTD content for cloud-init subcommands.
 - OVF: Extend well-known labels to include OVFENV. (LP: #1698669)
 - Fix potential cases of uninitialized variables. (LP: #1744796)
 - tests: Collect script output as binary, collect systemd journal, fix lxd.
 - HACKING.rst: mention setting user name and email via git config.
 - Azure VM Preprovisioning support. [Douglas Jordan] (LP: #1734991)
 - tools/read-version: Fix read-version when in a git worktree.
 - docs: Fix typos in docs and one debug message. [Florian Grignon]
 - btrfs: support resizing if root is mounted ro. [Robert Schweikert]
   (LP: #1734787)
 - OpenNebula: Improve network configuration support.
   [Akihiko Ota] (LP: #1719157, #1716397, #1736750)
 - tests: Fix EC2 Platform to return console output as bytes.
 - tests: Fix attempted use of /run in a test case.
 - GCE: Improvements and changes to ssh key behavior for default user.
   [Max Illfelder] (LP: #1670456, #1707033, #1707037, #1707039)
 - subp: make ProcessExecutionError have expected types in stderr, stdout.
 - tests: when querying ntp server, do not do dns resolution.
 - Recognize uppercase vfat disk labels [James Penick] (LP: #1598783)
 - tests: remove zesty as supported OS to test [Joshua Powers]
 - Do not log warning on config files that represent None. (LP: #1742479)
 - tests: Use git hash pip dependency format for pylxd.
 - tests: add integration requirements text file [Joshua Powers]
 - MAAS: add check_instance_id based off oauth tokens. (LP: #1712680)
 - tests: update apt sources list test [Joshua Powers]
 - tests: clean up image properties [Joshua Powers]
 - tests: rename test ssh keys to avoid appearance of leaking private keys.
   [Joshua Powers]
 - tests: Enable AWS EC2 Integration Testing [Joshua Powers]
 - cli: cloud-init clean handles symlinks (LP: #1741093)
 - SUSE: Add a basic test of network config rendering. [Robert Schweikert]
 - Azure: Only bounce network when necessary. (LP: #1722668)
 - lint: Fix lints seen by pylint version 1.8.1.
 - cli: Fix error in cloud-init modules --mode=init. (LP: #1736600)
17.2:
 - ds-identify: failure in NoCloud due to unset variable usage.
   (LP: #1737704)
 - tests: fix collect_console when not implemented [Joshua Powers]
 - ec2: Use instance-identity doc for region and instance-id
   [Andrew Jorgensen]
 - tests: remove leaked tmp files in config drive tests.
 - setup.py: Do not include rendered files in SOURCES.txt
 - SUSE: remove delta in systemd local template for SUSE [Robert Schweikert]
 - tests: move to using tox 1.7.5
 - OVF: improve ds-identify to support finding OVF iso transport.
   (LP: #1731868)
 - VMware: Support for user provided pre and post-customization scripts
   [Maitreyee Saikia]
 - citest: In NoCloudKVM provide keys via metadata not userdata.
 - pylint: Update pylint to 1.7.1, run on tests/ and tools and fix
   complaints.
 - Datasources: Formalize DataSource get_data and related properties.
 - cli: Add clean and status subcommands
 - tests: consolidate platforms into specific dirs
 - ec2: Fix sandboxed dhclient background process cleanup. (LP: #1735331)
 - tests: NoCloudKVMImage do not modify the original local cache image.
 - tests: Enable bionic in integration tests. [Joshua Powers]
 - tests: Use apt-get to install a deb so that depends get resolved.
 - sysconfig: Correctly render dns and dns search info. [Ryan McCabe]
   (LP: #1705804)
 - integration test: replace curtin test ppa with cloud-init test ppa.
 - EC2: Fix bug using fallback_nic and metadata when restoring from cache.
   (LP: #1732917)
 - EC2: Kill dhclient process used in sandbox dhclient. (LP: #1732964)
 - ntp: fix configuration template rendering for openSUSE and SLES
   (LP: #1726572)
 - centos: Provide the failed #include url in error messages
 - Catch UrlError when #include'ing URLs [Andrew Jorgensen]
 - hosts: Fix openSUSE and SLES setup for /etc/hosts and clarify docs.
   [Robert Schweikert] (LP: #1731022)
 - rh_subscription: Perform null checks for enabled and disabled repos.
   [Dave Mulford]
 - Improve warning message when a template is not found.
   [Robert Schweikert] (LP: #1731035)
 - Replace the temporary i9n.brickies.net with i9n.cloud-init.io.
 - Azure: don't generate network configuration for SRIOV devices
   (LP: #1721579)
 - tests: address some minor feedback missed in last merge.
 - tests: integration test cleanup and full pass of nocloud-kvm.
 - Gentoo: chmod +x on all files in sysvinit/gentoo/ [ckonstanski]
   (LP: #1727126)
 - EC2: Limit network config to fallback nic, fix local-ipv4 only
   instances. (LP: #1728152)
 - Gentoo: Use "rc-service" rather than "service". [Carlos Konstanski]
   (LP: #1727121)
 - resizefs: Fix regression when system booted with root=PARTUUID=
   (LP: #1725067)
 - tools: make yum package installation more reliable
 - citest: fix remaining warnings raised by integration tests.
 - citest: show the actual class name in results.
 - ntp: fix config module schema to allow empty ntp config (LP: #1724951)
 - tools: disable fastestmirror if using proxy [Joshua Powers]
 - schema: Log debug instead of warning when jsonschema is not available.
   (LP: #1724354)
 - simpletable: Fix get_string method to return table-formatted string
   (LP: #1722566)
 - net: Handle bridge stp values of 0 and convert to boolean type
 - tools: Give specific --abbrev=8 to "git describe"
 - network: bridge_stp value not always correct (LP: #1721157)
 - tests: re-enable tox with nocloud-kvm support [Joshua Powers]
 - systemd: remove limit on tasks created by cloud-init-final.service.
   [Robert Schweikert] (LP: #1717969)
 - suse: Support addition of zypper repos via cloud-config.
   [Robert Schweikert] (LP: #1718675)
 - tests: Combine integration configs and testcases [Joshua Powers]
 - Azure, CloudStack: Support reading dhcp options from systemd-networkd.
   [Dimitri John Ledkov] (LP: #1718029)
 - packages/debian/copyright: remove mention of boto and MIT license
 - systemd: only mention Before=apt-daily.service on debian based distros.
   [Robert Schweikert]
 - Add missing simpletable and simpletable tests for failed merge
 - Remove prettytable dependency, introduce simpletable [Andrew Jorgensen]
 - debian/copyright: dep5 updates, reorganize, add Apache 2.0 license.
   [Joshua Powers] (LP: #1718681)
 - tests: remove dependency on shlex [Joshua Powers]
 - AltCloud: Trust PATH for udevadm and modprobe.
 - DataSourceOVF: use util.find_devs_with(TYPE=iso9660) (LP: #1718287)
 - tests: remove a temp file used in bootcmd tests.
17.1:
 - doc: document GCE datasource. [Arnd Hannemann]
 - suse: updates to templates to support openSUSE and SLES.
   [Robert Schweikert] (LP: #1718640)
 - suse: Copy sysvinit files from redhat with slight changes.
   [Robert Schweikert] (LP: #1718649)
 - docs: fix sphinx module schema documentation [Chad Smith]
 - tests: Add cloudinit package to all test targets [Chad Smith]
 - Makefile: No longer look for yaml files in obsolete ./bin/.
 - tests: fix ds-identify unit tests to set EC2_STRICT_ID_DEFAULT.
 - ec2: Fix maybe_perform_dhcp_discovery to use /var/tmp as a tmpdir
   [Chad Smith] (LP: #1717627)
 - Azure: wait longer for SSH pub keys to arrive. [Paul Meyer]
   (LP: #1717611)
 - GCE: Fix usage of user-data. (LP: #1717598)
 - cmdline: add collect-logs subcommand. [Chad Smith] (LP: #1607345)
 - CloudStack: consider dhclient lease files named with a hyphen.
   (LP: #1717147)
 - resizefs: Drop check for read-only device file, do not warn on
   overlayroot. [Chad Smith]
 - Do not provide systemd-fsck drop-in which could cause ordering cycles.
   [Balint Reczey] (LP: #1717477)
 - tests: Enable the NoCloud KVM platform [Joshua Powers]
 - resizefs: pass mount point to xfs_growfs [Dusty Mabe]
 - vmware: Enable nics before sending the SUCCESS event.
   [Sankar Tanguturi]
 - cloud-config modules: honor distros definitions in each module
   [Chad Smith] (LP: #1715738, #1715690)
 - chef: Add option to pin chef omnibus install version [Ethan Apodaca]
   (LP: #1462693)
 - tests: execute: support command as string [Joshua Powers]
 - schema and docs: Add jsonschema to resizefs and bootcmd modules
   [Chad Smith]
 - tools: Add xkvm script, wrapper around qemu-system [Joshua Powers]
 - vmware customization: return network config format [Sankar Tanguturi]
   (LP: #1675063)
 - Ec2: only attempt to operate at local mode on known platforms.
   (LP: #1715128)
 - Use /run/cloud-init for tempfile operations. (LP: #1707222)
 - ds-identify: Make OpenStack return maybe on arch other than intel.
   (LP: #1715241)
 - tests: mock missed openstack metadata uri network_data.json
   [Chad Smith] (LP: #1714376)
 - relocate tests/unittests/helpers.py to cloudinit/tests
   [Lars Kellogg-Stedman]
 - tox: add nose timer output [Joshua Powers]
 - upstart: do not package upstart jobs, drop ubuntu-init-switch module.
 - tests: Stop leaking calls through unmocked metadata addresses
   [Chad Smith] (LP: #1714117)
 - distro: allow distro to specify a default locale [Ryan Harper]
 - tests: fix two recently added tests for sles distro.
 - url_helper: dynamically import oauthlib from inside oauth_headers
   [Chad Smith]
 - tox: make xenial environment run with python3.6
 - suse: Add support for openSUSE and return SLES to a working state.
   [Robert Schweikert]
 - GCE: Add a main to the GCE Datasource.
 - ec2: Add IPv6 dhcp support to Ec2DataSource. [Chad Smith] (LP: #1639030)
 - url_helper: fail gracefully if oauthlib is not available
   [Lars Kellogg-Stedman] (LP: #1713760)
 - cloud-init analyze: fix issues running under python 2. [Andrew Jorgensen]
 - Configure logging module to always use UTC time. [Ryan Harper]
   (LP: #1713158)
 - Log a helpful message if a user script does not include shebang.
   [Andrew Jorgensen]
 - cli: Fix command line parsing of conditionally loaded subcommands.
   [Chad Smith] (LP: #1712676)
 - doc: Explain error behavior in user data include file format.
   [Jason Butz]
 - cc_landscape & cc_puppet: Fix six.StringIO use in writing configs
   [Chad Smith] (LP: #1699282, #1710932)
 - schema cli: Add schema subcommand to cloud-init cli and cc_runcmd schema
   [Chad Smith]
 - Debian: Remove non-free repositories from apt sources template.
   [Joonas Kylmälä] (LP: #1700091)
 - tools: Add tooling for basic cloud-init performance analysis.
   [Chad Smith] (LP: #1709761)
 - network: add v2 passthrough and fix parsing v2 config with bonds/bridge
   params [Ryan Harper] (LP: #1709180)
 - doc: update capabilities with features available, link doc reference,
   cli example [Ryan Harper]
 - vcloud directory: Guest Customization support for passwords
   [Maitreyee Saikia]
 - ec2: Allow Ec2 to run in init-local using dhclient in a sandbox.
   [Chad Smith] (LP: #1709772)
 - cc_ntp: fallback on timesyncd configuration if ntp is not installable
   [Ryan Harper] (LP: #1686485)
 - net: Reduce duplicate code. Have get_interfaces_by_mac use
   get_interfaces.
 - tests: Fix build tree integration tests [Joshua Powers]
 - sysconfig: Don't repeat header when rendering resolv.conf [Ryan Harper]
   (LP: #1701420)
 - archlinux: Fix bug with empty dns, do not render 'lo' devices.
   (LP: #1663045, #1706593)
 - cloudinit.net: add initialize_network_device function and tests
   [Chad Smith]
 - makefile: fix ci-deps-ubuntu target [Chad Smith]
 - tests: adjust locale integration test to parse default locale.
 - tests: remove 'yakkety' from releases as it is EOL.
 - tests: Add initial tests for EC2 and improve a docstring.
 - locale: Do not re-run locale-gen if provided locale is system default.
 - archlinux: fix set hostname usage of write_file. [Joshua Powers]
   (LP: #1705306)
 - sysconfig: support subnet type of 'manual'.
 - tools/run-centos: make running with no argument show help.
 - Drop rand_str() usage in DNS redirection detection [Bob Aman]
   (LP: #1088611)
 - sysconfig: use MACADDR on bonds/bridges to configure mac_address
   [Ryan Harper] (LP: #1701417)
 - net: eni route rendering missed ipv6 default route config [Ryan Harper]
   (LP: #1701097)
 - sysconfig: enable mtu set per subnet, including ipv6 mtu [Ryan Harper]
   (LP: #1702513)
 - sysconfig: handle manual type subnets [Ryan Harper] (LP: #1687725)
 - sysconfig: fix ipv6 gateway routes [Ryan Harper] (LP: #1694801)
 - sysconfig: fix rendering of bond, bridge and vlan types. [Ryan Harper]
   (LP: #1695092)
 - Templatize systemd unit files for cross distro deltas. [Ryan Harper]
 - sysconfig: ipv6 and default gateway fixes. [Ryan Harper] (LP: #1704872)
 - net: fix renaming of nics to support mac addresses written in upper
   case. (LP: #1705147)
 - tests: fixes for issues uncovered when moving to python 3.6.
   (LP: #1703697)
 - sysconfig: include GATEWAY value if set in subnet [Ryan Harper]
   (LP: #1686856)
 - Scaleway: add datasource with user and vendor data for Scaleway.
   [Julien Castets]
 - Support comments in content read by load_shell_content.
 - cloudinitlocal: fix failure to run during boot [Hongjiang Zhang]
 - doc: fix disk setup example table_type options [Sandor Zeestraten]
   (LP: #1703789)
 - tools: Fix exception handling. [Joonas Kylmälä] (LP: #1701527)
 - tests: fix usage of mock in GCE test.
 - test_gce: Fix invalid mock of platform_reports_gce to return False
   [Chad Smith]
 - test: fix incorrect keyid for apt repository. [Joshua Powers]
   (LP: #1702717)
 - tests: Update version of pylxd [Joshua Powers]
 - write_files: Remove log from helper function signatures.
   [Andrew Jorgensen]
 - doc: document the cmdline options to NoCloud [Brian Candler]
 - read_dmi_data: always return None when inside a container.
   (LP: #1701325)
 - requirements.txt: remove trailing white space.
 - Azure: Add network-config, Refactor net layer to handle duplicate macs.
   [Ryan Harper]
 - Tests: Simplify the check on ssh-import-id [Joshua Powers]
 - tests: update ntp tests after sntp added [Joshua Powers]
 - FreeBSD: Make freebsd a variant, fix unittests and
   tools/build-on-freebsd.
 - FreeBSD: fix test failure
 - FreeBSD: replace ifdown/ifup with "ifconfig down" and "ifconfig up".
   [Hongjiang Zhang] (LP: #1697815)
 - FreeBSD: fix cdrom mounting failure if /mnt/cdrom/secure did not exist.
   [Hongjiang Zhang] (LP: #1696295)
 - main: Don't use templater to format the welcome message
   [Andrew Jorgensen]
 - docs: Automatically generate module docs from schema if present.
   [Chad Smith]
 - debian: fix path comment in /etc/hosts template. [Jens Sandmann]
   (LP: #1606406)
 - suse: add hostname and fully qualified domain to template.
   [Jens Sandmann]
 - write_file(s): Print permissions as octal, not decimal
   [Andrew Jorgensen]
 - ci deps: Add --test-distro to read-dependencies to install all deps
   [Chad Smith]
 - tools/run-centos: cleanups and move to using read-dependencies
 - pkg build ci: Add make ci-deps-<distro> target to install pkgs
   [Chad Smith]
 - systemd: make cloud-final.service run before apt daily services.
   (LP: #1693361)
 - selinux: Allow restorecon to be non-fatal. [Ryan Harper] (LP: #1686751)
 - net: Allow netinfo subprocesses to return 0 or 1.
   [Ryan Harper] (LP: #1686751)
 - net: Allow for NetworkManager configuration [Ryan McCabe] (LP: #1693251)
 - Use distro release version to determine if we use systemd in redhat
   spec [Ryan Harper]
 - net: normalize data in network_state object
 - Integration Testing: tox env, pylxd 2.2.3, and revamp framework
   [Wesley Wiedenmeier]
 - Chef: Update omnibus url to chef.io, minor doc changes. [JJ Asghar]
 - tools: add centos scripts to build and test [Joshua Powers]
 - Drop cheetah python module as it is not needed by trunk [Ryan Harper]
 - rhel/centos spec cleanups.
 - cloud.cfg: move to a template. setup.py changes along the way.
 - Makefile: add deb-src and srpm targets. use PYVER more places.
 - makefile: fix python 2/3 detection in the Makefile [Chad Smith]
 - snap: Removing snapcraft plug line [Joshua Powers] (LP: #1695333)
 - RHEL/CentOS: Fix default routes for IPv4/IPv6 configuration.
   [Andreas Karis] (LP: #1696176)
 - test: Fix pyflakes complaint of unused import. [Joshua Powers]
   (LP: #1695918)
 - NoCloud: support seed of nocloud from smbios information
   [Vladimir Pouzanov] (LP: #1691772)
 - net: when selecting a network device, use natural sort order
   [Marc-Aurèle Brothier]
 - fix typos and remove whitespace in various docs [Stephan Telling]
 - systemd: Fix typo in comment in cloud-init.target. [Chen-Han Hsiao]
 - Tests: Skip jsonschema related unit tests when dependency is absent.
   [Chad Smith] (LP: #1695318)
 - azure: remove accidental duplicate line in merge.
 - azure: identify platform by well known value in chassis asset tag.
   [Chad Smith] (LP: #1693939)
 - tools/net-convert.py: support old cloudinit versions by using kwargs.
 - ntp: Add schema definition and passive schema validation. [Chad Smith]
   (LP: #1692916)
 - Fix eni rendering for bridge params that require repeated key for
   values. [Ryan Harper]
 - net: remove systemd link file writing from eni renderer [Ryan Harper]
 - AliYun: Enable platform identification and enable by default.
   [Junjie Wang] (LP: #1638931)
 - net: fix reading and rendering addresses in cidr format.
   [Dimitri John Ledkov] (LP: #1689346, #1684349)
 - disk_setup: udev settle before attempting partitioning or fs creation.
   (LP: #1692093)
 - GCE: Update the attribute used to find instance SSH keys.
   [Daniel Watkins] (LP: #1693582)
 - nplan: For bonds, allow dashed or underscore names of keys.
   [Dimitri John Ledkov] (LP: #1690480)
 - python2.6: fix unit tests usage of assertNone and format.
 - test: update docstring on test_configured_list_with_none
 - fix tools/ds-identify to not write None twice.
 - tox/build: do not package depend on style requirements.
 - cc_ntp: Restructure cc_ntp unit tests. [Chad Smith] (LP: #1692794)
 - flake8: move the pinned version of flake8 up to 3.3.0
 - tests: Apply workaround for snapd bug in test case. [Joshua Powers]
 - RHEL/CentOS: Fix dual stack IPv4/IPv6 configuration. [Andreas Karis]
   (LP: #1679817, #1685534, #1685532)
 - disk_setup: fix several issues with gpt disk partitions. (LP: #1692087)
 - function spelling & docstring update [Joshua Powers]
 - Fixing wrong file name regression. [Joshua Powers]
 - tox: move pylint target to 1.7.1
 - Fix get_interfaces_by_mac for empty macs (LP: #1692028)
 - DigitalOcean: remove routes except for the public interface.
   [Ben Howard] (LP: #1681531)
 - netplan: pass macaddress, when specified, for vlans
   [Dimitri John Ledkov] (LP: #1690388)
 - doc: various improvements for the docs on cc_users_groups.
   [Felix Dreissig]
 - cc_ntp: write template before installing and add service restart
   [Ryan Harper] (LP: #1645644)
 - cloudstack: fix tests to avoid accessing /var/lib/NetworkManager
   [Lars Kellogg-Stedman]
 - tests: fix hardcoded path to mkfs.ext4 [Joshua Powers] (LP: #1691517)
 - Actually skip warnings when .skip file is present. [Chris Brinker]
   (LP: #1691551)
 - netplan: fix netplan render_network_state signature.
   [Dimitri John Ledkov] (LP: #1685944)
 - Azure: fix reformatting of ephemeral disks on resize to large types.
   (LP: #1686514)
 - Revert "tools/net-convert: fix argument order for render_network_state"
 - make deb: Add devscripts dependency for make deb. Cleanup
   packages/bddeb. [Chad Smith] (LP: #1685935)
 - tools/net-convert: fix argument order for render_network_state
   [Ryan Harper] (LP: #1685944)
 - openstack: fix log message copy/paste typo in _get_url_settings
   [Lars Kellogg-Stedman]
 - unittests: fix unittests run on centos [Joshua Powers]
 - Improve detection of snappy to include os-release and kernel cmdline.
   (LP: #1689944)
 - Add address to config entry generated by _klibc_to_config_entry.
   [Julien Castets] (LP: #1691135)
 - sysconfig: Raise ValueError when multiple default gateways are present.
   [Chad Smith] (LP: #1687485)
 - FreeBSD: improvements and fixes for use on Azure [Hongjiang Zhang]
   (LP: #1636345)
 - Add unit tests for ds-identify, fix Ec2 bug found.
 - fs_setup: if cmd is specified, use shell interpretation. [Paul Meyer]
   (LP: #1687712)
 - doc: document network configuration defaults policy and formats.
   [Ryan Harper]
 - Fix name of "uri" key in docs for "cc_apt_configure" module
   [Felix Dreissig]
 - tests: Enable artful [Joshua Powers]
 - nova-lxd: read product_name from environment, not platform.
   (LP: #1685810)
 - Fix yum repo config where keys contain array values [Dylan Perry]
   (LP: #1592150)
 - template: Update debian backports template [Joshua Powers]
   (LP: #1627293)
 - rsyslog: replace ~ with stop [Joshua Powers] (LP: #1367899)
 - Doc: add additional RTD examples [Joshua Powers] (LP: #1459604)
 - Fix growpart for some cases when booted with root=PARTUUID.
   (LP: #1684869)
 - pylint: update output style to parseable [Joshua Powers]
 - pylint: fix all logging warnings [Joshua Powers]
 - CloudStack: Add NetworkManager to list of supported DHCP lease dirs.
   [Syed]
 - net: kernel lies about vlans not stealing mac addresses, when they do
   [Dimitri John Ledkov] (LP: #1682871)
 - ds-identify: Check correct path for "latest" config drive
   [Daniel Watkins] (LP: #1673637)
 - doc: Fix example for resolv.conf configuration. [Jon Grimm]
   (LP: #1531582)
 - Fix examples that reference upstream chef repository. [Jon Grimm]
   (LP: #1678145)
 - doc: correct grammar and improve clarity in merging documentation.
   [David Tagatac]
 - doc: Add missing doc link to snap-config module. [Ryan Harper]
 - snap: allows for creating cloud-init snap [Joshua Powers]
 - DigitalOcean: assign IPv4ll address to lowest indexed interface.
   [Ben Howard]
 - DigitalOcean: configure all NICs presented in meta-data. [Ben Howard]
 - Remove (and/or fix) URL shortener references [Jon Grimm] (LP: #1669727)
 - HACKING.rst: more info on filling out contributors agreement.
 - util: teach write_file about copy_mode option [Lars Kellogg-Stedman]
   (LP: #1644064)
 - DigitalOcean: bind resolvers to loopback interface. [Ben Howard]
 - tests: fix AltCloud tests to not rely on blkid (LP: #1636531)
 - OpenStack: add 'dvs' to the list of physical link types. (LP: #1674946)
 - Fix bug that resulted in an attempt to rename bonds or vlans.
   (LP: #1669860)
 - tests: update OpenNebula and Digital Ocean to not rely on host
   interfaces.
 - net: in netplan renderer delete known image-builtin content.
   (LP: #1675576)
 - doc: correct grammar in capabilities.rst [David Tagatac]
 - ds-identify: fix detection of maas datasource. (LP: #1677710)
 - netplan: remove debugging prints, add debug logging [Ryan Harper]
 - ds-identify: do not write None twice to datasource_list.
 - support resizing partition and rootfs on system booted without
   initramfs. [Steve Langasek] (LP: #1677376)
 - apt_configure: run only when needed. (LP: #1675185)
 - OpenStack: identify OpenStack by product 'OpenStack Compute'.
   (LP: #1675349)
 - GCE: Search GCE in ds-identify, consider serial number in check.
   (LP: #1674861)
 - Add support for setting hashed passwords [Tore S. Lonoy] (LP: #1570325)
 - Fix filesystem creation when using "partition: auto" [Jonathan Ballet]
   (LP: #1634678)
 - ConfigDrive: support reading config drive data from /config-drive.
   (LP: #1673411)
 - ds-identify: fix detection of Bigstep datasource. (LP: #1674766)
 - test: add running of pylint [Joshua Powers]
 - ds-identify: fix bug where filename expansion was left on.
 - advertise network config v2 support (NETWORK_CONFIG_V2) in features.
 - Bigstep: fix bug when executing in python3. [root]
 - Fix unit test when running in a system deployed with cloud-init.
 - Bounce network interface for Azure when using the built-in path.
   [Brent Baude] (LP: #1674685)
 - cloudinit.net: add network config v2 parsing and rendering [Ryan Harper]
 - net: Fix incorrect call to isfile [Joshua Powers] (LP: #1674317)
 - net: add renderers for automatically selecting the renderer.
 - doc: fix config drive doc with regard to unpartitioned disks.
   (LP: #1673818)
 - test: Adding integration test for password as list [Joshua Powers]
 - render_network_state: switch arguments around, do not require target
 - support 'loopback' as a device type.
 - Integration Testing: improve testcase subclassing [Wesley Wiedenmeier]
 - gitignore: adding doc/rtd_html [Joshua Powers]
 - doc: add instructions for running integration tests via tox.
   [Joshua Powers]
 - test: avoid differences in 'date' output due to daylight savings.
 - Fix chef config module in omnibus install. [Jeremy Melvin]
   (LP: #1583837)
 - Add feature flags to cloudinit.version. [Wesley Wiedenmeier]
 - tox: add a citest environment
 - Further fix regression to support 'password' for default user.
 - fix regression when no chpasswd/list was provided.
 - Support chpasswd/list being a list in addition to a string.
   [Sergio Lystopad] (LP: #1665694)
 - doc: Fix configuration example for cc_set_passwords module.
   [Sergio Lystopad] (LP: #1665773)
 - net: support both ipv4 and ipv6 gateways in sysconfig.
   [Lars Kellogg-Stedman] (LP: #1669504)
 - net: do not raise exception for > 3 nameservers
   [Lars Kellogg-Stedman] (LP: #1670052)
 - ds-identify: report cleanups for config and exit value. (LP: #1669949)
 - ds-identify: move default setting for Ec2/strict_id to a global.
 - ds-identify: record not found in cloud.cfg and always add None.
 - Support warning if the used datasource is not in ds-identify's list.
 - tools/ds-identify: make report mode write namespaced results.
 - Move warning functionality to cloudinit/warnings.py
 - Add profile.d script for showing warnings on login.
 - Z99-cloud-locale-test.sh: install and make consistent.
 - tools/ds-identify: look at cloud.cfg when looking for ec2 strict_id.
 - tools/ds-identify: disable vmware_guest_customization by default.
 - tools/ds-identify: ovf identify vmware guest customization.
 - Identify Brightbox as an Ec2 datasource user. (LP: #1661693)
 - DatasourceEc2: add warning message when not on AWS.
 - ds-identify: add reading of datasource/Ec2/strict_id
 - tools/ds-identify: add support for found or maybe contributing config.
 - tools/ds-identify: read the seed directory on Ec2
 - tools/ds-identify: use quotes in local declarations.
 - tools/ds-identify: fix documentation of policy setting in a comment.
 - ds-identify: only run once per boot unless --force is given.
 - flake8: fix flake8 complaints in previous commit.
 - net: correct errors in cloudinit/net/sysconfig.py
   [Lars Kellogg-Stedman] (LP: #1665441)
 - ec2_utils: fix MetadataLeafDecoder that returned bytes on empty
 - apply the runtime configuration written by ds-identify.
 - ds-identify: fix checking for filesystem label (LP: #1663735)
 - ds-identify: read ds=nocloud properly (LP: #1663723)
 - support nova-lxd by reading platform from environment of pid 1.
   (LP: #1661797)
 - ds-identify: change aarch64 to use the default for non-dmi systems.
 - Remove style checking during build and add latest style checks to tox
   [Joshua Powers] (LP: #1652329)
 - code-style: make master pass pycodestyle (2.3.1) cleanly [Joshua Powers]
 - manual_cache_clean: When manually cleaning touch a file in instance dir.
 - Add tools/ds-identify to identify datasources available.
 - Fix small typo and change iso-filename for consistency [Robin Naundorf]
 - Fix eni rendering of multiple IPs per interface [Ryan Harper]
   (LP: #1657940)
 - tools/mock-meta: support python2 or python3 and ipv6 in both.
 - tests: remove executable bit on test_net, so it runs, and fix it.
 - tests: No longer monkey patch httpretty for python 3.4.2
 - Add 3 ecdsa-sha2-nistp* ssh key types now that they are standardized
   [Lars Kellogg-Stedman] (LP: #1658174)
 - reset httpretty for each test [Lars Kellogg-Stedman] (LP: #1658200)
 - build: fix running Make on a branch with tags other than master
 - EC2: Do not cache security credentials on disk [Andrew Jorgensen]
   (LP: #1638312)
 - doc: Fix typos and clarify some aspects of the part-handler
   [Erik M. Bray]
 - doc: add some documentation on OpenStack datasource.
 - OpenStack: Use timeout and retries from config in get_data.
   [Lars Kellogg-Stedman] (LP: #1657130)
 - Fixed Misc issues related to VMware customization. [Sankar Tanguturi]
 - Fix minor docs typo: perserve > preserve [Jeremy Bicha]
 - Use dnf instead of yum when available [Lars Kellogg-Stedman]
   (LP: #1647118)
 - validate-yaml: use python rather than explicitly python3
 - Get early logging logged, including failures of cmdline url.
0.7.9:
 - doc: adjust headers in tests documentation for consistency.
 - pep8: fix issue found in zesty build with pycodestyle.
 - integration test: initial commit of integration test framework
   [Wesley Wiedenmeier]
 - LICENSE: Allow dual licensing GPL-3 or Apache 2.0 [Jon Grimm]
 - Fix config order of precedence, putting kernel command line over
   system. [Wesley Wiedenmeier] (LP: #1582323)
 - pep8: whitespace fix
 - Update the list of valid ssh keys. [Michael Felt]
 - network: add ENI unit test for statically rendered routes.
 - set_hostname: avoid erroneously appending domain to fqdn
   [Lars Kellogg-Stedman] (LP: #1647910)
 - doc: change 'nobootwait' to 'nofail' in docs [Anhad Jai Singh]
 - Replace an expired bit.ly link in code comment.
 - user-groups: fix bug when groups was provided as string and had spaces
   (LP: #1354694)
 - mounts: use mount -a again to accomplish mounts (LP: #1647708)
 - CloudSigma: Fix bug where datasource was not loaded in local search.
   (LP: #1648380)
 - when adding a user, strip whitespace from group list
   [Lars Kellogg-Stedman] (LP: #1354694)
 - fix decoding of utf-8 chars in yaml test
 - Replace usage of sys_netdev_info with read_sys_net (LP: #1625766)
 - fix problems found in python2.6 test.
 - OpenStack: extend physical types to include hyperv, hw_veb, vhost_user.
   (LP: #1642679)
 - tests: fix assumptions that expected no eth0 in system. (LP: #1644043)
 - net/cmdline: Consider ip= or ip6= on command line not only ip=
   (LP: #1639930)
 - Just use file logging by default (LP: #1643990)
 - Improve formatting for ProcessExecutionError [Wesley Wiedenmeier]
 - flake8: fix trailing white space
 - Doc: various documentation fixes [Sean Bright]
 - cloudinit/config/cc_rh_subscription.py: Remove repos before adding
   [Brent Baude]
 - packages/redhat: fix rpm spec file.
 - main: set TZ in environment if not already set. [Ryan Harper]
 - Azure: No longer rely on walinux agent. (LP: #1538522)
 - disk_setup: Use sectors as unit when formatting MBR disks with sfdisk.
   [Daniel Watkins] (LP: #1460715)
 - Add activate_datasource, for datasource specific code paths.
   (LP: #1611074)
 - systemd: cloud-init-local use RequiresMountsFor=/var/lib/cloud
   (LP: #1642062)
 - systemd: cloud-init remove After=systemd-networkd-wait-online
 - systemd: cloud-init-local change Before basic to sysinit
 - pep8: fix style errors reported by pycodestyle 2.1.0
 - systemd: drop both Wants and After local-fs.target
 - systemd: networking service adjustments. (LP: #1636912)
 - systemd: replace Before=basic.target, dbus.target with sysinit.target
   (LP: #1629797)
 - doc: Add documentation on stages of boot.
 - doc: make the RST files consistently formatted and other improvements.
 - Ec2: fix syntax and tox in previous commit.
 - Ec2: protect against non-dictionary in block-device-mapping.
 - doc: fixed example to not overwrite /etc/hosts [Chris Glass]
 - Doc: fix spelling / typos in ca_certs and scripts_vendor.
 - pyflakes: fix issue with pyflakes 1.3 found in ubuntu zesty-proposed.
 - net/cmdline: Further adjustments to ipv6 support [LaMont Jones]
   (LP: #1621615)
 - Add coverage dependency to bddeb to fix package build.
 - doc: improve HACKING.rst file
 - dmidecode: Allow dmidecode to be used on aarch64 [Robert Schweikert]
 - AliYun: Add new datasource for Ali-Cloud ECS [kaihuan.pkh]
 - Add coverage collection to tox unit tests. [Joshua Powers]
 - cc_users_groups: fix remaining call to ds.normalize_user_groups
   [Ryan Harper]
 - disk-config: udev settle after partitioning in gpt format.
   (LP: #1626243)
 - unittests: do not read system /etc/cloud/cloud.cfg.d (LP: #1635350)
 - Add documentation for logging features. [Wesley Wiedenmeier]
 - Add support for snap create-user on Ubuntu Core images. [Ryan Harper]
 - Fix sshd restarts for rhel distros. [Jim Gorz]
 - OpenNebula: replace 'ip' parsing with cloudinit.net usage.
 - Fix python2.6 things found running in centos 6.
 - Move user/group functions to new ug_util file
 - DigitalOcean: enable usage of data source by default.
 - update Gentoo initscripts to run in the correct order [Matthew Thode]
 - MAAS: improve the main of datasource to look at kernel cmdline config.
 - tests: silence the Cheetah UserWarning about NameMapper C version.
 - systemd: Run cloud-init.service Before dbus.socket not dbus.target
   [Daniel Watkins] (LP: #1629797)
 - systemd: run cloud-init.service Before dbus.service (LP: #1629797)
 - unittests: fix use of mock 2.0 'assert_called' when running make check
   [Ryan Harper]
 - Improve module documentation and doc cleanup.
   [Wesley Wiedenmeier]
 - lxd: Update network config for LXD 2.3 [Stéphane Graber]
 - DigitalOcean: use meta-data for network configuration [Ben Howard]
 - ntp: move to run after apt configuration (LP: #1628337)
 - Decode unicode types in decode_binary [Robert Schweikert]
 - systemd: Ensure that cloud-init-local happens before NetworkManager
 - Allow ephemeral drive to be unpartitioned [Paul Meyer]
 - subp: add 'update_env' argument
 - net: support reading ipv6 dhcp config from initramfs [LaMont Jones]
   (LP: #1621615, #1621507)
 - Adjust mounts and disk configuration for systemd. (LP: #1611074)
 - dmidecode: run dmidecode only on i?86 or x86_64 arch.
   [Robert Schweikert]
 - systemd: put cloud-init.target After multi-user.target (LP: #1623868)
0.7.8:
 - SmartOS: more improvements for network configuration
 - add ntp config module [Ryan Harper]
 - ChangeLog: update changelog for previous commit.
 - Add distro tags on config modules that should have it.
 - NoCloud: fix bug providing network-interfaces via meta-data.
   (LP: #1577982)
 - ConfigDrive: recognize 'tap' as a link type. (LP: #1610784)
 - Upgrade to a configobj package new enough to work
 - MAAS: add vendor-data support (LP: #1612313)
 - DigitalOcean: use the v1.json endpoint [Ben Howard]
 - Get Azure endpoint server from DHCP client [Brent Baude]
 - Apt: add new apt configuration format [Christian Ehrhardt]
 - distros: fix get_primary_arch method use of os.uname [Andrew Jorgensen]
 - Fix Gentoo net config generation [Matthew Thode]
 - Minor cleanups to atomic_helper and add unit tests.
 - azure dhclient-hook cleanups
 - network: fix get_interface_mac for bond slave, read_sys_net for ENOTDIR
 - Generate a dummy bond name for OpenStack (LP: #1605749)
 - add install option for openrc [Matthew Thode]
 - Add a module that can configure spacewalk.
 - python2.6: fix dict comprehension usage in _lsb_release.
 - apt-config: allow both old and new format to be present.
   [Christian Ehrhardt] (LP: #1616831)
 - bddeb: add --release flag to specify the release in changelog.
 - salt minion: update default pki directory for newer salt minion.
   (LP: #1609899)
 - Fix typo in default keys for phone_home [Roland Sommer] (LP: #1607810)
 - apt config conversion: treat empty string as not provided.
   (LP: #1621180)
 - tests: cleanup tempdirs in apt_source tests
 - systemd: Better support package and upgrade. (LP: #1576692, #1621336)
 - remove obsolete .bzrignore
 - DataSourceOVF: fix user-data as base64 with python3 (LP: #1619394)
 - Allow link type of null in network_data.json [Jon Grimm] (LP: #1621968)
0.7.7:
 - open 0.7.7
 - Digital Ocean: add datasource for Digital Ocean.
   [Neal Shrader]
 - expose uses_systemd as a distro function (fix rhel7)
 - fix broken 'output' config (LP: #1387340)
 - begin adding cloud config module docs to config modules (LP: #1383510)
 - retain trailing eol from template files (sources.list) when rendered
   with jinja (LP: #1355343)
 - Only use datafiles and initsys addon outside virtualenvs
 - Fix the digital ocean test case on python 2.6
 - Increase the usefulness, robustness, configurability of the chef module
   so that it is more useful, more documented and better for users
 - Fix how '=' signs are not handled that well in ssh_utils (LP: #1391303)
 - Be more tolerant of ssh keys passed into 'ssh_authorized_keys';
   allowing for list, tuple, set, dict, string types and warning on other
   unexpected types
 - Update to use newer/better OMNIBUS_URL for chef module
 - GCE: Allow base64 encoded user-data (LP: #1404311) [Wayne Witzell III]
 - GCE: use short hostname rather than fqdn (LP: #1383794) [Ben Howard]
 - systemd: make init stage run before login prompts shown [Steve Langasek]
 - hostname: on first boot apply hostname to be same as is written for
   persistent hostname. (LP: #1246485)
 - remove usage of dmidecode on linux in favor of /sys interface
   [Ben Howard]
 - python3 support [Barry Warsaw, Daniel Watkins, Josh Harlow]
   (LP: #1247132)
 - support managing gpt partitions in disk config [Daniel Watkins]
 - Azure: utilize gpt support for ephemeral formatting [Daniel Watkins]
 - CloudStack: support fetching password from virtual router
   [Daniel Watkins] (LP: #1422388)
 - readurl, read_file_or_url returns bytes, user must convert as necessary
 - SmartOS: use v2 metadata service (LP: #1436417) [Daniel Watkins]
 - NoCloud: fix local datasource claiming found without explicit dsmode
 - Snappy: add support for installing snappy packages and configuring.
 - systemd: use network-online instead of network.target (LP: #1440180)
   [Steve Langasek]
 - Add functionality to fixate the uid of a newly added user.
 - Don't overwrite the hostname if the user has changed it after we set it.
 - GCE datasource does not handle instance ssh keys (LP: #1403617)
 - sysvinit: make cloud-init-local run before network (LP: #1275098)
   [Surojit Pathak]
 - Azure: do not re-set hostname if user has changed it (LP: #1375252)
 - Fix exception when running with no arguments on Python 3.
   [Daniel Watkins]
 - Centos: detect/expect use of systemd on centos 7. [Brian Rak]
 - Azure: remove dependency on walinux-agent [Daniel Watkins]
 - EC2: know about eu-central-1 availability-zone (LP: #1456684)
 - Azure: remove password from on-disk ovf-env.xml (LP: #1443311)
   [Ben Howard]
 - Doc: include information on user-data in OpenStack [Daniel Watkins]
 - Systemd: check for systemd using sd_booted semantics (LP: #1461201)
   [Lars Kellogg-Stedman]
 - Add an rh_subscription module to handle registration of Red Hat
   instances. [Brent Baude]
 - cc_apt_configure: fix importing keys under python3 (LP: #1463373)
 - cc_growpart: fix specification of 'devices' list (LP: #1465436)
 - CloudStack: fix password setting on cloudstack > 4.5.1 (LP: #1464253)
 - GCE: fix determination of availability zone (LP: #1470880)
 - ssh: generate ed25519 host keys (LP: #1461242)
 - distro mirrors: provide datasource to mirror selection code to support
   GCE regional mirrors.
   (LP: #1470890)
 - add udev rules that identify ephemeral device on Azure (LP: #1411582)
 - _read_dmi_syspath: fix bad log message causing unintended exception
 - rsyslog: add additional configuration mode (LP: #1478103)
 - status_wrapper in main: fix use of print_exc when handling exception
 - reporting: add reporting module for web hook or logging of events.
 - NoCloud: fix consumption of vendordata (LP: #1493453)
 - power_state_change: support 'condition' to disable or enable poweroff
 - ubuntu fan: support for config and installing of ubuntu fan
   (LP: #1504604)
 - Azure: support extracting SSH key values from ovf-env.xml (LP: #1506244)
 - AltCloud: fix call to udevadm settle (LP: #1507526)
 - Ubuntu templates: modify sources.list template to provide same sources
   as install from server or desktop ISO. (LP: #1177432)
 - cc_mounts: use 'nofail' if system uses systemd. (LP: #1514485)
 - Azure: get instance id from dmi instead of SharedConfig (LP: #1506187)
 - systemd/power_state: fix power_state to work even if cloud-final
   exited non-zero (LP: #1449318)
 - SmartOS: Add support for Joyent LX-Brand Zones (LP: #1540965)
   [Robert C Jennings]
 - systemd: support using systemd-detect-virt to detect container
   (LP: #1539016) [Martin Pitt]
 - docs: fix lock_passwd documentation [Robert C Jennings]
 - Azure: Handle escaped quotes in WALinuxAgentShim.find_endpoint.
   (LP: #1488891) [Dan Watkins]
 - lxd: add support for setting up lxd using 'lxd init' (LP: #1522879)
 - Add Image Customization Parser for VMware vSphere Hypervisor Support.
   [Sankar Tanguturi]
 - timezone: use a symlink rather than copy for /etc/localtime unless it
   is already a file (LP: #1543025).
 - Enable password changing via a hashed string [Alex Sirbu]
 - Added BigStep datasource [Alex Sirbu]
 - No longer run pollinate in seed_random (LP: #1554152)
 - groups: add default user to 'lxd' group. Create groups listed for a
   user if they do not exist. (LP: #1539317)
 - dmi data: fix failure of reading dmi data for unset dmi values
 - doc: mention label for nocloud datasource must be 'cidata'
   [Peter Hurley]
 - ssh_pwauth: fix module to support 'unchanged' and match behavior
   described in documentation [Chris Cosby]
 - quickly check to see if the previous instance id is still valid to
   avoid dependency on network metadata service on every boot
   (LP: #1553815)
 - support network configuration in cloud-init --local with support for
   device naming via systemd.link.
 - FreeBSD: add support for installing packages, setting password and
   timezone. Change default user to 'freebsd'. [Ben Arblaster]
 - locale: list unsupported environment settings in warning (LP: #1558069)
 - disk_setup: correctly send --force to mkfs on block devices
   (LP: #1548772)
 - chef: fix chef install from gems (LP: #1553345)
 - systemd: do not specify After of obsolete syslog.target (LP: #1536964)
 - centos: Ensure that the resolv.conf object is written as a str
   (LP: #1479988)
 - chef: straighten out validation_cert and validation_key (LP: #1568940)
 - phone_home: allow usage of fqdn (LP: #1566824) [Ollie Armstrong]
 - cloudstack: Only use DHCPv4 lease files as a datasource (LP: #1576273)
   [Wido den Hollander]
 - Paths: fix instance path if datasource's id has a '/'. (LP: #1575938)
   [Robert Jennings]
 - Ec2: do not retry requests for user-data path on 404.
 - settings on the kernel command line (cc:) override all local settings
   rather than only those in /etc/cloud/cloud.cfg (LP: #1582323)
 - Improve merging documentation [Daniel Watkins]
 - apt sources: support inserting key/key-id only, custom sources.list,
   long gpg key fingerprints with spaces, and dictionary format
   (LP: #1574113)
 - SmartOS: datasource improvements and support for metadata service
   providing networking information.
 - Datasources: centrally handle 'dsmode' and no longer require
   datasources to "pass" if modules_init should be executed with network
   access.
 - ConfigDrive: improved support for networking information from a
   network_data.json or older interfaces formatted network_config.
 - Change missing Cheetah log warning to debug [Andrew Jorgensen]
 - Remove trailing dot from GCE metadata URL (LP: #1581200) [Phil Roche]
 - support network rendering to sysconfig (for centos and RHEL)
 - write_files: if no permissions are given, just use default without
   warn.
 - user_data: fix error when user-data is not utf-8 decodable
   (LP: #1532072)
 - fix mcollective module with python3 (LP: #1597699) [Sergii Golovatiuk]
0.7.6:
 - open 0.7.6
 - Enable vendordata on CloudSigma datasource (LP: #1303986)
 - Poll on /dev/ttyS1 in CloudSigma datasource only if dmidecode says
   we're running on cloudsigma (LP: #1316475) [Kiril Vladimiroff]
 - SmartOS test: do not require existence of /dev/ttyS1. (LP: #1316597)
 - doc: fix user-groups doc to reference plural ssh-authorized-keys
   (LP: #1327065) [Joern Heissler]
 - fix 'make test' in python 2.6
 - support jinja2 as a templating engine. Drop the hard requirement on
   cheetah. This helps in python3 effort. (LP: #1219223)
 - change install path for systemd files to /lib/systemd/system
   [Dimitri John Ledkov]
 - change trunk debian packaging to use pybuild and drop cdbs.
   [Dimitri John Ledkov]
 - SeLinuxGuard: remove invalid check that looked for stat.st_mode in
   os.lstat.
 - do not write comments in /etc/timezone (LP: #1341710)
 - ubuntu: provide 'ubuntu-init-switch' module to aid in systemd testing.
 - status/result json: remove 'end' entry which was always null
 - systemd: make cloud-init block ssh service startup to guarantee keys
   are generated. [Jordan Evans] (LP: #1333920)
 - default settings: fix typo resulting in OpenStack and GCE not working
   unless config explicitly provided (LP: #1329583) [Garrett Holmstrom]
 - fix rendering resolv.conf if no 'options' are provided (LP: #1328953)
 - docs: fix disk-setup to reference 'table_type' [Rail Aliiev]
   (LP: #1313114)
 - ssh_authkey_fingerprints: fix bug that prevented disabling the module.
   (LP: #1340903) [Patrick Lucas]
 - no longer use pylint as a checker, fix pep8 [Jay Faulkner]
 - Openstack: do not load some urls twice.
 - FreeBsd: fix initscripts and add working config file [Harm Weites]
 - Datasource: fix broken logic to provide hostname if datasource does
   not provide one
 - Improved and less verbose logging.
 - resizefs: first check that device is writable.
 - configdrive: fix reading of vendor data to be like metadata service
   reader. [Jay Faulkner]
 - resizefs: fix broken background resizing [Jay Faulkner] (LP: #1338614)
 - cc_grub_dpkg: fix EC2 hvm instances to avoid prompt on grub update.
   (LP: #1336855)
 - FreeBsd: support config drive datasource [Joseph bajin]
 - cc_mounts: support creating a swap file
 - DigitalOcean & GCE: fix get_hostname consistency
0.7.5:
 - open 0.7.5
 - Add a debug log message around import failures
 - add a 'debug' module for easily printing out some information about
   datasource and cloud-init [Shraddha Pandhe]
 - support running apt with 'eatmydata' via configuration token
   apt_get_wrapper (LP: #1236531).
 - convert paths provided in config-drive 'files' to string before
   writing (LP: #1260072).
 - Azure: minor changes in logging output. ensure filenames are strings
   (not unicode).
 - config/cloud.cfg.d/05_logging.cfg: provide a default 'output' setting,
   to redirect cloud-init stderr and stdout to
   /var/log/cloud-init-output.log.
 - drop support for resizing partitions with parted entirely
   (LP: #1212492). This was broken as it was anyway.
 - add support for vendordata in SmartOS and NoCloud datasources.
 - drop dependency on boto for crawling ec2 metadata service.
 - add 'Requires' on sudo (for OpenNebula datasource) in rpm specs, and
   'Recommends' in the debian/control.in [Vlastimil Holer]
 - if mount_info reports /dev/root is a device path for /, then convert
   that to a device via help of kernel cmdline.
 - configdrive: consider partitions as possible datasources if they have
   the correct filesystem label. [Paul Querna]
 - initial freebsd support [Harm Weites]
 - fix in is_ipv4 to accept IP addresses with a '0' in them.
 - Azure: fix issue when stale data in /var/lib/waagent (LP: #1269626)
 - skip config_modules that declare themselves only verified on a set of
   distros. Add them to 'unverified_modules' list to run anyway.
 - Add CloudSigma datasource [Kiril Vladimiroff]
 - Add initial support for Gentoo and Arch distributions [Nate House]
 - Add GCE datasource [Vaidas Jablonskis]
 - Add native Openstack datasource which reads openstack metadata rather
   than relying on EC2 data in openstack metadata service.
 - SmartOS, AltCloud: disable running on arm systems due to bug
   (LP: #1243287, #1285686) [Oleg Strikov]
 - Allow running a command to seed random, default is 'pollinate -q'
   (LP: #1286316) [Dustin Kirkland]
 - Write status to /run/cloud-init/status.json for consumption by other
   programs (LP: #1284439)
 - Azure: if a reboot causes ephemeral storage to be re-provisioned, then
   we need to re-format it. (LP: #1292648)
 - OpenNebula: support base64 encoded user-data
   [Enol Fernandez, Peter Kotcauer]
0.7.4:
 - fix issue mounting 'ephemeral0' if ephemeral0 was an alias for a
   partitioned block device with target filesystem on ephemeral0.1.
   (LP: #1236594)
 - fix DataSourceAzure incompatibility with 2.6 (LP: #1232175)
 - fix power_state_change config module so that example works. Improve
   its documentation and add reference to 'timeout'
 - support apt-add-archive with 'cloud-archive:' format. (LP: #1244355)
 - Change SmartOS verb for availability zone (LP: #1249124)
 - documentation fix for boothooks to use 'cloud-init-per'
 - fix resizefs module by supporting kernels that do not have
   /proc/PID/mountinfo. (LP: #1248625) [Tim Daly Jr.]
 - fix 'make rpm' by removing 0.6.4 entry from ChangeLog (LP: #1241834)
0.7.3:
 - fix omnibus chef installer (LP: #1182265) [Chris Wing]
 - small fix for OVF datasource for iso transport on non-iso9660
   filesystem
 - determine if upstart version is suitable for 'initctl
   reload-configuration' (LP: #1124384). If so, then invoke it.
 - support setting up instance-store disk with partition table and
   filesystem.
 - add Azure datasource.
 - add support for SuSE / SLES [Juerg Haefliger]
 - add a trailing carriage return to chpasswd input, which reportedly
   caused a problem on rhel5 if missing.
 - support individual MIME segments to be gzip compressed (LP: #1203203)
 - always finalize handlers even if processing failed (LP: #1203368)
 - support merging into cloud-config via jsonp. (LP: #1200476)
 - add datasource 'SmartOS' for Joyent Cloud. Adds a dependency on serial.
 - add 'log_time' helper to util for timing how long things take which
   also reads from uptime. uptime is useful as clock may change during
   boot due to ntp.
 - prefer growpart resizer to 'parted resizepart' (LP: #1212492)
 - support random data seed from config drive or azure, and a module
   'seed_random' to read that and write it to /dev/urandom.
 - add OpenNebula Datasource [Vlastimil Holer]
 - add 'cc_disk_setup' config module for partitioning disks and creating
   filesystems. Useful if attached disks are not formatted (LP: #1218506)
 - Fix usage of libselinux-python when selinux is disabled.
   [Garrett Holmstrom]
 - multi_log: only write to /dev/console if it exists [Garrett Holmstrom]
 - config/cloud.cfg: add 'sudo' to the list of groups for the default
   user (LP: #1228228)
 - documentation fix for use of 'mkpasswd' [Eric Nordlund]
 - respect /etc/growroot-disabled file (LP: #1234331)
0.7.2:
 - add a debian watch file
 - add 'sudo' entry to ubuntu's default user (LP: #1080717)
 - fix resizefs module when 'noblock' was provided (LP: #1080985)
 - make sure there is no blank line before the cloud-init entry in
   /etc/ca-certificates.conf (LP: #1077020)
 - fix sudoers writing when entry is a string (LP: #1079002)
 - tools/write-ssh-key-fingerprints: use '-s' rather than '--stderr'
   option (LP: #1083715)
 - make install of puppet configurable (LP: #1090205) [Craig Tracey]
 - support omnibus installer for chef [Anatoliy Dobrosynets]
 - fix bug where cloud-config in user-data could not modify system_info
   settings (LP: #1090482)
 - fix CloudStack DataSource to use Virtual Router as described by
   CloudStack documentation if it is available by searching through
   dhclient lease files. If it is not available, then fall back to the
   default gateway. (LP: #1089989)
 - fix redaction of password field in log (LP: #1096417)
 - fix to cloud-config user setup. Previously, lock_passwd was broken and
   all accounts would be locked unless 'system' was given (LP: #1096423).
 - Allow 'sr0' (or sr[0-9]) to be specified without /dev/ as a source for
   mounts. [Vlastimil Holer]
 - allow config-drive-data to come from a CD device by more correctly
   filtering out partitions. (LP: #1100545)
 - setup docs to be available on read-the-docs
   https://cloudinit.readthedocs.org/en/latest/ (LP: #1093039)
 - add HACKING file for information on contributing
 - handle the legacy 'user:' configuration better, making it affect the
   configured OS default user (LP: #1100920)
 - Adding a resolv.conf configuration module (LP: #1100434). Currently
   only working on redhat systems (no support for resolvconf)
 - support grouping linux distros into "os_families".
   This allows a module to operate on the family (redhat or debian)
   rather than the distro (ubuntu, debian, fedora, rhel) (LP: #1100029)
 - fix /etc/hosts writing when templates are used (LP: #1100036)
 - add package versioning logic to package installation functionality
   (LP: #1108047)
 - fix documentation for write_files to correctly list 'permissions'
   rather than 'perms' (LP: #1111205)
 - cloud-init-container.conf: ensure /run/network exists before running
   ifquery
 - DataSourceNoCloud: allow user-data and meta-data to be specified in
   config (LP: #1115833).
 - improve debian support in sysvinit scripts, package build scripts, and
   split sources.list template to be distro specific.
 - support for resizing btrfs root filesystems [Blair Zajac]
 - fix issue when writing ssh keys to .ssh/authorized_keys (LP: #1136343)
 - upstart: cloud-init-nonet.conf trap the TERM signal, so that dmesg or
   other output does not get a 'killed by TERM signal' message.
 - support resizing partitions via growpart or parted (LP: #1136936)
 - allow specifying apt-get command in distro config ('apt_get_command')
 - support different and user-suppliable merging algorithms for
   cloud-config (LP: #1023179)
 - use python-requests rather than urllib2. By using recent versions of
   python-requests, we get https support (LP: #1067888).
 - make apt-get invoke 'dist-upgrade' rather than 'upgrade' for
   package_upgrade. (LP: #1164147)
 - improvements for systemd with Fedora 18
 - workaround 2.6 kernel issue that stopped blkid from showing /dev/sr0
 - add new, backwards compatible merging syntax so merging of cloud-config
   can be more useful.
0.7.1:
 - sysvinit: fix missing dependency in cloud-init job for RHEL 5.6
 - config-drive: map hostname to local-hostname (LP: #1061964)
 - landscape: install landscape-client package if not installed. only take
   action if cloud-config is present (LP: #1066115)
 - cc_landscape: restart landscape after install or config (LP: #1070345)
 - multipart/archive: do not fail on unknown headers in multipart mime or
   cloud-archive config (LP: #1065116).
 - tools/Z99-cloud-locale-test.sh: avoid warning when user's shell is zsh
   (LP: #1073077)
 - fix stack trace when unknown user-data input had unicode (LP: #1075756)
 - split 'apt-update-upgrade' config module into 'apt-configure' and
   'package-update-upgrade-install'. The 'package-update-upgrade-install'
   will be a cross distro module.
 - Cleanups:
   - Remove usage of paths.join, as all code should run through util
     helpers
   - Fix pylint complaining about tests folder 'helpers.py' not being found
   - Add a pylintrc file that is used instead of options hidden in
     'run_pylint'
 - fix bug where cloud-config from user-data could not affect system_info
   settings [revno 703] (LP: #1076811)
 - write fqdn to system config for rh/fedora [revno 704]
 - add yaml/cloud config examples checking tool [revno 706]
 - Fix the merging of group configuration when that group configuration is
   a dict => members. [revno 707]
 - add yum_add_repo configuration module for adding additional yum repos
 - fix public key importing with config-drive-v2 datasource (LP: #1077700)
 - handle renaming and fixing up of marker names (LP: #1075980) [revno 710]
   this relieves that burden from the distro/packaging.
 - group config: fix how group members weren't being translated correctly
   when the group: [member, member...]
   format was used (LP: #1077245)
 - sysconfig: fix /etc/sysconfig/network to use the fully qualified domain
   name instead of the partially qualified domain name which is used in
   the ubuntu/debian case (LP: #1076759)
 - fix how string escaping was not working when the string was a unicode
   string, which was causing the warning message not to be written out
   (LP: #1075756)
 - for boto > 0.6.0 there was a lazy load of the metadata added. when
   cloud-init runs, the usage of this lazy loading is hidden, and since
   that lazy loading will be performed on future attribute access we must
   traverse the lazy loaded dictionary and force it to fully expand, so
   that if cloud-init blocks the ec2 metadata port the lazy loaded
   dictionary will continue working properly instead of trying to make
   additional url calls which will fail (LP: #1068801)
 - use a set of helper/parsing classes to perform system configuration for
   easier testing. (/etc/sysconfig, /etc/hostname, resolv.conf, /etc/hosts)
 - add power_state_change config module for shutting down the system after
   cloud-init finishes. (LP: #1064665)
0.7.0:
 - add an 'exception_cb' argument to 'wait_for_url'. If provided, this
   method will be called back with the exception received and the message
   (a sketch of this callback pattern appears a few entries below).
 - utilize the 'exception_cb' above to modify the oauth timestamp in
   DataSourceMAAS requests if a 401 or 403 is received. (LP: #978127)
 - catch signals and exit rather than stack tracing
 - if logging fails, enable a fallback logger by patching the logging
   module
 - do not 'start networking' in cloud-init-nonet, but add
   cloud-init-container job that runs only if in container and emits
   net-device-added (LP: #1031065)
 - search only top level dns for 'instance-data' in DataSourceEc2
   (LP: #1040200)
 - add support for config-drive-v2 (LP: #1037567)
 - support creating users, including the default user.
   [Ben Howard] (LP: #1028503)
 - add apt_reboot_if_required to reboot if an upgrade or package
   installation forced the need for one (LP: #1038108)
 - allow distro mirror selection to include availability-zone
   (LP: #1037727)
 - allow arch specific mirror selection (select ports.ubuntu.com on arm)
   (LP: #1028501)
 - allow specification of security mirrors (LP: #1006963)
 - add the 'None' datasource (LP: #906669), which will allow jobs to run
   even if there is no "real" datasource found.
 - write ssh authorized keys to console, ssh_authkey_fingerprints config
   module [Joshua Harlow] (LP: #1010582)
 - Added RHEVm and vSphere support as source AltCloud [Joseph VLcek]
 - add write-files module (LP: #1012854)
 - Add setuptools + cheetah to debian package build dependencies
   (LP: #1022101)
 - Adjust the sysvinit local script to provide 'cloud-init-local' and have
   the cloud-config script depend on that as well.
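
The 'exception_cb' argument noted at the top of this 0.7.0 block is a
retry-loop hook: each failed fetch is reported to the callback, which can
adjust request state before the next attempt. A minimal sketch with a
simplified signature (the real wait_for_url in cloud-init's url helper
takes additional arguments):

    import time
    import urllib.request

    def wait_for_url(urls, max_wait=120, sleep_time=1, exception_cb=None):
        # Poll each url until one answers or max_wait expires.
        start = time.time()
        while time.time() - start < max_wait:
            for url in urls:
                try:
                    return url, urllib.request.urlopen(url, timeout=5).read()
                except Exception as exc:
                    if exception_cb:
                        # e.g. DataSourceMAAS re-signs its oauth headers
                        # here when it sees a 401/403.
                        exception_cb("failed to read %s" % url, exc)
            time.sleep(sleep_time)
        return None, None
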
 - Add the 'bzr' name to all packages built
 - Reduce logging levels for certain non-critical cases to DEBUG instead of
   the previous level of WARNING
 - unified binary that activates the various stages
   - Now using argparse + subcommands to specify the various CLI options
   - a stage module that clearly separates the stages of the different
     components (also describes how they are used and in what order in the
     new unified binary)
 - user_data is now a module that just does user data processing while the
   actual activation and 'handling' of the processed user data is done via
   a separate set of files (and modules) with the main 'init' stage being
   the controller of this
 - creation of boot_hook, cloud_config, shell_script, upstart_job version 2
   modules (with classes that perform their functionality) instead of
   those having functionality that is attached to the cloudinit object
   (which reduces reuse and limits future functionality, and makes testing
   harder)
 - removal of global config that defined paths, shared config; now this is
   via objects, making unit testing and global side-effects a non-issue
 - creation of a 'helpers.py'
   - this contains an abstraction for the 'lock' like objects that the
     various module/handler running stages use to avoid re-running a given
     module/handler for a given frequency. this makes it separated from the
     actual usage of that object (thus helpful for testing and clear lines
     of usage and how the actual job is accomplished)
   - a common 'runner' class is the main entrypoint using these locks to
     run function objects passed in (along with their arguments) and their
     frequency
 - add in a 'paths' object that provides access to the previously global
   and/or config based paths (thus providing a single entrypoint
   object/type that provides path information)
   - this also adds in the ability to change the path when constructing
     that path 'object' and adding in additional config that can be used to
     alter the root paths of 'joins' (useful for testing or possibly useful
     in chroots?)
     - config options now available that can alter the 'write_root' and the
       'read_root' when backing code uses the paths join() function
 - add a config parser subclass that will automatically add unknown
   sections and return default values (instead of throwing exceptions for
   these cases)
 - a new config merging class that will be the central object that knows
   how to do the common configuration merging from the various
   configuration sources. The order is the following (as sketched below):
   - cli config files override environment config files which override
     instance configs which override datasource configs which override base
     configuration which overrides default configuration.
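
That precedence chain can be read as a fold over config dicts, lowest
priority first. A toy sketch using a plain shallow dict merge for
illustration only; cloud-init's actual merging is deeper and the values
shown here are made up:

    def merge_cfgs(cfgs):
        # later (higher priority) dicts override earlier ones
        merged = {}
        for cfg in cfgs:
            merged.update(cfg)  # shallow merge, for illustration only
        return merged

    merged = merge_cfgs([
        {"log_level": "WARN"},             # default configuration
        {"log_level": "DEBUG"},            # base configuration
        {"datasource_list": ["NoCloud"]},  # datasource config
        {},                                # instance config
        {},                                # environment config files
        {"log_level": "INFO"},             # cli config files
    ])
    # -> {'log_level': 'INFO', 'datasource_list': ['NoCloud']}
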
 - remove the passing around of the 'cloudinit' object as a 'cloud'
   variable and instead pass around an 'interface' object that can be given
   to modules and handlers as their cloud access layer, while the backing
   of that object can be varied (good for abstraction and testing)
 - use a single set of functions to do importing of modules
 - add a function which will search for a given set of module names with a
   given set of attributes and return those which are found
 - refactor logging so that instead of using a single top level 'log',
   each component/module can use its own logger (if desired); this should
   be backwards compatible with handlers and config modules that used the
   passed in logger (it's still passed in)
 - ensure that all places where exceptions are caught, and where
   applicable, the util logexc() is called, so that no exceptions that may
   occur are dropped without first being logged (where it makes sense for
   this to happen)
 - add a 'requires' file that lists cloud-init dependencies; applying it in
   package creation (bdeb and brpm) as well as using it in the modified
   setup.py to ensure dependencies are installed when using that method of
   packaging
 - add a 'version.py' that lists the active version (in code) so that code
   inside cloud-init can report the version in messaging and other config
   files
 - cleanup of subprocess usage so that all subprocess calls go through the
   subp() utility method, which now has an exception type that will provide
   detailed information on python 2.6 and 2.7
 - forced all code loading, moving, chmod, writing files and other system
   level actions to go through a standard set of util functions; this
   greatly helps in debugging and determining exactly which system actions
   cloud-init is performing
 - adjust url fetching and url trying to go through a single function that
   reads urls in the new 'url helper' file; this helps in tracing,
   debugging and knowing which urls are being called and/or posted to from
   within cloud-init code
 - add in the sending of a 'User-Agent' header for all urls fetched that do
   not provide their own header mapping; derive this user-agent from the
   following template, 'Cloud-Init/{version}' where the version is the
   cloud-init version number
 - using prettytable for netinfo 'debug' printing since it provides a
   standard and defined output that should be easier to parse than a custom
   format
 - add a set of distro specific classes that handle distro specific actions
   that modules and/or handler code can use as needed; this is organized
   into a base abstract class with child classes that implement the shared
   functionality. config determines exactly which subclass to load, so it
   can be easily extended as needed.
   - current functionality
     - network interface config file writing
     - hostname setting/updating
     - locale/timezone setting
     - updating of /etc/hosts (with templates or generically)
     - package commands (ie installing, removing)/mirror finding
     - interface up/down activating
   - implemented a debian + ubuntu subclass
   - implemented a redhat + fedora subclass
 - adjust the root 'cloud.cfg' file to now have distribution/path specific
   configuration values in it.
   these special configs are merged as the normal config is, but the system
   level config is not passed into modules/handlers; modules/handlers must
   go through the path and distro object instead
 - have the cloudstack datasource test the url before calling into boto to
   avoid the long wait for boto to finish retrying and finally fail when
   the gateway meta-data address is unavailable
 - add a simple mock ec2 meta-data python based http server that can serve
   a very simple set of ec2 meta-data back to callers
   - useful for testing or for understanding what the ec2 meta-data service
     can provide in terms of data or functionality
 - for ssh key and authorized key file parsing add in classes and util
   functions that maintain the state of individual lines, allowing for a
   clearer separation of parsing and modification (useful for testing and
   tracing)
 - add a set of 'base' init.d scripts that can be used on systems that do
   not have full upstart or systemd support (or support that does not match
   the standard fedora/ubuntu implementation)
   - currently these are being tested on RHEL 6.2
 - separate the datasources into their own subdirectory (instead of being
   a top-level item); this matches how config 'modules' and user-data
   'handlers' are also in their own subdirectory (thus helping new
   developers and others understand the code layout in a quicker manner)
 - add the building of rpms based off a new cli tool and template 'spec'
   file that will templatize and perform the necessary commands to create a
   source and binary package to be used with a cloud-init install on a
   'rpm' supporting system
   - uses the new standard set of requires and converts those pypi
     requirements into a local set of package requirements (that are known
     to exist on RHEL systems but should also exist on fedora systems)
 - adjust the bdeb builder to be a python script (instead of a shell
   script) and make its 'control' file a template that takes in the
   standard set of pypi dependencies and uses a local mapping (known to
   work on ubuntu) to create the packages set of dependencies (that should
   also work on ubuntu-like systems)
 - pythonify a large set of various pieces of code
   - remove wrapping return statements with () when it has no effect
   - upper case all constants used
   - correctly 'case' class and method names (where applicable)
   - use os.path.join (and similar commands) instead of custom path
     creation
   - use 'is None' instead of the frowned upon '== None', which picks up a
     larger set of 'true' cases than is typically desired (ie for objects
     that have their own equality)
   - use context managers on locks, tempdir, chdir, file, selinux, umask,
     unmounting commands so that these actions do not have to be closed
     and/or cleaned up manually in finally blocks, which is typically not
     done and will eventually be a bug in the future
 - use the 'abc' module for abstract base classes where possible
   - applied in the datasource root class, the distro root class, and the
     user-data v2 root class
 - when loading yaml, check that the 'root' type matches a predefined set
   of valid types (typically just 'dict') and throw a type error if a
   mismatch occurs; this seems to be a good idea to do when loading user
   config files
 - when forking a long running task (ie resizing a filesystem) use a new
   util function that will fork and then call a callback, instead of having
   to implement all that code in a non-shared location (thus allowing it to
   be used by others in the future)
 - when writing out filenames, go through a util function that will attempt
   to ensure that the given filename is
   'filesystem' safe by replacing '/' with '_' and removing characters
   which do not match a given whitelist of allowed filename characters
 - for the varying usages of the 'blkid' command make a function in the
   util module that can be used as the single point of entry for
   interaction with that command (and its results) instead of having X
   separate implementations
 - place the rfc 2822 time formatting and uptime repeated pieces of code in
   the util module as a set of functions with the names
   'time_rfc2822'/'uptime'
 - separate the pylint+pep8 calling from one tool into two individual tools
   so that they can be called independently; add make file sections that
   can be used to call these independently
 - remove the support for the old style config that was previously located
   in '/etc/ec2-init/ec2-config.cfg'. no longer supported!
 - instead of using an altered config parser that added its own 'dummy'
   section in the 'mcollective' module, use configobj, which handles the
   parsing of config without sections better (and it also maintains
   comments instead of removing them)
 - use the new defaulting config parser (that will not raise errors on
   sections that do not exist or return errors when values are fetched that
   do not exist) in the 'puppet' module
 - for config 'modules' add in the ability for the module to provide a list
   of distro names which it is known to work with; if, when run, the name
   of the distro being used does not match one of those in this list, a
   warning will be written out saying that this module may not work
   correctly on this distribution
 - for all dynamically imported modules ensure that they are fixed up
   before they are used by ensuring that they have certain attributes; if
   they do not have those attributes they will be set to a sensible set of
   defaults instead
 - adjust all 'config' modules and handlers to use the adjusted util
   functions and the new distro objects where applicable so that those
   pieces of code can benefit from the unified and enhanced functionality
   being provided in that util module
 - fix a potential bug whereby when a #includeonce was encountered it would
   enable checking of urls against a cache; if later a #include was
   encountered it would continue checking against that cache, instead of
   refetching (which would likely be the expected case)
 - add an openstack/nova based pep8 extension utility ('hacking.py') that
   allows for custom checks (along with the standard pep8 checks) to occur
   when running 'make pep8' and its derivatives
 - support relative path in AuthorizedKeysFile (LP: #970071).
 - make apt-get update run with --quiet (suitable for logging)
   (LP: #1012613)
 - cc_salt_minion: use package 'salt-minion' rather than 'salt'
   (LP: #996166)
 - use yaml.safe_load rather than yaml.load (LP: #1015818)
0.6.3:
 - add sample systemd config files [Garrett Holmstrom]
 - add Fedora support [Garrett Holmstrom] (LP: #883286)
 - fix bug in netinfo.debug_info if no net devices available (LP: #883367)
 - use python module hashlib rather than md5 to avoid deprecation warnings
   (a sketch of the swap appears below).
 - support configuration of mirror based on dns name ubuntu-mirror in
   local domain.
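
The hashlib swap flagged above is mechanical; the standalone md5 module was
deprecated in python 2.5 in favor of hashlib. A minimal before/after
sketch:

    import hashlib

    # old, deprecation-warning style:
    #   import md5
    #   digest = md5.new(data).hexdigest()

    def md5_hexdigest(data):
        # hashlib provides the same digest without the warning
        return hashlib.md5(data).hexdigest()

    print(md5_hexdigest(b"instance-id"))
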
 - support setting of Acquire::HTTP::Proxy via 'apt_proxy'
 - DataSourceEc2: more resilient to slow metadata service
   - config change: 'retries' dropped, 'max_wait' added, timeout increased
 - close stdin in all cloud-init programs that are launched at boot
   (LP: #903993)
 - revert management of /etc/hosts to 0.6.1 style (LP: #890501, LP: #871966)
 - write full ssh keys to console for easy machine consumption
   (LP: #893400)
 - put INSTANCE_ID environment variable in bootcmd scripts
 - add 'cloud-init-per' script for easily running things with a given
   frequency
 - replace cloud-init-run-module with cloud-init-per
 - support configuration of landscape-client via cloud-config (LP: #857366)
 - part-handlers now get base64 decoded content rather than 2xbase64
   encoded in the payload parameter. (LP: #874342)
 - add test case framework [Mike Milner] (LP: #890851)
 - fix pylint warnings [Juerg Haefliger] (LP: #914739)
 - add support for adding and deleting CA Certificates [Mike Milner]
   (LP: #915232)
 - in ci-info lines, use '.' to indicate empty field for easier machine
   reading
 - support empty lines in "#include" files (LP: #923043)
 - support configuration of salt minions [Jeff Bauer] (LP: #927795)
 - DataSourceOVF: only search for OVF data on ISO9660 filesystems
   (LP: #898373)
 - DataSourceConfigDrive: support getting data from openstack config drive
   (LP: #857378)
 - DataSourceNoCloud: support seed from external disk of ISO or vfat
   (LP: #857378)
 - DataSourceNoCloud: support inserting /etc/network/interfaces
 - DataSourceMaaS: add data source for Ubuntu Machines as a Service (MaaS)
   (LP: #942061)
 - DataSourceCloudStack: add support for CloudStack datasource
   [Cosmin Luta]
 - add option 'apt_pipelining' to address issue with S3 mirrors
   (LP: #948461) [Ben Howard]
 - warn on non-multipart, non-handled user-data [Martin Packman]
 - run resizefs in the background in order to not block boot (LP: #961226)
 - Fix bug in Chef support where validation_key was present in config, but
   'validation_cert' was not (LP: #960547)
 - Provide user friendly message when an invalid locale is set
   [Ben Howard] (LP: #859814)
 - Support reading cloud-config from kernel command line parameter and
   populating local file with it, which can then provide data for
   DataSources
 - improve chef examples for working configurations on 11.10 and 12.04
   [Lorin Hochstein] (LP: #960564)
0.6.2:
 - fix bug where update was not done unless update was explicitly set.
   It would not be run if 'upgrade' or packages were set to be installed.
 - fix bug in part-handler code that prevented working part-handlers
   (LP: #739694)
 - fix bug in resizefs cloud-config that would cause trace based on failure
   of 'blkid /dev/root' (LP: #726938)
 - convert dos formatted files to unix for user-scripts, boothooks, and
   upstart jobs (LP: #744965)
 - fix bug in seeding of grub dpkg configuration (LP: #752361) due to
   renamed devices in newer (natty) kernels (/dev/sda1 -> /dev/xvda1)
 - make metadata urls configurable, to support eucalyptus in STATIC or
   SYSTEM modes (LP: #761847)
 - support disabling byobu in cloud-config
 - run cc_ssh as a cloud-init module so it is guaranteed to run before ssh
   starts (LP: #781101)
 - make prefix for keys added to /root/.ssh/authorized_keys configurable
   and add 'no-port-forwarding,no-agent-forwarding,no-X11-forwarding' to
   the default (LP: #798505)
 - make 'cloud-config ready' command configurable (LP: #785551)
 - make fstab fields used to 'fill in' shorthand entries configurable.
   This means you do not have to have 'nobootwait' in the values
   (LP: #785542)
 - read /etc/ssh/sshd_config for AuthorizedKeysFile rather than assuming
   ~/.ssh/authorized_keys (LP: #731849)
 - fix cloud-init in ubuntu lxc containers (LP: #800824)
 - sanitize hosts file for system's hostname to 127.0.1.1 (LP: #802637)
 - add chef support (cloudinit/CloudConfig/cc_chef.py) (LP: #798844)
 - do not give trace on failure to resize in lxc container (LP: #800856)
 - increase the timeout on url gets for "seedfrom" values (LP: #812646)
 - do not write entries for ephemeral0 on t1.micro (LP: #744019)
 - support 'include-once' so that expiring or one-time use urls can be used
   for '#include' to provide sensitive data.
 - support for passing public and private keys to mcollective via
   cloud-config
 - support multiple statically configured network devices, as long as all
   of them come up early (LP: #810044)
 - Changes to handling user data mean that:
   * boothooks will now run more than once as they were intended (and as
     bootcmd commands do)
   * cloud-config and user-scripts will be updated from user data every
     boot
 - Fix issue where 'isatty' would return true for apt-add-repository.
   apt-add-repository would get stdin which was attached to a terminal
   (/dev/console) and would thus hang when running during boot.
   (LP: #831505) This was done by changing all users of util.subp to have
   None input unless specified.
 - Add some debug info to the console when cloud-init runs. This is useful
   when debugging: IP and route information is printed to the console.
 - change the mechanism for handling .ssh/authorized_keys to update entries
   rather than appending (a sketch follows at the end of this section).
   This ensures that the authorized_keys that are being inserted actually
   do something (LP: #434076, LP: #833499)
 - log warning on failure to set hostname (LP: #832175)
 - upstart/cloud-init-nonet.conf: wait for all network interfaces to be up;
   allow for the possibility of /var/run != /run.
 - DataSourceNoCloud, DataSourceOVF: do not provide a default hostname.
   This way the configured hostname of the system will be used if not
   provided by metadata (LP: #838280)
 - DataSourceOVF: change the default instance id to 'iid-dsovf' from
   'nocloud'
 - Improve the OVF documentation, and provide a simple command line tool
   for creating a useful ISO file.
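
The authorized_keys change above replaces a matching entry in place instead
of appending a duplicate, keyed on the key material itself. A naive sketch
that assumes plain 'type key comment' lines; the real parser also copes
with a leading options field:

    def update_authorized_keys(existing_lines, new_entries):
        def key_of(line):
            # naive split: assumes 'type base64-key [comment]'
            parts = line.strip().split()
            return parts[1] if len(parts) >= 2 else None

        pending = {key_of(entry): entry for entry in new_entries}
        updated = []
        for line in existing_lines:
            k = key_of(line)
            # replace the line if we carry a new entry for the same key
            updated.append(pending.pop(k, line) if k else line)
        updated.extend(pending.values())
        return updated
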
0.6.1:
 - fix bug in fixing permission on /var/log/cloud-init.log (LP: #704509)
 - improve comment strings in rsyslog file tools/21-cloudinit.conf
 - add previous-instance-id and previous-datasource files to datadir
 - add 'datasource' file to instance dir
 - add setting of passwords and enabling/disabling of
   PasswordAuthentication for sshd. By default no changes are done to sshd.
 - fix for puppet configuration options (LP: #709946) [Ryan Lane]
 - fix pickling of DataSource, which broke seeding.
 - turn resize_rootfs default to True
 - avoid mounts in DataSourceOVF if 'read' on device fails.
   'mount /dev/sr0' for an empty virtual cdrom device was taking 18 seconds
 - add 'manual_cache_clean' option to select manual cleaning of the
   /var/lib/cloud/instance/ link, for a data source that might not be
   present on every boot
 - make DataSourceEc2 retries and timeout configurable
 - add helper routines for apt-get update and install
 - add 'bootcmd' like 'runcmd' to cloud-config syntax for running things
   early
 - move from '#opt_include' in config file format to conf_d. ie, now files
   in /etc/cloud.cfg.d/ are read rather than reading '#opt_include ' or
   '#include ' in cloud.cfg
 - allow /etc/hosts to be written from hosts.tmpl, which allows getting
   local-hostname into /etc/hosts (LP: #720440)
 - better handle startup if there is no eth0 (LP: #714807)
 - update rather than append in puppet config [Marc Cluet]
 - add cloud-config for mcollective [Marc Cluet]
0.6.0:
 - change permissions of /var/log/cloud-init.log to accommodate syslog
   writing to it (LP: #704509)
 - rework of /var/lib/cloud layout
 - remove updates-check (LP: #653220)
 - support resizing / on first boot (enabled by default)
 - added support for running CloudConfig modules at cloud-init time rather
   than cloud-config time, and the new 'cloud_init_modules' entry in
   cloud.cfg to indicate which should run then. The driving force behind
   this was to have the rsyslog module able to run before rsyslog even runs
   so that a restart would not be needed (rsyslog on ubuntu runs on
   'filesystem')
 - moved setting and updating of hostname to cloud_init_modules. this
   allows the user to easily disable these from running. This also means:
   - the semaphore name for 'set_hostname' and 'update_hostname' changes to
     'config_set_hostname' and 'config_update_hostname'
 - added cloud-config option 'hostname' for setting hostname
 - moved upstart/cloud-run-user-script.conf to upstart/cloud-final.conf
 - cloud-final.conf now runs cloud-config modules similar to cloud-config
   and cloud-init.
 - LP: #653271
   - added writing of "boot-finished" to
     /var/lib/cloud/instance/boot-finished; this is the last thing done,
     indicating cloud-init is finished booting
   - writes message to console with timestamp and uptime
 - write ssh keys to console as one of the last things done; this is to
   ensure they don't get run off the 'get-console-output' buffer
 - user_scripts run via cloud-final and thus the semaphore is renamed from
   user_scripts to config_user_scripts
 - add support for redirecting output of cloud-init, cloud-config,
   cloud-final via the config file, or user data config file
 - add support for posting data about the instance to a url (phone_home)
 - add minimal OVF transport (iso) support
 - make DataSources that are attempted dynamic and configurable from system
   config. this changes "cloud_type: auto" as configuration for this to
   'datasource_list: [ "Ec2" ]'.
   Each of the items in that list must be modules that can be loaded by
   "DataSource"
 - add 'timezone' option to cloud-config (LP: #645458)
 - Added an additional archive format that can be used for multi-part
   input to cloud-init. This may be more user friendly than mime-multipart.
   See example in doc/examples/cloud-config-archive.txt (LP: #641504)
 - add support for reading Rightscale style user data (LP: #668400) and
   acting on it in cloud-config (cc_rightscale_userdata.py)
 - make the message on 'disable_root' more clear (LP: #672417)
 - do not require public key if private is given in ssh cloud-config
   (LP: #648905)
# vi: syntax=text textwidth=79
cloud-init-18.2-14-g6d48d265/HACKING.rst000066400000000000000000000075661326573344200170140ustar00rootroot00000000000000*********************
Hacking on cloud-init
*********************

This document describes how to contribute changes to cloud-init. It assumes
you have a `Launchpad`_ account, and refers to your launchpad user as
``LP_USER`` throughout.

Do these things once
====================

* To contribute, you must sign the Canonical `contributor license agreement`_

  If you have already signed it as an individual, your Launchpad user will
  be listed in the `contributor-agreement-canonical`_ group. Unfortunately
  there is no easy way to check if an organization or company you are doing
  work for has signed. If you are unsure or have questions, email
  `Scott Moser `_ or ping smoser in the ``#cloud-init`` channel via
  freenode.

  When prompted for 'Project contact' or 'Canonical Project Manager' enter
  'Scott Moser'.

* Configure git with your email and name for commit messages.

  Your name will appear in commit messages and will also be used in
  changelogs or release notes. Give yourself credit!::

    git config user.name "Your Name"
    git config user.email "Your Email"

* Clone the upstream `repository`_ on Launchpad::

    git clone https://git.launchpad.net/cloud-init
    cd cloud-init

  There is more information on Launchpad as a git hosting site in
  `Launchpad git documentation`_.

* Create a new remote pointing to your personal Launchpad repository.
  This is equivalent to 'fork' on github.

  .. code:: sh

    git remote add LP_USER ssh://LP_USER@git.launchpad.net/~LP_USER/cloud-init
    git push LP_USER master

.. _repository: https://git.launchpad.net/cloud-init
.. _contributor license agreement: http://www.canonical.com/contributors
.. _contributor-agreement-canonical: https://launchpad.net/%7Econtributor-agreement-canonical/+members
.. _Launchpad git documentation: https://help.launchpad.net/Code/Git

Do these things for each feature or bug
=======================================

* Create a new topic branch for your work::

    git checkout -b my-topic-branch

* Make and commit your changes (note, you can make multiple commits,
  fixes, more commits.)::

    git commit

* Run unit tests and lint/formatting checks with `tox`_::

    tox

* Push your changes to your personal Launchpad repository::

    git push -u LP_USER my-topic-branch

* Use your browser to create a merge request:

  - Open the branch on Launchpad.
  - You can see a web view of your repository and navigate to the branch
    at:

    ``https://code.launchpad.net/~LP_USER/cloud-init/``

  - It will typically be at:

    ``https://code.launchpad.net/~LP_USER/cloud-init/+git/cloud-init/+ref/BRANCHNAME``

    for example, here is larsks move-to-git branch:
    https://code.launchpad.net/~larsks/cloud-init/+git/cloud-init/+ref/feature/move-to-git

  - Click 'Propose for merging'
  - Select 'lp:cloud-init' as the target repository
  - Type '``master``' as the Target reference path
  - Click 'Propose Merge'
  - On the next page, hit 'Set commit message' and type a combined,
    git-style commit message like::

      Activate the frobnicator.

      The frobnicator was previously inactive and now runs by default.
      This may save the world some day.

      Then, list the bugs you fixed as footers with syntax as shown here.

      The commit message should be one summary line of less than 74
      characters followed by a blank line, and then one or more paragraphs
      describing the change and why it was needed. This is the message that
      will be used on the commit when it is squashed and merged into trunk.

      LP: #1

Then, someone in the `cloud-init-dev`_ group will review your changes and
follow up in the merge request. Feel free to ping and/or join
``#cloud-init`` on freenode irc if you have any questions.

.. _tox: https://tox.readthedocs.io/en/latest/
.. _Launchpad: https://launchpad.net
.. _cloud-init-dev: https://launchpad.net/~cloud-init-dev/+members#active
cloud-init-18.2-14-g6d48d265/LICENSE000066400000000000000000000022761326573344200162150ustar00rootroot00000000000000Copyright 2015 Canonical Ltd.

This program is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License version 3, as published
by the Free Software Foundation.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranties of MERCHANTABILITY,
SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program. If not, see <http://www.gnu.org/licenses/>.

Alternatively, this program may be used under the terms of the Apache
License, Version 2.0, in which case the provisions of that license are
applicable instead of those above. If you wish to allow use of your version
of this program under the terms of the Apache License, Version 2.0 only,
indicate your decision by deleting the provisions above and replace them
with the notice and other provisions required by the Apache License,
Version 2.0. If you do not delete the provisions above, a recipient may use
your version of this file under the terms of either the GPLv3 or the Apache
License, Version 2.0.
cloud-init-18.2-14-g6d48d265/LICENSE-Apache2.0000066400000000000000000000261361326573344200176150ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. 
You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. cloud-init-18.2-14-g6d48d265/LICENSE-GPLv3000066400000000000000000001045131326573344200171030ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. 
We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. 
A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. 
This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. 
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. 
A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) 
You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. 
If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. 
The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. 
It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: <program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.
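As a concrete illustration of the notice described above, here is a minimal sketch of such a header applied to a hypothetical Python source file (the program name, year, and author are placeholder assumptions, not taken from this tree):

# frobnicate.py - a tool that frobnicates widgets (hypothetical example)
# Copyright (C) 2018 Jane Example
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

Note that the source files in this tree instead use a shorter pointer-style header ("This file is part of cloud-init. See LICENSE file for license information."), which is another common way to provide the required pointer to the full notice.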
cloud-init-18.2-14-g6d48d265/MANIFEST.in000066400000000000000000000005341326573344200167410ustar00rootroot00000000000000include *.py MANIFEST.in LICENSE* ChangeLog global-include *.txt *.rst *.ini *.in *.conf *.cfg *.sh graft bash_completion graft config graft doc graft packages graft systemd graft sysvinit graft templates graft tests graft tools graft udev graft upstart prune build prune dist prune .tox prune .git prune .bzr exclude .gitignore exclude .bzrignore cloud-init-18.2-14-g6d48d265/Makefile000066400000000000000000000051231326573344200166420ustar00rootroot00000000000000CWD=$(shell pwd) PYVER ?= $(shell for p in python3 python2; do \ out=$$(command -v $$p 2>&1) && echo $$p && exit; done; exit 1) noseopts ?= -v YAML_FILES=$(shell find cloudinit tests tools -name "*.yaml" -type f ) YAML_FILES+=$(shell find doc/examples -name "cloud-config*.txt" -type f ) PIP_INSTALL := pip install ifeq ($(PYVER),python3) pyflakes = pyflakes3 unittests = unittest3 yaml = yaml else ifeq ($(PYVER),python2) pyflakes = pyflakes unittests = unittest else pyflakes = pyflakes pyflakes3 unittests = unittest unittest3 endif endif ifeq ($(distro),) distro = redhat endif READ_VERSION=$(shell $(PYVER) $(CWD)/tools/read-version || \ echo read-version-failed) CODE_VERSION=$(shell $(PYVER) -c "from cloudinit import version; print(version.version_string())") all: check check: check_version test $(yaml) style-check: pep8 $(pyflakes) pep8: @$(CWD)/tools/run-pep8 pyflakes: @$(CWD)/tools/run-pyflakes pyflakes3: @$(CWD)/tools/run-pyflakes3 unittest: clean_pyc nosetests $(noseopts) tests/unittests cloudinit unittest3: clean_pyc nosetests3 $(noseopts) tests/unittests cloudinit ci-deps-ubuntu: @$(PYVER) $(CWD)/tools/read-dependencies --distro ubuntu --test-distro ci-deps-centos: @$(PYVER) $(CWD)/tools/read-dependencies --distro centos --test-distro pip-requirements: @echo "Installing cloud-init dependencies..." $(PIP_INSTALL) -r "$@.txt" -q pip-test-requirements: @echo "Installing cloud-init test dependencies..." $(PIP_INSTALL) -r "$@.txt" -q test: $(unittests) check_version: @if [ "$(READ_VERSION)" != "$(CODE_VERSION)" ]; then \ echo "Error: read-version version '$(READ_VERSION)'" \ "not equal to code version '$(CODE_VERSION)'"; exit 2; \ else true; fi config/cloud.cfg: $(PYVER) ./tools/render-cloudcfg config/cloud.cfg.tmpl config/cloud.cfg clean_pyc: @find . -type f -name "*.pyc" -delete clean: clean_pyc rm -rf /var/log/cloud-init.log /var/lib/cloud/ yaml: @$(PYVER) $(CWD)/tools/validate-yaml.py $(YAML_FILES) rpm: $(PYVER) ./packages/brpm --distro=$(distro) srpm: $(PYVER) ./packages/brpm --srpm --distro=$(distro) deb: @which debuild || \ { echo "Missing devscripts dependency. Install with:"; \ echo sudo apt-get install devscripts; exit 1; } $(PYVER) ./packages/bddeb deb-src: @which debuild || \ { echo "Missing devscripts dependency. 
Install with:"; \ echo sudo apt-get install devscripts; exit 1; } $(PYVER) ./packages/bddeb -S -d .PHONY: test pyflakes pyflakes3 clean pep8 rpm srpm deb deb-src yaml .PHONY: check_version pip-test-requirements pip-requirements clean_pyc .PHONY: unittest unittest3 style-check cloud-init-18.2-14-g6d48d265/TODO.rst000066400000000000000000000043051326573344200165020ustar00rootroot00000000000000============================================== Things that cloud-init may do (better) someday ============================================== - Consider making ``failsafe`` ``DataSource`` - sets the user password, writing it to console - Consider a ``previous`` ``DataSource``, if no other data source is found, fall back to the ``previous`` one that worked. - Rewrite ``cloud-init-query`` (currently not implemented) - Possibly have a ``DataSource`` expose explicit fields: - instance-id - hostname - mirror - release - ssh public keys - Remove the conversion of the ubuntu network interface format conversion to a RH/fedora format and replace it with a top level format that uses the netcf libraries format instead (which itself knows how to translate into the specific formats). See for example `netcf`_ which seems to be an active project that has this capability. - Replace the ``apt*`` modules with variants that now use the distro classes to perform distro independent packaging commands (wherever possible). - Replace some the LOG.debug calls with a LOG.info where appropriate instead of how right now there is really only 2 levels (``WARN`` and ``DEBUG``) - Remove the ``cc_`` prefix for config modules, either have them fully specified (ie ``cloudinit.config.resizefs``) or by default only look in the ``cloudinit.config`` namespace for these modules (or have a combination of the above), this avoids having to understand where your modules are coming from (which can be altered by the current python inclusion path) - Instead of just warning when a module is being ran on a ``unknown`` distribution perhaps we should not run that module in that case? Or we might want to start reworking those modules so they will run on all distributions? Or if that is not the case, then maybe we want to allow fully specified python paths for modules and start encouraging packages of ``ubuntu`` modules, packages of ``rhel`` specific modules that people can add instead of having them all under the cloud-init ``root`` tree? This might encourage more development of other modules instead of having to go edit the cloud-init code to accomplish this. .. _netcf: https://fedorahosted.org/netcf/ cloud-init-18.2-14-g6d48d265/bash_completion/000077500000000000000000000000001326573344200203475ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/bash_completion/cloud-init000066400000000000000000000052741326573344200223510ustar00rootroot00000000000000# Copyright (C) 2018 Canonical Ltd. # # This file is part of cloud-init. See LICENSE file for license information. 
# bash completion for cloud-init cli _cloudinit_complete() { local cur_word prev_word cur_word="${COMP_WORDS[COMP_CWORD]}" prev_word="${COMP_WORDS[COMP_CWORD-1]}" subcmds="analyze clean collect-logs devel dhclient-hook features init modules single status" base_params="--help --file --version --debug --force" case ${COMP_CWORD} in 1) COMPREPLY=($(compgen -W "$base_params $subcmds" -- $cur_word)) ;; 2) case ${prev_word} in analyze) COMPREPLY=($(compgen -W "--help blame dump show" -- $cur_word)) ;; clean) COMPREPLY=($(compgen -W "--help --logs --reboot --seed" -- $cur_word)) ;; collect-logs) COMPREPLY=($(compgen -W "--help --tarfile --include-userdata" -- $cur_word)) ;; devel) COMPREPLY=($(compgen -W "--help schema" -- $cur_word)) ;; dhclient-hook|features) COMPREPLY=($(compgen -W "--help" -- $cur_word)) ;; init) COMPREPLY=($(compgen -W "--help --local" -- $cur_word)) ;; modules) COMPREPLY=($(compgen -W "--help --mode" -- $cur_word)) ;; single) COMPREPLY=($(compgen -W "--help --name --frequency --report" -- $cur_word)) ;; status) COMPREPLY=($(compgen -W "--help --long --wait" -- $cur_word)) ;; esac ;; 3) case ${prev_word} in blame|dump) COMPREPLY=($(compgen -W "--help --infile --outfile" -- $cur_word)) ;; --mode) COMPREPLY=($(compgen -W "--help init config final" -- $cur_word)) ;; --frequency) COMPREPLY=($(compgen -W "--help instance always once" -- $cur_word)) ;; schema) COMPREPLY=($(compgen -W "--help --config-file --doc --annotate" -- $cur_word)) ;; show) COMPREPLY=($(compgen -W "--help --format --infile --outfile" -- $cur_word)) ;; esac ;; *) COMPREPLY=() ;; esac } complete -F _cloudinit_complete cloud-init # vi: syntax=bash expandtab cloud-init-18.2-14-g6d48d265/cloudinit/000077500000000000000000000000001326573344200171735ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/__init__.py000066400000000000000000000000001326573344200212720ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/analyze/000077500000000000000000000000001326573344200206365ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/analyze/__init__.py000066400000000000000000000000001326573344200227350ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/analyze/__main__.py000066400000000000000000000130121326573344200227250ustar00rootroot00000000000000# Copyright (C) 2017 Canonical Ltd. # # This file is part of cloud-init. See LICENSE file for license information. import argparse import re import sys from cloudinit.util import json_dumps from . import dump from . import show def get_parser(parser=None): if not parser: parser = argparse.ArgumentParser( prog='cloudinit-analyze', description='Devel tool: Analyze cloud-init logs and data') subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand') subparsers.required = True parser_blame = subparsers.add_parser( 'blame', help='Print list of executed stages ordered by time to init') parser_blame.add_argument( '-i', '--infile', action='store', dest='infile', default='/var/log/cloud-init.log', help='specify where to read input.') parser_blame.add_argument( '-o', '--outfile', action='store', dest='outfile', default='-', help='specify where to write output. 
') parser_blame.set_defaults(action=('blame', analyze_blame)) parser_show = subparsers.add_parser( 'show', help='Print list of in-order events during execution') parser_show.add_argument('-f', '--format', action='store', dest='print_format', default='%I%D @%Es +%ds', help='specify formatting of output.') parser_show.add_argument('-i', '--infile', action='store', dest='infile', default='/var/log/cloud-init.log', help='specify where to read input.') parser_show.add_argument('-o', '--outfile', action='store', dest='outfile', default='-', help='specify where to write output.') parser_show.set_defaults(action=('show', analyze_show)) parser_dump = subparsers.add_parser( 'dump', help='Dump cloud-init events in JSON format') parser_dump.add_argument('-i', '--infile', action='store', dest='infile', default='/var/log/cloud-init.log', help='specify where to read input. ') parser_dump.add_argument('-o', '--outfile', action='store', dest='outfile', default='-', help='specify where to write output. ') parser_dump.set_defaults(action=('dump', analyze_dump)) return parser def analyze_blame(name, args): """Report a list of records sorted by largest time delta. For example: 30.210s (init-local) searching for datasource 8.706s (init-network) reading and applying user-data 166ms (modules-config) .... 807us (modules-final) ... We generate event records by parsing cloud-init logs, formatting the output, and sorting by record data ('delta'). """ (infh, outfh) = configure_io(args) blame_format = ' %ds (%n)' r = re.compile(r'(^\s+\d+\.\d+)', re.MULTILINE) for idx, record in enumerate(show.show_events(_get_events(infh), blame_format)): srecs = sorted(filter(r.match, record), reverse=True) outfh.write('-- Boot Record %02d --\n' % (idx + 1)) outfh.write('\n'.join(srecs) + '\n') outfh.write('\n') outfh.write('%d boot records analyzed\n' % (idx + 1)) def analyze_show(name, args): """Generate output records using the 'standard' format for printing events. Example output follows: Starting stage: (init-local) ... Finished stage: (init-local) 0.105195 seconds Starting stage: (init-network) ... Finished stage: (init-network) 0.339024 seconds Starting stage: (modules-config) ... Finished stage: (modules-config) 0.NNN seconds Starting stage: (modules-final) ... 
Finished stage: (modules-final) 0.NNN seconds """ (infh, outfh) = configure_io(args) for idx, record in enumerate(show.show_events(_get_events(infh), args.print_format)): outfh.write('-- Boot Record %02d --\n' % (idx + 1)) outfh.write('The total time elapsed since completing an event is' ' printed after the "@" character.\n') outfh.write('The time the event takes is printed after the "+" ' 'character.\n\n') outfh.write('\n'.join(record) + '\n') outfh.write('%d boot records analyzed\n' % (idx + 1)) def analyze_dump(name, args): """Dump cloud-init events in JSON format""" (infh, outfh) = configure_io(args) outfh.write(json_dumps(_get_events(infh)) + '\n') def _get_events(infile): rawdata = None events, rawdata = show.load_events(infile, None) if not events: events, _ = dump.dump_events(rawdata=rawdata) return events def configure_io(args): """Common parsing and setup of input/output files""" if args.infile == '-': infh = sys.stdin else: try: infh = open(args.infile, 'r') except OSError: sys.stderr.write('Cannot open file %s\n' % args.infile) sys.exit(1) if args.outfile == '-': outfh = sys.stdout else: try: outfh = open(args.outfile, 'w') except OSError: sys.stderr.write('Cannot open file %s\n' % args.outfile) sys.exit(1) return (infh, outfh) if __name__ == '__main__': parser = get_parser() args = parser.parse_args() (name, action_functor) = args.action action_functor(name, args) # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/analyze/dump.py000066400000000000000000000126621326573344200221630ustar00rootroot00000000000000# This file is part of cloud-init. See LICENSE file for license information. import calendar from datetime import datetime import sys from cloudinit import util stage_to_description = { 'finished': 'finished running cloud-init', 'init-local': 'starting search for local datasources', 'init-network': 'searching for network datasources', 'init': 'searching for network datasources', 'modules-config': 'running config modules', 'modules-final': 'finalizing modules', 'modules': 'running modules for', 'single': 'running single module ', } # logger's asctime format CLOUD_INIT_ASCTIME_FMT = "%Y-%m-%d %H:%M:%S,%f" # journalctl -o short-precise CLOUD_INIT_JOURNALCTL_FMT = "%b %d %H:%M:%S.%f %Y" # other DEFAULT_FMT = "%b %d %H:%M:%S %Y" def parse_timestamp(timestampstr): # default syslog time does not include the current year months = [calendar.month_abbr[m] for m in range(1, 13)] if timestampstr.split()[0] in months: # Aug 29 22:55:26 FMT = DEFAULT_FMT if '.' in timestampstr: FMT = CLOUD_INIT_JOURNALCTL_FMT dt = datetime.strptime(timestampstr + " " + str(datetime.now().year), FMT) timestamp = dt.strftime("%s.%f") elif "," in timestampstr: # 2016-09-12 14:39:20,839 dt = datetime.strptime(timestampstr, CLOUD_INIT_ASCTIME_FMT) timestamp = dt.strftime("%s.%f") else: # allow date(1) to handle other formats we don't expect timestamp = parse_timestamp_from_date(timestampstr) return float(timestamp) def parse_timestamp_from_date(timestampstr): out, _ = util.subp(['date', '+%s.%3N', '-d', timestampstr]) timestamp = out.strip() return float(timestamp) def parse_ci_logline(line): # Stage Starts: # Cloud-init v. 0.7.7 running 'init-local' at \ # Fri, 02 Sep 2016 19:28:07 +0000. Up 1.0 seconds. # Cloud-init v. 0.7.7 running 'init' at \ # Fri, 02 Sep 2016 19:28:08 +0000. Up 2.0 seconds. # Cloud-init v. 
0.7.7 finished at # Aug 29 22:55:26 test1 [CLOUDINIT] handlers.py[DEBUG]: \ # finish: modules-final: SUCCESS: running modules for final # 2016-08-30T21:53:25.972325+00:00 y1 [CLOUDINIT] handlers.py[DEBUG]: \ # finish: modules-final: SUCCESS: running modules for final # # Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]: \ # Cloud-init v. 0.7.8 running 'init-local' at \ # Thu, 03 Nov 2016 06:51:06 +0000. Up 1.0 seconds. # # 2017-05-22 18:02:01,088 - util.py[DEBUG]: Cloud-init v. 0.7.9 running \ # 'init-local' at Mon, 22 May 2017 18:02:01 +0000. Up 2.0 seconds. separators = [' - ', ' [CLOUDINIT] '] found = False for sep in separators: if sep in line: found = True break if not found: return None (timehost, eventstr) = line.split(sep) # journalctl -o short-precise if timehost.endswith(":"): timehost = " ".join(timehost.split()[0:-1]) if "," in timehost: timestampstr, extra = timehost.split(",") timestampstr += ",%s" % extra.split()[0] if ' ' in extra: hostname = extra.split()[-1] else: hostname = timehost.split()[-1] timestampstr = timehost.split(hostname)[0].strip() if 'Cloud-init v.' in eventstr: event_type = 'start' if 'running' in eventstr: stage_and_timestamp = eventstr.split('running')[1].lstrip() event_name, _ = stage_and_timestamp.split(' at ') event_name = event_name.replace("'", "").replace(":", "-") if event_name == "init": event_name = "init-network" else: # don't generate a start for the 'finished at' banner return None event_description = stage_to_description[event_name] else: (pymodloglvl, event_type, event_name) = eventstr.split()[0:3] event_description = eventstr.split(event_name)[1].strip() event = { 'name': event_name.rstrip(":"), 'description': event_description, 'timestamp': parse_timestamp(timestampstr), 'origin': 'cloudinit', 'event_type': event_type.rstrip(":"), } if event['event_type'] == "finish": result = event_description.split(":")[0] desc = event_description.split(result)[1].lstrip(':').strip() event['result'] = result event['description'] = desc.strip() return event def dump_events(cisource=None, rawdata=None): events = [] event = None CI_EVENT_MATCHES = ['start:', 'finish:', 'Cloud-init v.'] if not any([cisource, rawdata]): raise ValueError('Either cisource or rawdata parameters are required') if rawdata: data = rawdata.splitlines() else: data = cisource.readlines() for line in data: for match in CI_EVENT_MATCHES: if match in line: try: event = parse_ci_logline(line) except ValueError: sys.stderr.write('Skipping invalid entry\n') if event: events.append(event) return events, data def main(): if len(sys.argv) > 1: cisource = open(sys.argv[1]) else: cisource = sys.stdin return util.json_dumps(dump_events(cisource)) if __name__ == "__main__": print(main()) cloud-init-18.2-14-g6d48d265/cloudinit/analyze/show.py000066400000000000000000000127741326573344200222030ustar00rootroot00000000000000# Copyright (C) 2016 Canonical Ltd. # # Author: Ryan Harper # # This file is part of cloud-init. See LICENSE file for license information. 
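# A quick sketch of how events flow into this module (values are taken from
# the sample line also used in the unit tests further below): given a
# journalctl-style line such as
#   Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]: \
#   Cloud-init v. 0.7.8 running 'init-local' at Thu, 03 Nov 2016 06:51:06 \
#   +0000. Up 1.0 seconds.
# dump.parse_ci_logline() (above) produces an event dict of the form:
#   {'name': 'init-local', 'event_type': 'start', 'origin': 'cloudinit',
#    'description': 'starting search for local datasources',
#    'timestamp': <epoch seconds as a float>}
# format_record() below then renders such events using the %-placeholders
# defined in format_key (e.g. '%n' -> name, '%d' -> delta, '%I' -> indent).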
import base64 import datetime import json import os from cloudinit import util # An event: ''' { "description": "executing late commands", "event_type": "start", "level": "INFO", "name": "cmd-install/stage-late", "origin": "cloudinit", "timestamp": 1461164249.1590767, }, { "description": "executing late commands", "event_type": "finish", "level": "INFO", "name": "cmd-install/stage-late", "origin": "cloudinit", "result": "SUCCESS", "timestamp": 1461164249.1590767 } ''' format_key = { '%d': 'delta', '%D': 'description', '%E': 'elapsed', '%e': 'event_type', '%I': 'indent', '%l': 'level', '%n': 'name', '%o': 'origin', '%r': 'result', '%t': 'timestamp', '%T': 'total_time', } formatting_help = " ".join(["{0}: {1}".format(k.replace('%', '%%'), v) for k, v in format_key.items()]) def format_record(msg, event): for i, j in format_key.items(): if i in msg: # ensure consistent formatting of time values if j in ['delta', 'elapsed', 'timestamp']: msg = msg.replace(i, "{%s:08.5f}" % j) else: msg = msg.replace(i, "{%s}" % j) return msg.format(**event) def dump_event_files(event): content = dict((k, v) for k, v in event.items() if k not in ['content']) files = content['files'] saved = [] for f in files: fname = f['path'] fn_local = os.path.basename(fname) fcontent = base64.b64decode(f['content']).decode('ascii') util.write_file(fn_local, fcontent) saved.append(fn_local) return saved def event_name(event): if event: return event.get('name') return None def event_type(event): if event: return event.get('event_type') return None def event_parent(event): if event: return event_name(event).split("/")[0] return None def event_timestamp(event): return float(event.get('timestamp')) def event_datetime(event): return datetime.datetime.utcfromtimestamp(event_timestamp(event)) def delta_seconds(t1, t2): return (t2 - t1).total_seconds() def event_duration(start, finish): return delta_seconds(event_datetime(start), event_datetime(finish)) def event_record(start_time, start, finish): record = finish.copy() record.update({ 'delta': event_duration(start, finish), 'elapsed': delta_seconds(start_time, event_datetime(start)), 'indent': '|' + ' ' * (event_name(start).count('/') - 1) + '`->', }) return record def total_time_record(total_time): return 'Total Time: %3.5f seconds\n' % total_time def generate_records(events, blame_sort=False, print_format="(%n) %d seconds in %I%D", dump_files=False, log_datafiles=False): sorted_events = sorted(events, key=lambda x: x['timestamp']) records = [] start_time = None total_time = 0.0 stage_start_time = {} stages_seen = [] boot_records = [] unprocessed = [] for e in range(0, len(sorted_events)): event = sorted_events[e] try: next_evt = sorted_events[e + 1] except IndexError: next_evt = None if event_type(event) == 'start': if event.get('name') in stages_seen: records.append(total_time_record(total_time)) boot_records.append(records) records = [] start_time = None total_time = 0.0 if start_time is None: stages_seen = [] start_time = event_datetime(event) stage_start_time[event_parent(event)] = start_time # see if we have a pair if event_name(event) == event_name(next_evt): if event_type(next_evt) == 'finish': records.append(format_record(print_format, event_record(start_time, event, next_evt))) else: # This is a parent event records.append("Starting stage: %s" % event.get('name')) unprocessed.append(event) stages_seen.append(event.get('name')) continue else: prev_evt = unprocessed.pop() if event_name(event) == event_name(prev_evt): record = event_record(start_time, prev_evt, event) 
records.append(format_record("Finished stage: " "(%n) %d seconds ", record) + "\n") total_time += record.get('delta') else: # not a match, put it back unprocessed.append(prev_evt) records.append(total_time_record(total_time)) boot_records.append(records) return boot_records def show_events(events, print_format): return generate_records(events, print_format=print_format) def load_events(infile, rawdata=None): if rawdata: data = rawdata.read() else: data = infile.read() j = None try: j = json.loads(data) except ValueError: pass return j, data cloud-init-18.2-14-g6d48d265/cloudinit/analyze/tests/000077500000000000000000000000001326573344200220005ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/analyze/tests/test_dump.py000066400000000000000000000205131326573344200243570ustar00rootroot00000000000000# This file is part of cloud-init. See LICENSE file for license information. from datetime import datetime from textwrap import dedent from cloudinit.analyze.dump import ( dump_events, parse_ci_logline, parse_timestamp) from cloudinit.util import subp, write_file from cloudinit.tests.helpers import CiTestCase class TestParseTimestamp(CiTestCase): def test_parse_timestamp_handles_cloud_init_default_format(self): """Logs with cloud-init detailed formats will be properly parsed.""" trusty_fmt = '%Y-%m-%d %H:%M:%S,%f' trusty_stamp = '2016-09-12 14:39:20,839' parsed = parse_timestamp(trusty_stamp) # convert ourselves dt = datetime.strptime(trusty_stamp, trusty_fmt) expected = float(dt.strftime('%s.%f')) # use date(1) out, _err = subp(['date', '+%s.%3N', '-d', trusty_stamp]) timestamp = out.strip() date_ts = float(timestamp) self.assertEqual(expected, parsed) self.assertEqual(expected, date_ts) self.assertEqual(date_ts, parsed) def test_parse_timestamp_handles_syslog_adding_year(self): """Syslog timestamps lack a year. Add year and properly parse.""" syslog_fmt = '%b %d %H:%M:%S %Y' syslog_stamp = 'Aug 08 15:12:51' # convert stamp ourselves by adding the missing year value year = datetime.now().year dt = datetime.strptime(syslog_stamp + " " + str(year), syslog_fmt) expected = float(dt.strftime('%s.%f')) parsed = parse_timestamp(syslog_stamp) # use date(1) out, _ = subp(['date', '+%s.%3N', '-d', syslog_stamp]) timestamp = out.strip() date_ts = float(timestamp) self.assertEqual(expected, parsed) self.assertEqual(expected, date_ts) self.assertEqual(date_ts, parsed) def test_parse_timestamp_handles_journalctl_format_adding_year(self): """Journalctl precise timestamps lack a year. 
Add year and parse.""" journal_fmt = '%b %d %H:%M:%S.%f %Y' journal_stamp = 'Aug 08 17:15:50.606811' # convert stamp ourselves by adding the missing year value year = datetime.now().year dt = datetime.strptime(journal_stamp + " " + str(year), journal_fmt) expected = float(dt.strftime('%s.%f')) parsed = parse_timestamp(journal_stamp) # use date(1) out, _ = subp(['date', '+%s.%6N', '-d', journal_stamp]) timestamp = out.strip() date_ts = float(timestamp) self.assertEqual(expected, parsed) self.assertEqual(expected, date_ts) self.assertEqual(date_ts, parsed) def test_parse_unexpected_timestamp_format_with_date_command(self): """Dump sends unexpected timestamp formats to date(1) for processing.""" new_fmt = '%H:%M %m/%d %Y' new_stamp = '17:15 08/08' # convert stamp ourselves by adding the missing year value year = datetime.now().year dt = datetime.strptime(new_stamp + " " + str(year), new_fmt) expected = float(dt.strftime('%s.%f')) parsed = parse_timestamp(new_stamp) # use date(1) out, _ = subp(['date', '+%s.%6N', '-d', new_stamp]) timestamp = out.strip() date_ts = float(timestamp) self.assertEqual(expected, parsed) self.assertEqual(expected, date_ts) self.assertEqual(date_ts, parsed) class TestParseCILogLine(CiTestCase): def test_parse_logline_returns_none_without_separators(self): """When no separators are found, parse_ci_logline returns None.""" expected_parse_ignores = [ '', '-', 'adsf-asdf', '2017-05-22 18:02:01,088', 'CLOUDINIT'] for parse_ignores in expected_parse_ignores: self.assertIsNone(parse_ci_logline(parse_ignores)) def test_parse_logline_returns_event_for_cloud_init_logs(self): """parse_ci_logline returns an event parse from cloud-init format.""" line = ( "2017-08-08 20:05:07,147 - util.py[DEBUG]: Cloud-init v. 0.7.9" " running 'init-local' at Tue, 08 Aug 2017 20:05:07 +0000. Up" " 6.26 seconds.") dt = datetime.strptime( '2017-08-08 20:05:07,147', '%Y-%m-%d %H:%M:%S,%f') timestamp = float(dt.strftime('%s.%f')) expected = { 'description': 'starting search for local datasources', 'event_type': 'start', 'name': 'init-local', 'origin': 'cloudinit', 'timestamp': timestamp} self.assertEqual(expected, parse_ci_logline(line)) def test_parse_logline_returns_event_for_journalctl_logs(self): """parse_ci_logline returns an event parse from journalctl format.""" line = ("Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT]" " util.py[DEBUG]: Cloud-init v. 0.7.8 running 'init-local' at" " Thu, 03 Nov 2016 06:51:06 +0000. Up 1.0 seconds.") year = datetime.now().year dt = datetime.strptime( 'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y') timestamp = float(dt.strftime('%s.%f')) expected = { 'description': 'starting search for local datasources', 'event_type': 'start', 'name': 'init-local', 'origin': 'cloudinit', 'timestamp': timestamp} self.assertEqual(expected, parse_ci_logline(line)) def test_parse_logline_returns_event_for_finish_events(self): """parse_ci_logline returns a finish event for a parsed log line.""" line = ('2016-08-30 21:53:25.972325+00:00 y1 [CLOUDINIT]' ' handlers.py[DEBUG]: finish: modules-final: SUCCESS: running' ' modules for final') expected = { 'description': 'running modules for final', 'event_type': 'finish', 'name': 'modules-final', 'origin': 'cloudinit', 'result': 'SUCCESS', 'timestamp': 1472594005.972} self.assertEqual(expected, parse_ci_logline(line)) SAMPLE_LOGS = dedent("""\ Nov 03 06:51:06.074410 x2 cloud-init[106]: [CLOUDINIT] util.py[DEBUG]:\ Cloud-init v. 0.7.8 running 'init-local' at Thu, 03 Nov 2016\ 06:51:06 +0000. Up 1.0 seconds. 
2016-08-30 21:53:25.972325+00:00 y1 [CLOUDINIT] handlers.py[DEBUG]: finish:\ modules-final: SUCCESS: running modules for final """) class TestDumpEvents(CiTestCase): maxDiff = None def test_dump_events_with_rawdata(self): """Rawdata is split and parsed into a tuple of events and data""" events, data = dump_events(rawdata=SAMPLE_LOGS) expected_data = SAMPLE_LOGS.splitlines() year = datetime.now().year dt1 = datetime.strptime( 'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y') timestamp1 = float(dt1.strftime('%s.%f')) expected_events = [{ 'description': 'starting search for local datasources', 'event_type': 'start', 'name': 'init-local', 'origin': 'cloudinit', 'timestamp': timestamp1}, { 'description': 'running modules for final', 'event_type': 'finish', 'name': 'modules-final', 'origin': 'cloudinit', 'result': 'SUCCESS', 'timestamp': 1472594005.972}] self.assertEqual(expected_events, events) self.assertEqual(expected_data, data) def test_dump_events_with_cisource(self): """Cisource file is read and parsed into a tuple of events and data.""" tmpfile = self.tmp_path('logfile') write_file(tmpfile, SAMPLE_LOGS) events, data = dump_events(cisource=open(tmpfile)) year = datetime.now().year dt1 = datetime.strptime( 'Nov 03 06:51:06.074410 %d' % year, '%b %d %H:%M:%S.%f %Y') timestamp1 = float(dt1.strftime('%s.%f')) expected_events = [{ 'description': 'starting search for local datasources', 'event_type': 'start', 'name': 'init-local', 'origin': 'cloudinit', 'timestamp': timestamp1}, { 'description': 'running modules for final', 'event_type': 'finish', 'name': 'modules-final', 'origin': 'cloudinit', 'result': 'SUCCESS', 'timestamp': 1472594005.972}] self.assertEqual(expected_events, events) self.assertEqual(SAMPLE_LOGS.splitlines(), [d.strip() for d in data]) cloud-init-18.2-14-g6d48d265/cloudinit/apport.py000066400000000000000000000076731326573344200210670ustar00rootroot00000000000000# Copyright (C) 2017 Canonical Ltd. # # This file is part of cloud-init. See LICENSE file for license information. '''Cloud-init apport interface''' try: from apport.hookutils import ( attach_file, attach_root_command_outputs, root_command_output) has_apport = True except ImportError: has_apport = False KNOWN_CLOUD_NAMES = [ 'AliYun', 'AltCloud', 'Amazon - Ec2', 'Azure', 'Bigstep', 'Brightbox', 'CloudSigma', 'CloudStack', 'DigitalOcean', 'GCE - Google Compute Engine', 'Hetzner Cloud', 'IBM - (aka SoftLayer or BlueMix)', 'LXD', 'MAAS', 'NoCloud', 'OpenNebula', 'OpenStack', 'OVF', 'OpenTelekomCloud', 'Scaleway', 'SmartOS', 'VMware', 'Other'] # Potentially clear text collected logs CLOUDINIT_LOG = '/var/log/cloud-init.log' CLOUDINIT_OUTPUT_LOG = '/var/log/cloud-init-output.log' USER_DATA_FILE = '/var/lib/cloud/instance/user-data.txt' # Optional def attach_cloud_init_logs(report, ui=None): '''Attach cloud-init logs and tarfile from 'cloud-init collect-logs'.''' attach_root_command_outputs(report, { 'cloud-init-log-warnings': 'egrep -i "warn|error" /var/log/cloud-init.log', 'cloud-init-output.log.txt': 'cat /var/log/cloud-init-output.log'}) root_command_output( ['cloud-init', 'collect-logs', '-t', '/tmp/cloud-init-logs.tgz']) attach_file(report, '/tmp/cloud-init-logs.tgz', 'logs.tgz') def attach_hwinfo(report, ui=None): '''Optionally attach hardware info from lshw.''' prompt = ( 'Your device details (lshw) may be useful to developers when' ' addressing this bug, but gathering it requires admin privileges.' 
' Would you like to include this info?') if ui and ui.yesno(prompt): attach_root_command_outputs(report, {'lshw.txt': 'lshw'}) def attach_cloud_info(report, ui=None): '''Prompt for cloud details if available.''' if ui: prompt = 'Is this machine running in a cloud environment?' response = ui.yesno(prompt) if response is None: raise StopIteration # User cancelled if response: prompt = ('Please select the cloud vendor or environment in which' ' this instance is running') response = ui.choice(prompt, KNOWN_CLOUD_NAMES) if response: report['CloudName'] = KNOWN_CLOUD_NAMES[response[0]] else: report['CloudName'] = 'None' def attach_user_data(report, ui=None): '''Optionally provide user-data if desired.''' if ui: prompt = ( 'Your user-data or cloud-config file can optionally be provided' ' from {0} and could be useful to developers when addressing this' ' bug. Do you wish to attach user-data to this bug?'.format( USER_DATA_FILE)) response = ui.yesno(prompt) if response is None: raise StopIteration # User cancelled if response: attach_file(report, USER_DATA_FILE, 'user_data.txt') def add_bug_tags(report): '''Add any appropriate tags to the bug.''' if 'JournalErrors' in report.keys(): errors = report['JournalErrors'] if 'Breaking ordering cycle' in errors: report['Tags'] = 'systemd-ordering' def add_info(report, ui): '''This is an entry point to run cloud-init's apport functionality. Distros which want apport support will have a cloud-init package-hook at /usr/share/apport/package-hooks/cloud-init.py which defines an add_info function and returns the result of cloudinit.apport.add_info(report, ui). ''' if not has_apport: raise RuntimeError( 'No apport imports discovered. Apport functionality disabled') attach_cloud_init_logs(report, ui) attach_hwinfo(report, ui) attach_cloud_info(report, ui) attach_user_data(report, ui) add_bug_tags(report) return True # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/atomic_helper.py000066400000000000000000000021401326573344200223570ustar00rootroot00000000000000# This file is part of cloud-init. See LICENSE file for license information. import json import os import stat import tempfile _DEF_PERMS = 0o644 def write_file(filename, content, mode=_DEF_PERMS, omode="wb", copy_mode=False): # open filename in mode 'omode', write content, set permissions to 'mode' if copy_mode: try: file_stat = os.stat(filename) mode = stat.S_IMODE(file_stat.st_mode) except OSError: pass tf = None try: tf = tempfile.NamedTemporaryFile(dir=os.path.dirname(filename), delete=False, mode=omode) tf.write(content) tf.close() os.chmod(tf.name, mode) os.rename(tf.name, filename) except Exception as e: if tf is not None: os.unlink(tf.name) raise e def write_json(filename, data, mode=_DEF_PERMS): # dump json representation of data to file filename. return write_file( filename, json.dumps(data, indent=1, sort_keys=True) + "\n", omode="w", mode=mode) # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/cloud.py000066400000000000000000000062221326573344200206550ustar00rootroot00000000000000# Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # This file is part of cloud-init. See LICENSE file for license information. import copy import os from cloudinit import log as logging from cloudinit.reporting import events LOG = logging.getLogger(__name__) # This class is the high level wrapper that provides # access to cloud-init objects without exposing the stage objects # to handler and/or module manipulation. 
It allows for cloud # init to restrict what those types of user facing code may see # and/or adjust (which helps avoid code messing with each other) # # It also provides util functions that avoid having to know # how to get a certain member from its submembers as well # as providing a backwards compatible object that can be maintained # while the stages/other objects can be worked on independently... class Cloud(object): def __init__(self, datasource, paths, cfg, distro, runners, reporter=None): self.datasource = datasource self.paths = paths self.distro = distro self._cfg = cfg self._runners = runners if reporter is None: reporter = events.ReportEventStack( name="unnamed-cloud-reporter", description="unnamed-cloud-reporter", reporting_enabled=False) self.reporter = reporter # If a 'user' manipulates logging or logging services # it is typically useful to cause the logging to be # set up again. def cycle_logging(self): logging.resetLogging() logging.setupLogging(self.cfg) @property def cfg(self): # Ensure that not indirectly modified return copy.deepcopy(self._cfg) def run(self, name, functor, args, freq=None, clear_on_fail=False): return self._runners.run(name, functor, args, freq, clear_on_fail) def get_template_filename(self, name): fn = self.paths.template_tpl % (name) if not os.path.isfile(fn): LOG.warning("No template found in %s for template named %s", os.path.dirname(fn), name) return None return fn # The rest of these are just useful proxies def get_userdata(self, apply_filter=True): return self.datasource.get_userdata(apply_filter) def get_instance_id(self): return self.datasource.get_instance_id() @property def launch_index(self): return self.datasource.launch_index def get_public_ssh_keys(self): return self.datasource.get_public_ssh_keys() def get_locale(self): return self.datasource.get_locale() def get_hostname(self, fqdn=False, metadata_only=False): return self.datasource.get_hostname( fqdn=fqdn, metadata_only=metadata_only) def device_name_to_device(self, name): return self.datasource.device_name_to_device(name) def get_ipath_cur(self, name=None): return self.paths.get_ipath_cur(name) def get_cpath(self, name=None): return self.paths.get_cpath(name) def get_ipath(self, name=None): return self.paths.get_ipath(name) # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/cmd/000077500000000000000000000000001326573344200177365ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/cmd/__init__.py000066400000000000000000000000001326573344200220350ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/cmd/clean.py000066400000000000000000000063751326573344200214040ustar00rootroot00000000000000# Copyright (C) 2017 Canonical Ltd. # # This file is part of cloud-init. See LICENSE file for license information. """Define 'clean' utility and handler as part of cloud-init commandline.""" import argparse import os import sys from cloudinit.stages import Init from cloudinit.util import ( ProcessExecutionError, chdir, del_dir, del_file, get_config_logfiles, is_link, subp) def error(msg): sys.stderr.write("ERROR: " + msg + "\n") def get_parser(parser=None): """Build or extend an arg parser for clean utility. @param parser: Optional existing ArgumentParser instance representing the clean subcommand which will be extended to support the args of this utility. @returns: ArgumentParser with proper argument configuration. 
""" if not parser: parser = argparse.ArgumentParser( prog='clean', description=('Remove logs and artifacts so cloud-init re-runs on ' 'a clean system')) parser.add_argument( '-l', '--logs', action='store_true', default=False, dest='remove_logs', help='Remove cloud-init logs.') parser.add_argument( '-r', '--reboot', action='store_true', default=False, help='Reboot system after logs are cleaned so cloud-init re-runs.') parser.add_argument( '-s', '--seed', action='store_true', default=False, dest='remove_seed', help='Remove cloud-init seed directory /var/lib/cloud/seed.') return parser def remove_artifacts(remove_logs, remove_seed=False): """Helper which removes artifacts dir and optionally log files. @param: remove_logs: Boolean. Set True to delete the cloud_dir path. False preserves them. @param: remove_seed: Boolean. Set True to also delete seed subdir in paths.cloud_dir. @returns: 0 on success, 1 otherwise. """ init = Init(ds_deps=[]) init.read_cfg() if remove_logs: for log_file in get_config_logfiles(init.cfg): del_file(log_file) if not os.path.isdir(init.paths.cloud_dir): return 0 # Artifacts dir already cleaned with chdir(init.paths.cloud_dir): for path in os.listdir('.'): if path == 'seed' and not remove_seed: continue try: if os.path.isdir(path) and not is_link(path): del_dir(path) else: del_file(path) except OSError as e: error('Could not remove {0}: {1}'.format(path, str(e))) return 1 return 0 def handle_clean_args(name, args): """Handle calls to 'cloud-init clean' as a subcommand.""" exit_code = remove_artifacts(args.remove_logs, args.remove_seed) if exit_code == 0 and args.reboot: cmd = ['shutdown', '-r', 'now'] try: subp(cmd, capture=False) except ProcessExecutionError as e: error( 'Could not reboot this system using "{0}": {1}'.format( cmd, str(e))) exit_code = 1 return exit_code def main(): """Tool to collect and tar all cloud-init related logs.""" parser = get_parser() sys.exit(handle_clean_args('clean', parser.parse_args())) if __name__ == '__main__': main() # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/cmd/devel/000077500000000000000000000000001326573344200210355ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/cmd/devel/__init__.py000066400000000000000000000000001326573344200231340ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/cmd/devel/logs.py000066400000000000000000000067561326573344200223710ustar00rootroot00000000000000# Copyright (C) 2017 Canonical Ltd. # # This file is part of cloud-init. See LICENSE file for license information. """Define 'collect-logs' utility and handler to include in cloud-init cmd.""" import argparse from cloudinit.util import ( ProcessExecutionError, chdir, copy, ensure_dir, subp, write_file) from cloudinit.temp_utils import tempdir from datetime import datetime import os import shutil CLOUDINIT_LOGS = ['/var/log/cloud-init.log', '/var/log/cloud-init-output.log'] CLOUDINIT_RUN_DIR = '/run/cloud-init' USER_DATA_FILE = '/var/lib/cloud/instance/user-data.txt' # Optional def get_parser(parser=None): """Build or extend and arg parser for collect-logs utility. @param parser: Optional existing ArgumentParser instance representing the collect-logs subcommand which will be extended to support the args of this utility. @returns: ArgumentParser with proper argument configuration. 
""" if not parser: parser = argparse.ArgumentParser( prog='collect-logs', description='Collect and tar all cloud-init debug info') parser.add_argument( "--tarfile", '-t', default='cloud-init.tar.gz', help=('The tarfile to create containing all collected logs.' ' Default: cloud-init.tar.gz')) parser.add_argument( "--include-userdata", '-u', default=False, action='store_true', dest='userdata', help=( 'Optionally include user-data from {0} which could contain' ' sensitive information.'.format(USER_DATA_FILE))) return parser def _write_command_output_to_file(cmd, filename): """Helper which runs a command and writes output or error to filename.""" try: out, _ = subp(cmd) except ProcessExecutionError as e: write_file(filename, str(e)) else: write_file(filename, out) def collect_logs(tarfile, include_userdata): """Collect all cloud-init logs and tar them up into the provided tarfile. @param tarfile: The path of the tar-gzipped file to create. @param include_userdata: Boolean, true means include user-data. """ tarfile = os.path.abspath(tarfile) date = datetime.utcnow().date().strftime('%Y-%m-%d') log_dir = 'cloud-init-logs-{0}'.format(date) with tempdir(dir='/tmp') as tmp_dir: log_dir = os.path.join(tmp_dir, log_dir) _write_command_output_to_file( ['dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'], os.path.join(log_dir, 'version')) _write_command_output_to_file( ['dmesg'], os.path.join(log_dir, 'dmesg.txt')) _write_command_output_to_file( ['journalctl', '-o', 'short-precise'], os.path.join(log_dir, 'journal.txt')) for log in CLOUDINIT_LOGS: copy(log, log_dir) if include_userdata: copy(USER_DATA_FILE, log_dir) run_dir = os.path.join(log_dir, 'run') ensure_dir(run_dir) shutil.copytree(CLOUDINIT_RUN_DIR, os.path.join(run_dir, 'cloud-init')) with chdir(tmp_dir): subp(['tar', 'czvf', tarfile, log_dir.replace(tmp_dir + '/', '')]) def handle_collect_logs_args(name, args): """Handle calls to 'cloud-init collect-logs' as a subcommand.""" collect_logs(args.tarfile, args.userdata) def main(): """Tool to collect and tar all cloud-init related logs.""" parser = get_parser() handle_collect_logs_args('collect-logs', parser.parse_args()) return 0 if __name__ == '__main__': main() # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/cmd/devel/parser.py000066400000000000000000000015611326573344200227060ustar00rootroot00000000000000# Copyright (C) 2017 Canonical Ltd. # # This file is part of cloud-init. See LICENSE file for license information. 
"""Define 'devel' subcommand argument parsers to include in cloud-init cmd.""" import argparse from cloudinit.config.schema import ( get_parser as schema_parser, handle_schema_args) def get_parser(parser=None): if not parser: parser = argparse.ArgumentParser( prog='cloudinit-devel', description='Run development cloud-init tools') subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand') subparsers.required = True parser_schema = subparsers.add_parser( 'schema', help='Validate cloud-config files or document schema') # Construct schema subcommand parser schema_parser(parser_schema) parser_schema.set_defaults(action=('schema', handle_schema_args)) return parser cloud-init-18.2-14-g6d48d265/cloudinit/cmd/devel/tests/000077500000000000000000000000001326573344200221775ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/cmd/devel/tests/__init__.py000066400000000000000000000000001326573344200242760ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/cmd/devel/tests/test_logs.py000066400000000000000000000121321326573344200245530ustar00rootroot00000000000000# This file is part of cloud-init. See LICENSE file for license information. from cloudinit.cmd.devel import logs from cloudinit.util import ensure_dir, load_file, subp, write_file from cloudinit.tests.helpers import FilesystemMockingTestCase, wrap_and_call from datetime import datetime import os class TestCollectLogs(FilesystemMockingTestCase): def setUp(self): super(TestCollectLogs, self).setUp() self.new_root = self.tmp_dir() self.run_dir = self.tmp_path('run', self.new_root) def test_collect_logs_creates_tarfile(self): """collect-logs creates a tarfile with all related cloud-init info.""" log1 = self.tmp_path('cloud-init.log', self.new_root) write_file(log1, 'cloud-init-log') log2 = self.tmp_path('cloud-init-output.log', self.new_root) write_file(log2, 'cloud-init-output-log') ensure_dir(self.run_dir) write_file(self.tmp_path('results.json', self.run_dir), 'results') output_tarfile = self.tmp_path('logs.tgz') date = datetime.utcnow().date().strftime('%Y-%m-%d') date_logdir = 'cloud-init-logs-{0}'.format(date) expected_subp = { ('dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'): '0.7fake\n', ('dmesg',): 'dmesg-out\n', ('journalctl', '-o', 'short-precise'): 'journal-out\n', ('tar', 'czvf', output_tarfile, date_logdir): '' } def fake_subp(cmd): cmd_tuple = tuple(cmd) if cmd_tuple not in expected_subp: raise AssertionError( 'Unexpected command provided to subp: {0}'.format(cmd)) if cmd == ['tar', 'czvf', output_tarfile, date_logdir]: subp(cmd) # Pass through tar cmd so we can check output return expected_subp[cmd_tuple], '' wrap_and_call( 'cloudinit.cmd.devel.logs', {'subp': {'side_effect': fake_subp}, 'CLOUDINIT_LOGS': {'new': [log1, log2]}, 'CLOUDINIT_RUN_DIR': {'new': self.run_dir}}, logs.collect_logs, output_tarfile, include_userdata=False) # unpack the tarfile and check file contents subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root]) out_logdir = self.tmp_path(date_logdir, self.new_root) self.assertEqual( '0.7fake\n', load_file(os.path.join(out_logdir, 'version'))) self.assertEqual( 'cloud-init-log', load_file(os.path.join(out_logdir, 'cloud-init.log'))) self.assertEqual( 'cloud-init-output-log', load_file(os.path.join(out_logdir, 'cloud-init-output.log'))) self.assertEqual( 'dmesg-out\n', load_file(os.path.join(out_logdir, 'dmesg.txt'))) self.assertEqual( 'journal-out\n', load_file(os.path.join(out_logdir, 'journal.txt'))) self.assertEqual( 'results', load_file( 
os.path.join(out_logdir, 'run', 'cloud-init', 'results.json'))) def test_collect_logs_includes_optional_userdata(self): """collect-logs include userdata when --include-userdata is set.""" log1 = self.tmp_path('cloud-init.log', self.new_root) write_file(log1, 'cloud-init-log') log2 = self.tmp_path('cloud-init-output.log', self.new_root) write_file(log2, 'cloud-init-output-log') userdata = self.tmp_path('user-data.txt', self.new_root) write_file(userdata, 'user-data') ensure_dir(self.run_dir) write_file(self.tmp_path('results.json', self.run_dir), 'results') output_tarfile = self.tmp_path('logs.tgz') date = datetime.utcnow().date().strftime('%Y-%m-%d') date_logdir = 'cloud-init-logs-{0}'.format(date) expected_subp = { ('dpkg-query', '--show', "-f=${Version}\n", 'cloud-init'): '0.7fake', ('dmesg',): 'dmesg-out\n', ('journalctl', '-o', 'short-precise'): 'journal-out\n', ('tar', 'czvf', output_tarfile, date_logdir): '' } def fake_subp(cmd): cmd_tuple = tuple(cmd) if cmd_tuple not in expected_subp: raise AssertionError( 'Unexpected command provided to subp: {0}'.format(cmd)) if cmd == ['tar', 'czvf', output_tarfile, date_logdir]: subp(cmd) # Pass through tar cmd so we can check output return expected_subp[cmd_tuple], '' wrap_and_call( 'cloudinit.cmd.devel.logs', {'subp': {'side_effect': fake_subp}, 'CLOUDINIT_LOGS': {'new': [log1, log2]}, 'CLOUDINIT_RUN_DIR': {'new': self.run_dir}, 'USER_DATA_FILE': {'new': userdata}}, logs.collect_logs, output_tarfile, include_userdata=True) # unpack the tarfile and check file contents subp(['tar', 'zxvf', output_tarfile, '-C', self.new_root]) out_logdir = self.tmp_path(date_logdir, self.new_root) self.assertEqual( 'user-data', load_file(os.path.join(out_logdir, 'user-data.txt'))) cloud-init-18.2-14-g6d48d265/cloudinit/cmd/main.py000066400000000000000000000774441326573344200212540ustar00rootroot00000000000000#!/usr/bin/python # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # Copyright (C) 2017 Amazon.com, Inc. or its affiliates # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # Author: Andrew Jorgensen # # This file is part of cloud-init. See LICENSE file for license information. import argparse import json import os import sys import time import traceback from cloudinit import patcher patcher.patch() # noqa from cloudinit import log as logging from cloudinit import netinfo from cloudinit import signal_handler from cloudinit import sources from cloudinit import stages from cloudinit import url_helper from cloudinit import util from cloudinit import version from cloudinit import warnings from cloudinit import reporting from cloudinit.reporting import events from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE, CLOUD_CONFIG) from cloudinit import atomic_helper from cloudinit.config import cc_set_hostname from cloudinit.dhclient_hook import LogDhclient # Welcome message template WELCOME_MSG_TPL = ("Cloud-init v. {version} running '{action}' at " "{timestamp}. Up {uptime} seconds.") # Module section template MOD_SECTION_TPL = "cloud_%s_modules" # Frequency shortname to full name # (so users don't have to remember the full name...) FREQ_SHORT_NAMES = { 'instance': PER_INSTANCE, 'always': PER_ALWAYS, 'once': PER_ONCE, } LOG = logging.getLogger() # Used for when a logger may not be active # and we still want to print exceptions... 
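# print_exc below writes the optional message plus a full traceback to
# stderr, framed by dashed rules, so a failure stays visible on the
# console even when the logging subsystem never came up.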
def print_exc(msg=''): if msg: sys.stderr.write("%s\n" % (msg)) sys.stderr.write('-' * 60) sys.stderr.write("\n") traceback.print_exc(file=sys.stderr) sys.stderr.write('-' * 60) sys.stderr.write("\n") def welcome(action, msg=None): if not msg: msg = welcome_format(action) util.multi_log("%s\n" % (msg), console=False, stderr=True, log=LOG) return msg def welcome_format(action): return WELCOME_MSG_TPL.format( version=version.version_string(), uptime=util.uptime(), timestamp=util.time_rfc2822(), action=action) def extract_fns(args): # Files are already opened so lets just pass that along # since it would of broke if it couldn't have # read that file already... fn_cfgs = [] if args.files: for fh in args.files: # The realpath is more useful in logging # so lets resolve to that... fn_cfgs.append(os.path.realpath(fh.name)) return fn_cfgs def run_module_section(mods, action_name, section): full_section_name = MOD_SECTION_TPL % (section) (which_ran, failures) = mods.run_section(full_section_name) total_attempted = len(which_ran) + len(failures) if total_attempted == 0: msg = ("No '%s' modules to run" " under section '%s'") % (action_name, full_section_name) sys.stderr.write("%s\n" % (msg)) LOG.debug(msg) return [] else: LOG.debug("Ran %s modules with %s failures", len(which_ran), len(failures)) return failures def apply_reporting_cfg(cfg): if cfg.get('reporting'): reporting.update_configuration(cfg.get('reporting')) def parse_cmdline_url(cmdline, names=('cloud-config-url', 'url')): data = util.keyval_str_to_dict(cmdline) for key in names: if key in data: return key, data[key] raise KeyError("No keys (%s) found in string '%s'" % (cmdline, names)) def attempt_cmdline_url(path, network=True, cmdline=None): """Write data from url referenced in command line to path. path: a file to write content to if downloaded. network: should network access be assumed. cmdline: the cmdline to parse for cloud-config-url. This is used in MAAS datasource, in "ephemeral" (read-only root) environment where the instance netboots to iscsi ro root. and the entity that controls the pxe config has to configure the maas datasource. An attempt is made on network urls even in local datasource for case of network set up in initramfs. Return value is a tuple of a logger function (logging.DEBUG) and a message indicating what happened. """ if cmdline is None: cmdline = util.get_cmdline() try: cmdline_name, url = parse_cmdline_url(cmdline) except KeyError: return (logging.DEBUG, "No kernel command line url found.") path_is_local = url.startswith("file://") or url.startswith("/") if path_is_local and os.path.exists(path): if network: m = ("file '%s' existed, possibly from local stage download" " of command line url '%s'. Not re-writing." % (path, url)) level = logging.INFO if path_is_local: level = logging.DEBUG else: m = ("file '%s' existed, possibly from previous boot download" " of command line url '%s'. Not re-writing." % (path, url)) level = logging.WARN return (level, m) kwargs = {'url': url, 'timeout': 10, 'retries': 2} if network or path_is_local: level = logging.WARN kwargs['sec_between'] = 1 else: level = logging.DEBUG kwargs['sec_between'] = .1 data = None header = b'#cloud-config' try: resp = util.read_file_or_url(**kwargs) if resp.ok(): data = resp.contents if not resp.contents.startswith(header): if cmdline_name == 'cloud-config-url': level = logging.WARN else: level = logging.INFO return ( level, "contents of '%s' did not start with %s" % (url, header)) else: return (level, "url '%s' returned code %s. Ignoring." 
% (url, resp.code)) except url_helper.UrlError as e: return (level, "retrieving url '%s' failed: %s" % (url, e)) util.write_file(path, data, mode=0o600) return (logging.INFO, "wrote cloud-config data from %s='%s' to %s" % (cmdline_name, url, path)) def main_init(name, args): deps = [sources.DEP_FILESYSTEM, sources.DEP_NETWORK] if args.local: deps = [sources.DEP_FILESYSTEM] early_logs = [attempt_cmdline_url( path=os.path.join("%s.d" % CLOUD_CONFIG, "91_kernel_cmdline_url.cfg"), network=not args.local)] # Cloud-init 'init' stage is broken up into the following sub-stages # 1. Ensure that the init object fetches its config without errors # 2. Setup logging/output redirections with resultant config (if any) # 3. Initialize the cloud-init filesystem # 4. Check if we can stop early by looking for various files # 5. Fetch the datasource # 6. Connect to the current instance location + update the cache # 7. Consume the userdata (handlers get activated here) # 8. Construct the modules object # 9. Adjust any subsequent logging/output redirections using the modules # objects config as it may be different from init object # 10. Run the modules for the 'init' stage # 11. Done! if not args.local: w_msg = welcome_format(name) else: w_msg = welcome_format("%s-local" % (name)) init = stages.Init(ds_deps=deps, reporter=args.reporter) # Stage 1 init.read_cfg(extract_fns(args)) # Stage 2 outfmt = None errfmt = None try: early_logs.append((logging.DEBUG, "Closing stdin.")) util.close_stdin() (outfmt, errfmt) = util.fixup_output(init.cfg, name) except Exception: msg = "Failed to setup output redirection!" util.logexc(LOG, msg) print_exc(msg) early_logs.append((logging.WARN, msg)) if args.debug: # Reset so that all the debug handlers are closed out LOG.debug(("Logging being reset, this logger may no" " longer be active shortly")) logging.resetLogging() logging.setupLogging(init.cfg) apply_reporting_cfg(init.cfg) # Any log usage prior to setupLogging above did not have local user log # config applied. We send the welcome message now, as stderr/out have # been redirected and log now configured. welcome(name, msg=w_msg) # re-play early log messages before logging was setup for lvl, msg in early_logs: LOG.log(lvl, msg) # Stage 3 try: init.initialize() except Exception: util.logexc(LOG, "Failed to initialize, likely bad things to come!") # Stage 4 path_helper = init.paths mode = sources.DSMODE_LOCAL if args.local else sources.DSMODE_NETWORK if mode == sources.DSMODE_NETWORK: existing = "trust" sys.stderr.write("%s\n" % (netinfo.debug_info())) LOG.debug(("Checking to see if files that we need already" " exist from a previous run that would allow us" " to stop early.")) # no-net is written by upstart cloud-init-nonet when network failed # to come up stop_files = [ os.path.join(path_helper.get_cpath("data"), "no-net"), ] existing_files = [] for fn in stop_files: if os.path.isfile(fn): existing_files.append(fn) if existing_files: LOG.debug("[%s] Exiting. 
stop file %s existed", mode, existing_files) return (None, []) else: LOG.debug("Execution continuing, no previous run detected that" " would allow us to stop early.") else: existing = "check" mcfg = util.get_cfg_option_bool(init.cfg, 'manual_cache_clean', False) if mcfg: LOG.debug("manual cache clean set from config") existing = "trust" else: mfile = path_helper.get_ipath_cur("manual_clean_marker") if os.path.exists(mfile): LOG.debug("manual cache clean found from marker: %s", mfile) existing = "trust" init.purge_cache() # Delete the non-net file as well util.del_file(os.path.join(path_helper.get_cpath("data"), "no-net")) # Stage 5 try: init.fetch(existing=existing) # if in network mode, and the datasource is local # then work was done at that stage. if mode == sources.DSMODE_NETWORK and init.datasource.dsmode != mode: LOG.debug("[%s] Exiting. datasource %s in local mode", mode, init.datasource) return (None, []) except sources.DataSourceNotFoundException: # In the case of 'cloud-init init' without '--local' it is a bit # more likely that the user would consider it failure if nothing was # found. When using upstart it will also mentions job failure # in console log if exit code is != 0. if mode == sources.DSMODE_LOCAL: LOG.debug("No local datasource found") else: util.logexc(LOG, ("No instance datasource found!" " Likely bad things to come!")) if not args.force: init.apply_network_config(bring_up=not args.local) LOG.debug("[%s] Exiting without datasource in local mode", mode) if mode == sources.DSMODE_LOCAL: return (None, []) else: return (None, ["No instance datasource found."]) else: LOG.debug("[%s] barreling on in force mode without datasource", mode) # Stage 6 iid = init.instancify() LOG.debug("[%s] %s will now be targeting instance id: %s. new=%s", mode, name, iid, init.is_new_instance()) if mode == sources.DSMODE_LOCAL: # Before network comes up, set any configured hostname to allow # dhcp clients to advertize this hostname to any DDNS services # LP: #1746455. _maybe_set_hostname(init, stage='local', retry_stage='network') init.apply_network_config(bring_up=bool(mode != sources.DSMODE_LOCAL)) if mode == sources.DSMODE_LOCAL: if init.datasource.dsmode != mode: LOG.debug("[%s] Exiting. datasource %s not in local mode.", mode, init.datasource) return (init.datasource, []) else: LOG.debug("[%s] %s is in local mode, will apply init modules now.", mode, init.datasource) # Give the datasource a chance to use network resources. # This is used on Azure to communicate with the fabric over network. init.setup_datasource() # update fully realizes user-data (pulling in #include if necessary) init.update() _maybe_set_hostname(init, stage='init-net', retry_stage='modules:config') # Stage 7 try: # Attempt to consume the data per instance. # This may run user-data handlers and/or perform # url downloads and such as needed. (ran, _results) = init.cloudify().run('consume_data', init.consume_data, args=[PER_INSTANCE], freq=PER_INSTANCE) if not ran: # Just consume anything that is set to run per-always # if nothing ran in the per-instance code # # See: https://bugs.launchpad.net/bugs/819507 for a little # reason behind this... 
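            # (When the per-instance semaphore already exists, 'ran' comes
            # back False; per-always handlers must still get a pass on every
            # boot, hence the explicit PER_ALWAYS consume below.)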
init.consume_data(PER_ALWAYS) except Exception: util.logexc(LOG, "Consuming user data failed!") return (init.datasource, ["Consuming user data failed!"]) apply_reporting_cfg(init.cfg) # Stage 8 - re-read and apply relevant cloud-config to include user-data mods = stages.Modules(init, extract_fns(args), reporter=args.reporter) # Stage 9 try: outfmt_orig = outfmt errfmt_orig = errfmt (outfmt, errfmt) = util.get_output_cfg(mods.cfg, name) if outfmt_orig != outfmt or errfmt_orig != errfmt: LOG.warning("Stdout, stderr changing to (%s, %s)", outfmt, errfmt) (outfmt, errfmt) = util.fixup_output(mods.cfg, name) except Exception: util.logexc(LOG, "Failed to re-adjust output redirection!") logging.setupLogging(mods.cfg) # give the activated datasource a chance to adjust init.activate_datasource() di_report_warn(datasource=init.datasource, cfg=init.cfg) # Stage 10 return (init.datasource, run_module_section(mods, name, name)) def di_report_warn(datasource, cfg): if 'di_report' not in cfg: LOG.debug("no di_report found in config.") return dicfg = cfg['di_report'] if dicfg is None: # ds-identify may write 'di_report:\n #comment\n' # which reads as {'di_report': None} LOG.debug("di_report was None.") return if not isinstance(dicfg, dict): LOG.warning("di_report config not a dictionary: %s", dicfg) return dslist = dicfg.get('datasource_list') if dslist is None: LOG.warning("no 'datasource_list' found in di_report.") return elif not isinstance(dslist, list): LOG.warning("di_report/datasource_list not a list: %s", dslist) return # ds.__module__ is like cloudinit.sources.DataSourceName # where Name is the thing that shows up in datasource_list. modname = datasource.__module__.rpartition(".")[2] if modname.startswith(sources.DS_PREFIX): modname = modname[len(sources.DS_PREFIX):] else: LOG.warning("Datasource '%s' came from unexpected module '%s'.", datasource, modname) if modname in dslist: LOG.debug("used datasource '%s' from '%s' was in di_report's list: %s", datasource, modname, dslist) return warnings.show_warning('dsid_missing_source', cfg, source=modname, dslist=str(dslist)) def main_modules(action_name, args): name = args.mode # Cloud-init 'modules' stages are broken up into the following sub-stages # 1. Ensure that the init object fetches its config without errors # 2. Get the datasource from the init object, if it does # not exist then that means the main_init stage never # worked, and thus this stage can not run. # 3. Construct the modules object # 4. Adjust any subsequent logging/output redirections using # the modules objects configuration # 5. Run the modules for the given stage name # 6. Done! w_msg = welcome_format("%s:%s" % (action_name, name)) init = stages.Init(ds_deps=[], reporter=args.reporter) # Stage 1 init.read_cfg(extract_fns(args)) # Stage 2 try: init.fetch(existing="trust") except sources.DataSourceNotFoundException: # There was no datasource found, theres nothing to do msg = ('Can not apply stage %s, no datasource found! Likely bad ' 'things to come!' 
% name) util.logexc(LOG, msg) print_exc(msg) if not args.force: return [(msg)] # Stage 3 mods = stages.Modules(init, extract_fns(args), reporter=args.reporter) # Stage 4 try: LOG.debug("Closing stdin") util.close_stdin() util.fixup_output(mods.cfg, name) except Exception: util.logexc(LOG, "Failed to setup output redirection!") if args.debug: # Reset so that all the debug handlers are closed out LOG.debug(("Logging being reset, this logger may no" " longer be active shortly")) logging.resetLogging() logging.setupLogging(mods.cfg) apply_reporting_cfg(init.cfg) # now that logging is setup and stdout redirected, send welcome welcome(name, msg=w_msg) # Stage 5 return run_module_section(mods, name, name) def main_single(name, args): # Cloud-init single stage is broken up into the following sub-stages # 1. Ensure that the init object fetches its config without errors # 2. Attempt to fetch the datasource (warn if it doesn't work) # 3. Construct the modules object # 4. Adjust any subsequent logging/output redirections using # the modules objects configuration # 5. Run the single module # 6. Done! mod_name = args.name w_msg = welcome_format(name) init = stages.Init(ds_deps=[], reporter=args.reporter) # Stage 1 init.read_cfg(extract_fns(args)) # Stage 2 try: init.fetch(existing="trust") except sources.DataSourceNotFoundException: # There was no datasource found, # that might be bad (or ok) depending on # the module being ran (so continue on) util.logexc(LOG, ("Failed to fetch your datasource," " likely bad things to come!")) print_exc(("Failed to fetch your datasource," " likely bad things to come!")) if not args.force: return 1 # Stage 3 mods = stages.Modules(init, extract_fns(args), reporter=args.reporter) mod_args = args.module_args if mod_args: LOG.debug("Using passed in arguments %s", mod_args) mod_freq = args.frequency if mod_freq: LOG.debug("Using passed in frequency %s", mod_freq) mod_freq = FREQ_SHORT_NAMES.get(mod_freq) # Stage 4 try: LOG.debug("Closing stdin") util.close_stdin() util.fixup_output(mods.cfg, None) except Exception: util.logexc(LOG, "Failed to setup output redirection!") if args.debug: # Reset so that all the debug handlers are closed out LOG.debug(("Logging being reset, this logger may no" " longer be active shortly")) logging.resetLogging() logging.setupLogging(mods.cfg) apply_reporting_cfg(init.cfg) # now that logging is setup and stdout redirected, send welcome welcome(name, msg=w_msg) # Stage 5 (which_ran, failures) = mods.run_single(mod_name, mod_args, mod_freq) if failures: LOG.warning("Ran %s but it failed!", mod_name) return 1 elif not which_ran: LOG.warning("Did not run %s, does it exist?", mod_name) return 1 else: # Guess it worked return 0 def dhclient_hook(name, args): record = LogDhclient(args) record.check_hooks_dir() record.record() def status_wrapper(name, args, data_d=None, link_d=None): if data_d is None: data_d = os.path.normpath("/var/lib/cloud/data") if link_d is None: link_d = os.path.normpath("/run/cloud-init") status_path = os.path.join(data_d, "status.json") status_link = os.path.join(link_d, "status.json") result_path = os.path.join(data_d, "result.json") result_link = os.path.join(link_d, "result.json") util.ensure_dirs((data_d, link_d,)) (_name, functor) = args.action if name == "init": if args.local: mode = "init-local" else: mode = "init" elif name == "modules": mode = "modules-%s" % args.mode else: raise ValueError("unknown name: %s" % name) modes = ('init', 'init-local', 'modules-init', 'modules-config', 'modules-final') if mode not in modes: 
raise ValueError( "Invalid cloud init mode specified '{0}'".format(mode)) status = None if mode == 'init-local': for f in (status_link, result_link, status_path, result_path): util.del_file(f) else: try: status = json.loads(util.load_file(status_path)) except Exception: pass nullstatus = { 'errors': [], 'start': None, 'finished': None, } if status is None: status = {'v1': {}} for m in modes: status['v1'][m] = nullstatus.copy() status['v1']['datasource'] = None elif mode not in status['v1']: status['v1'][mode] = nullstatus.copy() v1 = status['v1'] v1['stage'] = mode v1[mode]['start'] = time.time() atomic_helper.write_json(status_path, status) util.sym_link(os.path.relpath(status_path, link_d), status_link, force=True) try: ret = functor(name, args) if mode in ('init', 'init-local'): (datasource, errors) = ret if datasource is not None: v1['datasource'] = str(datasource) else: errors = ret v1[mode]['errors'] = [str(e) for e in errors] except Exception as e: util.logexc(LOG, "failed stage %s", mode) print_exc("failed run of stage %s" % mode) v1[mode]['errors'] = [str(e)] v1[mode]['finished'] = time.time() v1['stage'] = None atomic_helper.write_json(status_path, status) if mode == "modules-final": # write the 'finished' file errors = [] for m in modes: if v1[m]['errors']: errors.extend(v1[m].get('errors', [])) atomic_helper.write_json( result_path, {'v1': {'datasource': v1['datasource'], 'errors': errors}}) util.sym_link(os.path.relpath(result_path, link_d), result_link, force=True) return len(v1[mode]['errors']) def _maybe_set_hostname(init, stage, retry_stage): """Call set-hostname if metadata, vendordata or userdata provides it. @param stage: String representing current stage in which we are running. @param retry_stage: String represented logs upon error setting hostname. """ cloud = init.cloudify() (hostname, _fqdn) = util.get_hostname_fqdn( init.cfg, cloud, metadata_only=True) if hostname: # meta-data or user-data hostname content try: cc_set_hostname.handle('set-hostname', init.cfg, cloud, LOG, None) except cc_set_hostname.SetHostnameError as e: LOG.debug( 'Failed setting hostname in %s stage. Will' ' retry in %s stage. 
Error: %s.', stage, retry_stage, str(e)) def main_features(name, args): sys.stdout.write('\n'.join(sorted(version.FEATURES)) + '\n') def main(sysv_args=None): if not sysv_args: sysv_args = sys.argv parser = argparse.ArgumentParser(prog=sysv_args[0]) sysv_args = sysv_args[1:] # Top level args parser.add_argument('--version', '-v', action='version', version='%(prog)s ' + (version.version_string())) parser.add_argument('--file', '-f', action='append', dest='files', help=('additional yaml configuration' ' files to use'), type=argparse.FileType('rb')) parser.add_argument('--debug', '-d', action='store_true', help=('show additional pre-action' ' logging (default: %(default)s)'), default=False) parser.add_argument('--force', action='store_true', help=('force running even if no datasource is' ' found (use at your own risk)'), dest='force', default=False) parser.set_defaults(reporter=None) subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand') subparsers.required = True # Each action and its sub-options (if any) parser_init = subparsers.add_parser('init', help=('initializes cloud-init and' ' performs initial modules')) parser_init.add_argument("--local", '-l', action='store_true', help="start in local mode (default: %(default)s)", default=False) # This is used so that we can know which action is selected + # the functor to use to run this subcommand parser_init.set_defaults(action=('init', main_init)) # These settings are used for the 'config' and 'final' stages parser_mod = subparsers.add_parser('modules', help=('activates modules using ' 'a given configuration key')) parser_mod.add_argument("--mode", '-m', action='store', help=("module configuration name " "to use (default: %(default)s)"), default='config', choices=('init', 'config', 'final')) parser_mod.set_defaults(action=('modules', main_modules)) # This subcommand allows you to run a single module parser_single = subparsers.add_parser('single', help=('run a single module ')) parser_single.add_argument("--name", '-n', action="store", help="module name to run", required=True) parser_single.add_argument("--frequency", action="store", help=("frequency of the module"), required=False, choices=list(FREQ_SHORT_NAMES.keys())) parser_single.add_argument("--report", action="store_true", help="enable reporting", required=False) parser_single.add_argument("module_args", nargs="*", metavar='argument', help=('any additional arguments to' ' pass to this module')) parser_single.set_defaults(action=('single', main_single)) parser_dhclient = subparsers.add_parser('dhclient-hook', help=('run the dhclient hook' 'to record network info')) parser_dhclient.add_argument("net_action", help=('action taken on the interface')) parser_dhclient.add_argument("net_interface", help=('the network interface being acted' ' upon')) parser_dhclient.set_defaults(action=('dhclient_hook', dhclient_hook)) parser_features = subparsers.add_parser('features', help=('list defined features')) parser_features.set_defaults(action=('features', main_features)) parser_analyze = subparsers.add_parser( 'analyze', help='Devel tool: Analyze cloud-init logs and data') parser_devel = subparsers.add_parser( 'devel', help='Run development tools') parser_collect_logs = subparsers.add_parser( 'collect-logs', help='Collect and tar all cloud-init debug info') parser_clean = subparsers.add_parser( 'clean', help='Remove logs and artifacts so cloud-init can re-run.') parser_status = subparsers.add_parser( 'status', help='Report cloud-init status or wait on completion.') if sysv_args: # 
Only load subparsers if subcommand is specified to avoid load cost if sysv_args[0] == 'analyze': from cloudinit.analyze.__main__ import get_parser as analyze_parser # Construct analyze subcommand parser analyze_parser(parser_analyze) elif sysv_args[0] == 'devel': from cloudinit.cmd.devel.parser import get_parser as devel_parser # Construct devel subcommand parser devel_parser(parser_devel) elif sysv_args[0] == 'collect-logs': from cloudinit.cmd.devel.logs import ( get_parser as logs_parser, handle_collect_logs_args) logs_parser(parser_collect_logs) parser_collect_logs.set_defaults( action=('collect-logs', handle_collect_logs_args)) elif sysv_args[0] == 'clean': from cloudinit.cmd.clean import ( get_parser as clean_parser, handle_clean_args) clean_parser(parser_clean) parser_clean.set_defaults( action=('clean', handle_clean_args)) elif sysv_args[0] == 'status': from cloudinit.cmd.status import ( get_parser as status_parser, handle_status_args) status_parser(parser_status) parser_status.set_defaults( action=('status', handle_status_args)) args = parser.parse_args(args=sysv_args) # Subparsers.required = True and each subparser sets action=(name, functor) (name, functor) = args.action # Setup basic logging to start (until reinitialized) # iff in debug mode. if args.debug: logging.setupBasicLogging() # Setup signal handlers before running signal_handler.attach_handlers() if name in ("modules", "init"): functor = status_wrapper rname = None report_on = True if name == "init": if args.local: rname, rdesc = ("init-local", "searching for local datasources") else: rname, rdesc = ("init-network", "searching for network datasources") elif name == "modules": rname, rdesc = ("modules-%s" % args.mode, "running modules for %s" % args.mode) elif name == "single": rname, rdesc = ("single/%s" % args.name, "running single module %s" % args.name) report_on = args.report else: rname = name rdesc = "running 'cloud-init %s'" % name report_on = False args.reporter = events.ReportEventStack( rname, rdesc, reporting_enabled=report_on) with args.reporter: return util.log_time( logfunc=LOG.debug, msg="cloud-init mode '%s'" % name, get_uptime=True, func=functor, args=(name, args)) if __name__ == '__main__': if 'TZ' not in os.environ: os.environ['TZ'] = ":/etc/localtime" main(sys.argv) # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/cmd/status.py000066400000000000000000000126011326573344200216330ustar00rootroot00000000000000# Copyright (C) 2017 Canonical Ltd. # # This file is part of cloud-init. See LICENSE file for license information. """Define 'status' utility and handler as part of cloud-init commandline.""" import argparse import os import sys from time import gmtime, strftime, sleep from cloudinit.distros import uses_systemd from cloudinit.stages import Init from cloudinit.util import get_cmdline, load_file, load_json CLOUDINIT_DISABLED_FILE = '/etc/cloud/cloud-init.disabled' # customer visible status messages STATUS_ENABLED_NOT_RUN = 'not run' STATUS_RUNNING = 'running' STATUS_DONE = 'done' STATUS_ERROR = 'error' STATUS_DISABLED = 'disabled' def get_parser(parser=None): """Build or extend an arg parser for status utility. @param parser: Optional existing ArgumentParser instance representing the status subcommand which will be extended to support the args of this utility. @returns: ArgumentParser with proper argument configuration. 
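    A hedged usage sketch (flag spellings match the arguments added below):

        parser = get_parser()
        args = parser.parse_args(['--long', '--wait'])
        # args.long triggers the stage/time/detail output; args.wait makes
        # handle_status_args poll until cloud-init leaves the running state.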
""" if not parser: parser = argparse.ArgumentParser( prog='status', description='Report run status of cloud init') parser.add_argument( '-l', '--long', action='store_true', default=False, help=('Report long format of statuses including run stage name and' ' error messages')) parser.add_argument( '-w', '--wait', action='store_true', default=False, help='Block waiting on cloud-init to complete') return parser def handle_status_args(name, args): """Handle calls to 'cloud-init status' as a subcommand.""" # Read configured paths init = Init(ds_deps=[]) init.read_cfg() status, status_detail, time = _get_status_details(init.paths) if args.wait: while status in (STATUS_ENABLED_NOT_RUN, STATUS_RUNNING): sys.stdout.write('.') sys.stdout.flush() status, status_detail, time = _get_status_details(init.paths) sleep(0.25) sys.stdout.write('\n') if args.long: print('status: {0}'.format(status)) if time: print('time: {0}'.format(time)) print('detail:\n{0}'.format(status_detail)) else: print('status: {0}'.format(status)) return 1 if status == STATUS_ERROR else 0 def _is_cloudinit_disabled(disable_file, paths): """Report whether cloud-init is disabled. @param disable_file: The path to the cloud-init disable file. @param paths: An initialized cloudinit.helpers.Paths object. @returns: A tuple containing (bool, reason) about cloud-init's status and why. """ is_disabled = False cmdline_parts = get_cmdline().split() if not uses_systemd(): reason = 'Cloud-init enabled on sysvinit' elif 'cloud-init=enabled' in cmdline_parts: reason = 'Cloud-init enabled by kernel command line cloud-init=enabled' elif os.path.exists(disable_file): is_disabled = True reason = 'Cloud-init disabled by {0}'.format(disable_file) elif 'cloud-init=disabled' in cmdline_parts: is_disabled = True reason = 'Cloud-init disabled by kernel parameter cloud-init=disabled' elif not os.path.exists(os.path.join(paths.run_dir, 'enabled')): is_disabled = True reason = 'Cloud-init disabled by cloud-init-generator' else: reason = 'Cloud-init enabled by systemd cloud-init-generator' return (is_disabled, reason) def _get_status_details(paths): """Return a 3-tuple of status, status_details and time of last event. @param paths: An initialized cloudinit.helpers.paths object. Values are obtained from parsing paths.run_dir/status.json. 
""" status = STATUS_ENABLED_NOT_RUN status_detail = '' status_v1 = {} status_file = os.path.join(paths.run_dir, 'status.json') result_file = os.path.join(paths.run_dir, 'result.json') (is_disabled, reason) = _is_cloudinit_disabled( CLOUDINIT_DISABLED_FILE, paths) if is_disabled: status = STATUS_DISABLED status_detail = reason if os.path.exists(status_file): if not os.path.exists(result_file): status = STATUS_RUNNING status_v1 = load_json(load_file(status_file)).get('v1', {}) errors = [] latest_event = 0 for key, value in sorted(status_v1.items()): if key == 'stage': if value: status = STATUS_RUNNING status_detail = 'Running in stage: {0}'.format(value) elif key == 'datasource': status_detail = value elif isinstance(value, dict): errors.extend(value.get('errors', [])) start = value.get('start') or 0 finished = value.get('finished') or 0 if finished == 0 and start != 0: status = STATUS_RUNNING event_time = max(start, finished) if event_time > latest_event: latest_event = event_time if errors: status = STATUS_ERROR status_detail = '\n'.join(errors) elif status == STATUS_ENABLED_NOT_RUN and latest_event > 0: status = STATUS_DONE if latest_event: time = strftime('%a, %d %b %Y %H:%M:%S %z', gmtime(latest_event)) else: time = '' return status, status_detail, time def main(): """Tool to report status of cloud-init.""" parser = get_parser() sys.exit(handle_status_args('status', parser.parse_args())) if __name__ == '__main__': main() # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/cmd/tests/000077500000000000000000000000001326573344200211005ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/cmd/tests/__init__.py000066400000000000000000000000001326573344200231770ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/cmd/tests/test_clean.py000066400000000000000000000156171326573344200236050ustar00rootroot00000000000000# This file is part of cloud-init. See LICENSE file for license information. 
from cloudinit.cmd import clean from cloudinit.util import ensure_dir, sym_link, write_file from cloudinit.tests.helpers import CiTestCase, wrap_and_call, mock from collections import namedtuple import os from six import StringIO mypaths = namedtuple('MyPaths', 'cloud_dir') class TestClean(CiTestCase): def setUp(self): super(TestClean, self).setUp() self.new_root = self.tmp_dir() self.artifact_dir = self.tmp_path('artifacts', self.new_root) self.log1 = self.tmp_path('cloud-init.log', self.new_root) self.log2 = self.tmp_path('cloud-init-output.log', self.new_root) class FakeInit(object): cfg = {'def_log_file': self.log1, 'output': {'all': '|tee -a {0}'.format(self.log2)}} paths = mypaths(cloud_dir=self.artifact_dir) def __init__(self, ds_deps): pass def read_cfg(self): pass self.init_class = FakeInit def test_remove_artifacts_removes_logs(self): """remove_artifacts removes logs when remove_logs is True.""" write_file(self.log1, 'cloud-init-log') write_file(self.log2, 'cloud-init-output-log') self.assertFalse( os.path.exists(self.artifact_dir), 'Unexpected artifacts dir') retcode = wrap_and_call( 'cloudinit.cmd.clean', {'Init': {'side_effect': self.init_class}}, clean.remove_artifacts, remove_logs=True) self.assertFalse(os.path.exists(self.log1), 'Unexpected file') self.assertFalse(os.path.exists(self.log2), 'Unexpected file') self.assertEqual(0, retcode) def test_remove_artifacts_preserves_logs(self): """remove_artifacts leaves logs when remove_logs is False.""" write_file(self.log1, 'cloud-init-log') write_file(self.log2, 'cloud-init-output-log') retcode = wrap_and_call( 'cloudinit.cmd.clean', {'Init': {'side_effect': self.init_class}}, clean.remove_artifacts, remove_logs=False) self.assertTrue(os.path.exists(self.log1), 'Missing expected file') self.assertTrue(os.path.exists(self.log2), 'Missing expected file') self.assertEqual(0, retcode) def test_remove_artifacts_removes_unlinks_symlinks(self): """remove_artifacts cleans artifacts dir unlinking any symlinks.""" dir1 = os.path.join(self.artifact_dir, 'dir1') ensure_dir(dir1) symlink = os.path.join(self.artifact_dir, 'mylink') sym_link(dir1, symlink) retcode = wrap_and_call( 'cloudinit.cmd.clean', {'Init': {'side_effect': self.init_class}}, clean.remove_artifacts, remove_logs=False) self.assertEqual(0, retcode) for path in (dir1, symlink): self.assertFalse( os.path.exists(path), 'Unexpected {0} dir'.format(path)) def test_remove_artifacts_removes_artifacts_skipping_seed(self): """remove_artifacts cleans artifacts dir with exception of seed dir.""" dirs = [ self.artifact_dir, os.path.join(self.artifact_dir, 'seed'), os.path.join(self.artifact_dir, 'dir1'), os.path.join(self.artifact_dir, 'dir2')] for _dir in dirs: ensure_dir(_dir) retcode = wrap_and_call( 'cloudinit.cmd.clean', {'Init': {'side_effect': self.init_class}}, clean.remove_artifacts, remove_logs=False) self.assertEqual(0, retcode) for expected_dir in dirs[:2]: self.assertTrue( os.path.exists(expected_dir), 'Missing {0} dir'.format(expected_dir)) for deleted_dir in dirs[2:]: self.assertFalse( os.path.exists(deleted_dir), 'Unexpected {0} dir'.format(deleted_dir)) def test_remove_artifacts_removes_artifacts_removes_seed(self): """remove_artifacts removes seed dir when remove_seed is True.""" dirs = [ self.artifact_dir, os.path.join(self.artifact_dir, 'seed'), os.path.join(self.artifact_dir, 'dir1'), os.path.join(self.artifact_dir, 'dir2')] for _dir in dirs: ensure_dir(_dir) retcode = wrap_and_call( 'cloudinit.cmd.clean', {'Init': {'side_effect': self.init_class}}, 
clean.remove_artifacts, remove_logs=False, remove_seed=True) self.assertEqual(0, retcode) self.assertTrue( os.path.exists(self.artifact_dir), 'Missing artifact dir') for deleted_dir in dirs[1:]: self.assertFalse( os.path.exists(deleted_dir), 'Unexpected {0} dir'.format(deleted_dir)) def test_remove_artifacts_returns_one_on_errors(self): """remove_artifacts returns non-zero on failure and prints an error.""" ensure_dir(self.artifact_dir) ensure_dir(os.path.join(self.artifact_dir, 'dir1')) with mock.patch('sys.stderr', new_callable=StringIO) as m_stderr: retcode = wrap_and_call( 'cloudinit.cmd.clean', {'del_dir': {'side_effect': OSError('oops')}, 'Init': {'side_effect': self.init_class}}, clean.remove_artifacts, remove_logs=False) self.assertEqual(1, retcode) self.assertEqual( 'ERROR: Could not remove dir1: oops\n', m_stderr.getvalue()) def test_handle_clean_args_reboots(self): """handle_clean_args_reboots when reboot arg is provided.""" called_cmds = [] def fake_subp(cmd, capture): called_cmds.append((cmd, capture)) return '', '' myargs = namedtuple('MyArgs', 'remove_logs remove_seed reboot') cmdargs = myargs(remove_logs=False, remove_seed=False, reboot=True) retcode = wrap_and_call( 'cloudinit.cmd.clean', {'subp': {'side_effect': fake_subp}, 'Init': {'side_effect': self.init_class}}, clean.handle_clean_args, name='does not matter', args=cmdargs) self.assertEqual(0, retcode) self.assertEqual( [(['shutdown', '-r', 'now'], False)], called_cmds) def test_status_main(self): '''clean.main can be run as a standalone script.''' write_file(self.log1, 'cloud-init-log') with self.assertRaises(SystemExit) as context_manager: wrap_and_call( 'cloudinit.cmd.clean', {'Init': {'side_effect': self.init_class}, 'sys.exit': {'side_effect': self.sys_exit}, 'sys.argv': {'new': ['clean', '--logs']}}, clean.main) self.assertEqual(0, context_manager.exception.code) self.assertFalse( os.path.exists(self.log1), 'Unexpected log {0}'.format(self.log1)) # vi: ts=4 expandtab syntax=python cloud-init-18.2-14-g6d48d265/cloudinit/cmd/tests/test_main.py000066400000000000000000000150261326573344200234410ustar00rootroot00000000000000# This file is part of cloud-init. See LICENSE file for license information. 
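# A hedged sketch of the entry point these tests drive (names taken from
# cloudinit.cmd.main; the argument bundle is illustrative):
#
#     from cloudinit.cmd import main
#     datasource, failures = main.main_init('init', cmdargs)
#     # The tests below fake the stdin/output plumbing, then assert on the
#     # returned failure list and on the files the init stage writes under
#     # the mocked /var/lib/cloud.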
from collections import namedtuple import copy import os from six import StringIO from cloudinit.cmd import main from cloudinit.util import ( ensure_dir, load_file, write_file, yaml_dumps) from cloudinit.tests.helpers import ( FilesystemMockingTestCase, wrap_and_call) mypaths = namedtuple('MyPaths', 'run_dir') myargs = namedtuple('MyArgs', 'debug files force local reporter subcommand') class TestMain(FilesystemMockingTestCase): with_logs = True def setUp(self): super(TestMain, self).setUp() self.new_root = self.tmp_dir() self.cloud_dir = self.tmp_path('var/lib/cloud/', dir=self.new_root) os.makedirs(self.cloud_dir) self.replicateTestRoot('simple_ubuntu', self.new_root) self.cfg = { 'datasource_list': ['None'], 'runcmd': ['ls /etc'], # test ALL_DISTROS 'system_info': {'paths': {'cloud_dir': self.cloud_dir, 'run_dir': self.new_root}}, 'write_files': [ { 'path': '/etc/blah.ini', 'content': 'blah', 'permissions': 0o755, }, ], 'cloud_init_modules': ['write-files', 'runcmd'], } cloud_cfg = yaml_dumps(self.cfg) ensure_dir(os.path.join(self.new_root, 'etc', 'cloud')) self.cloud_cfg_file = os.path.join( self.new_root, 'etc', 'cloud', 'cloud.cfg') write_file(self.cloud_cfg_file, cloud_cfg) self.patchOS(self.new_root) self.patchUtils(self.new_root) self.stderr = StringIO() self.patchStdoutAndStderr(stderr=self.stderr) def test_main_init_run_net_stops_on_file_no_net(self): """When no-net file is present, main_init does not process modules.""" stop_file = os.path.join(self.cloud_dir, 'data', 'no-net') # stop file write_file(stop_file, '') cmdargs = myargs( debug=False, files=None, force=False, local=False, reporter=None, subcommand='init') (item1, item2) = wrap_and_call( 'cloudinit.cmd.main', {'util.close_stdin': True, 'netinfo.debug_info': 'my net debug info', 'util.fixup_output': ('outfmt', 'errfmt')}, main.main_init, 'init', cmdargs) # We should not run write_files module self.assertFalse( os.path.exists(os.path.join(self.new_root, 'etc/blah.ini')), 'Unexpected run of write_files module produced blah.ini') self.assertEqual([], item2) # Instancify is called instance_id_path = 'var/lib/cloud/data/instance-id' self.assertFalse( os.path.exists(os.path.join(self.new_root, instance_id_path)), 'Unexpected call to datasource.instancify produced instance-id') expected_logs = [ "Exiting. 
stop file ['{stop_file}'] existed\n".format( stop_file=stop_file), 'my net debug info' # netinfo.debug_info ] for log in expected_logs: self.assertIn(log, self.stderr.getvalue()) def test_main_init_run_net_runs_modules(self): """Modules like write_files are run in 'net' mode.""" cmdargs = myargs( debug=False, files=None, force=False, local=False, reporter=None, subcommand='init') (item1, item2) = wrap_and_call( 'cloudinit.cmd.main', {'util.close_stdin': True, 'netinfo.debug_info': 'my net debug info', 'util.fixup_output': ('outfmt', 'errfmt')}, main.main_init, 'init', cmdargs) self.assertEqual([], item2) # Instancify is called instance_id_path = 'var/lib/cloud/data/instance-id' self.assertEqual( 'iid-datasource-none\n', os.path.join(load_file( os.path.join(self.new_root, instance_id_path)))) # modules are run (including write_files) self.assertEqual( 'blah', load_file(os.path.join(self.new_root, 'etc/blah.ini'))) expected_logs = [ 'network config is disabled by fallback', # apply_network_config 'my net debug info', # netinfo.debug_info 'no previous run detected' ] for log in expected_logs: self.assertIn(log, self.stderr.getvalue()) def test_main_init_run_net_calls_set_hostname_when_metadata_present(self): """When local-hostname metadata is present, call cc_set_hostname.""" self.cfg['datasource'] = { 'None': {'metadata': {'local-hostname': 'md-hostname'}}} cloud_cfg = yaml_dumps(self.cfg) write_file(self.cloud_cfg_file, cloud_cfg) cmdargs = myargs( debug=False, files=None, force=False, local=False, reporter=None, subcommand='init') def set_hostname(name, cfg, cloud, log, args): self.assertEqual('set-hostname', name) updated_cfg = copy.deepcopy(self.cfg) updated_cfg.update( {'def_log_file': '/var/log/cloud-init.log', 'log_cfgs': [], 'syslog_fix_perms': ['syslog:adm', 'root:adm', 'root:wheel'], 'vendor_data': {'enabled': True, 'prefix': []}}) updated_cfg.pop('system_info') self.assertEqual(updated_cfg, cfg) self.assertEqual(main.LOG, log) self.assertIsNone(args) (item1, item2) = wrap_and_call( 'cloudinit.cmd.main', {'util.close_stdin': True, 'netinfo.debug_info': 'my net debug info', 'cc_set_hostname.handle': {'side_effect': set_hostname}, 'util.fixup_output': ('outfmt', 'errfmt')}, main.main_init, 'init', cmdargs) self.assertEqual([], item2) # Instancify is called instance_id_path = 'var/lib/cloud/data/instance-id' self.assertEqual( 'iid-datasource-none\n', os.path.join(load_file( os.path.join(self.new_root, instance_id_path)))) # modules are run (including write_files) self.assertEqual( 'blah', load_file(os.path.join(self.new_root, 'etc/blah.ini'))) expected_logs = [ 'network config is disabled by fallback', # apply_network_config 'my net debug info', # netinfo.debug_info 'no previous run detected' ] for log in expected_logs: self.assertIn(log, self.stderr.getvalue()) # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/cmd/tests/test_status.py000066400000000000000000000425141326573344200240420ustar00rootroot00000000000000# This file is part of cloud-init. See LICENSE file for license information. 
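# A hedged sketch of the helpers under test (paths illustrative; the
# disable-file constant and the decision order come from
# cloudinit.cmd.status):
#
#     from cloudinit.cmd import status
#     is_disabled, reason = status._is_cloudinit_disabled(
#         status.CLOUDINIT_DISABLED_FILE, paths)
#     # True plus a reason string when the disable file exists, the kernel
#     # command line carries cloud-init=disabled, or the systemd generator
#     # never wrote run_dir/enabled.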
from collections import namedtuple import os from six import StringIO from textwrap import dedent from cloudinit.atomic_helper import write_json from cloudinit.cmd import status from cloudinit.util import ensure_file from cloudinit.tests.helpers import CiTestCase, wrap_and_call, mock mypaths = namedtuple('MyPaths', 'run_dir') myargs = namedtuple('MyArgs', 'long wait') class TestStatus(CiTestCase): def setUp(self): super(TestStatus, self).setUp() self.new_root = self.tmp_dir() self.status_file = self.tmp_path('status.json', self.new_root) self.disable_file = self.tmp_path('cloudinit-disable', self.new_root) self.paths = mypaths(run_dir=self.new_root) class FakeInit(object): paths = self.paths def __init__(self, ds_deps): pass def read_cfg(self): pass self.init_class = FakeInit def test__is_cloudinit_disabled_false_on_sysvinit(self): '''When not in an environment using systemd, return False.''' ensure_file(self.disable_file) # Create the ignored disable file (is_disabled, reason) = wrap_and_call( 'cloudinit.cmd.status', {'uses_systemd': False}, status._is_cloudinit_disabled, self.disable_file, self.paths) self.assertFalse( is_disabled, 'expected enabled cloud-init on sysvinit') self.assertEqual('Cloud-init enabled on sysvinit', reason) def test__is_cloudinit_disabled_true_on_disable_file(self): '''When using systemd and disable_file is present return disabled.''' ensure_file(self.disable_file) # Create observed disable file (is_disabled, reason) = wrap_and_call( 'cloudinit.cmd.status', {'uses_systemd': True}, status._is_cloudinit_disabled, self.disable_file, self.paths) self.assertTrue(is_disabled, 'expected disabled cloud-init') self.assertEqual( 'Cloud-init disabled by {0}'.format(self.disable_file), reason) def test__is_cloudinit_disabled_false_on_kernel_cmdline_enable(self): '''Not disabled when using systemd and enabled via commandline.''' ensure_file(self.disable_file) # Create ignored disable file (is_disabled, reason) = wrap_and_call( 'cloudinit.cmd.status', {'uses_systemd': True, 'get_cmdline': 'something cloud-init=enabled else'}, status._is_cloudinit_disabled, self.disable_file, self.paths) self.assertFalse(is_disabled, 'expected enabled cloud-init') self.assertEqual( 'Cloud-init enabled by kernel command line cloud-init=enabled', reason) def test__is_cloudinit_disabled_true_on_kernel_cmdline(self): '''When using systemd and disable_file is present return disabled.''' (is_disabled, reason) = wrap_and_call( 'cloudinit.cmd.status', {'uses_systemd': True, 'get_cmdline': 'something cloud-init=disabled else'}, status._is_cloudinit_disabled, self.disable_file, self.paths) self.assertTrue(is_disabled, 'expected disabled cloud-init') self.assertEqual( 'Cloud-init disabled by kernel parameter cloud-init=disabled', reason) def test__is_cloudinit_disabled_true_when_generator_disables(self): '''When cloud-init-generator doesn't write enabled file return True.''' enabled_file = os.path.join(self.paths.run_dir, 'enabled') self.assertFalse(os.path.exists(enabled_file)) (is_disabled, reason) = wrap_and_call( 'cloudinit.cmd.status', {'uses_systemd': True, 'get_cmdline': 'something'}, status._is_cloudinit_disabled, self.disable_file, self.paths) self.assertTrue(is_disabled, 'expected disabled cloud-init') self.assertEqual('Cloud-init disabled by cloud-init-generator', reason) def test__is_cloudinit_disabled_false_when_enabled_in_systemd(self): '''Report enabled when systemd generator creates the enabled file.''' enabled_file = os.path.join(self.paths.run_dir, 'enabled') ensure_file(enabled_file) 
(is_disabled, reason) = wrap_and_call( 'cloudinit.cmd.status', {'uses_systemd': True, 'get_cmdline': 'something ignored'}, status._is_cloudinit_disabled, self.disable_file, self.paths) self.assertFalse(is_disabled, 'expected enabled cloud-init') self.assertEqual( 'Cloud-init enabled by systemd cloud-init-generator', reason) def test_status_returns_not_run(self): '''When status.json does not exist yet, return 'not run'.''' self.assertFalse( os.path.exists(self.status_file), 'Unexpected status.json found') cmdargs = myargs(long=False, wait=False) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(0, retcode) self.assertEqual('status: not run\n', m_stdout.getvalue()) def test_status_returns_disabled_long_on_presence_of_disable_file(self): '''When cloudinit is disabled, return disabled reason.''' checked_files = [] def fakeexists(filepath): checked_files.append(filepath) status_file = os.path.join(self.paths.run_dir, 'status.json') return filepath != status_file cmdargs = myargs(long=True, wait=False) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'os.path.exists': {'side_effect': fakeexists}, '_is_cloudinit_disabled': (True, 'disabled for some reason'), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(0, retcode) self.assertEqual( [os.path.join(self.paths.run_dir, 'status.json')], checked_files) expected = dedent('''\ status: disabled detail: disabled for some reason ''') self.assertEqual(expected, m_stdout.getvalue()) def test_status_returns_running_on_no_results_json(self): '''Report running when status.json exists but result.json does not.''' result_file = self.tmp_path('result.json', self.new_root) write_json(self.status_file, {}) self.assertFalse( os.path.exists(result_file), 'Unexpected result.json found') cmdargs = myargs(long=False, wait=False) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(0, retcode) self.assertEqual('status: running\n', m_stdout.getvalue()) def test_status_returns_running(self): '''Report running when status exists with an unfinished stage.''' ensure_file(self.tmp_path('result.json', self.new_root)) write_json(self.status_file, {'v1': {'init': {'start': 1, 'finished': None}}}) cmdargs = myargs(long=False, wait=False) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(0, retcode) self.assertEqual('status: running\n', m_stdout.getvalue()) def test_status_returns_done(self): '''Report done when result.json exists and no stages are unfinished.''' ensure_file(self.tmp_path('result.json', self.new_root)) write_json( self.status_file, {'v1': {'stage': None, # No current stage running 'datasource': ( 'DataSourceNoCloud [seed=/var/.../seed/nocloud-net]' '[dsmode=net]'), 'blah': {'finished': 123.456}, 'init': {'errors': [], 'start': 124.567, 'finished': 125.678}, 'init-local': {'start': 123.45, 'finished': 123.46}}})
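# Added note: the unrecognized 'blah' stage above (finished, no errors)
# shows that extra stage entries in status.json are tolerated; only
# unfinished stages or recorded errors change the reported status.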
cmdargs = myargs(long=False, wait=False) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(0, retcode) self.assertEqual('status: done\n', m_stdout.getvalue()) def test_status_returns_done_long(self): '''Long format of done status includes datasource info.''' ensure_file(self.tmp_path('result.json', self.new_root)) write_json( self.status_file, {'v1': {'stage': None, 'datasource': ( 'DataSourceNoCloud [seed=/var/.../seed/nocloud-net]' '[dsmode=net]'), 'init': {'start': 124.567, 'finished': 125.678}, 'init-local': {'start': 123.45, 'finished': 123.46}}}) cmdargs = myargs(long=True, wait=False) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(0, retcode) expected = dedent('''\ status: done time: Thu, 01 Jan 1970 00:02:05 +0000 detail: DataSourceNoCloud [seed=/var/.../seed/nocloud-net][dsmode=net] ''') self.assertEqual(expected, m_stdout.getvalue()) def test_status_on_errors(self): '''Reports error when any stage has errors.''' write_json( self.status_file, {'v1': {'stage': None, 'blah': {'errors': [], 'finished': 123.456}, 'init': {'errors': ['error1'], 'start': 124.567, 'finished': 125.678}, 'init-local': {'start': 123.45, 'finished': 123.46}}}) cmdargs = myargs(long=False, wait=False) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(1, retcode) self.assertEqual('status: error\n', m_stdout.getvalue()) def test_status_on_errors_long(self): '''Long format of error status includes all error messages.''' write_json( self.status_file, {'v1': {'stage': None, 'datasource': ( 'DataSourceNoCloud [seed=/var/.../seed/nocloud-net]' '[dsmode=net]'), 'init': {'errors': ['error1'], 'start': 124.567, 'finished': 125.678}, 'init-local': {'errors': ['error2', 'error3'], 'start': 123.45, 'finished': 123.46}}}) cmdargs = myargs(long=True, wait=False) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(1, retcode) expected = dedent('''\ status: error time: Thu, 01 Jan 1970 00:02:05 +0000 detail: error1 error2 error3 ''') self.assertEqual(expected, m_stdout.getvalue()) def test_status_returns_running_long_format(self): '''Long format reports the stage in which we are running.''' write_json( self.status_file, {'v1': {'stage': 'init', 'init': {'start': 124.456, 'finished': None}, 'init-local': {'start': 123.45, 'finished': 123.46}}}) cmdargs = myargs(long=True, wait=False) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(0, retcode) expected = dedent('''\ status: running time: Thu, 01 Jan 1970 00:02:04 +0000 detail: Running in stage: init ''') self.assertEqual(expected, 
m_stdout.getvalue()) def test_status_wait_blocks_until_done(self): '''Specifying wait will poll every 1/4 second until done state.''' running_json = { 'v1': {'stage': 'init', 'init': {'start': 124.456, 'finished': None}, 'init-local': {'start': 123.45, 'finished': 123.46}}} done_json = { 'v1': {'stage': None, 'init': {'start': 124.456, 'finished': 125.678}, 'init-local': {'start': 123.45, 'finished': 123.46}}} self.sleep_calls = 0 def fake_sleep(interval): self.assertEqual(0.25, interval) self.sleep_calls += 1 if self.sleep_calls == 2: write_json(self.status_file, running_json) elif self.sleep_calls == 3: write_json(self.status_file, done_json) result_file = self.tmp_path('result.json', self.new_root) ensure_file(result_file) cmdargs = myargs(long=False, wait=True) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'sleep': {'side_effect': fake_sleep}, '_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(0, retcode) self.assertEqual(4, self.sleep_calls) self.assertEqual('....\nstatus: done\n', m_stdout.getvalue()) def test_status_wait_blocks_until_error(self): '''Specifying wait will poll every 1/4 second until error state.''' running_json = { 'v1': {'stage': 'init', 'init': {'start': 124.456, 'finished': None}, 'init-local': {'start': 123.45, 'finished': 123.46}}} error_json = { 'v1': {'stage': None, 'init': {'errors': ['error1'], 'start': 124.456, 'finished': 125.678}, 'init-local': {'start': 123.45, 'finished': 123.46}}} self.sleep_calls = 0 def fake_sleep(interval): self.assertEqual(0.25, interval) self.sleep_calls += 1 if self.sleep_calls == 2: write_json(self.status_file, running_json) elif self.sleep_calls == 3: write_json(self.status_file, error_json) cmdargs = myargs(long=False, wait=True) with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: retcode = wrap_and_call( 'cloudinit.cmd.status', {'sleep': {'side_effect': fake_sleep}, '_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.handle_status_args, 'ignored', cmdargs) self.assertEqual(1, retcode) self.assertEqual(4, self.sleep_calls) self.assertEqual('....\nstatus: error\n', m_stdout.getvalue()) def test_status_main(self): '''status.main can be run as a standalone script.''' write_json(self.status_file, {'v1': {'init': {'start': 1, 'finished': None}}}) with self.assertRaises(SystemExit) as context_manager: with mock.patch('sys.stdout', new_callable=StringIO) as m_stdout: wrap_and_call( 'cloudinit.cmd.status', {'sys.argv': {'new': ['status']}, 'sys.exit': {'side_effect': self.sys_exit}, '_is_cloudinit_disabled': (False, ''), 'Init': {'side_effect': self.init_class}}, status.main) self.assertEqual(0, context_manager.exception.code) self.assertEqual('status: running\n', m_stdout.getvalue()) # vi: ts=4 expandtab syntax=python cloud-init-18.2-14-g6d48d265/cloudinit/config/000077500000000000000000000000001326573344200204405ustar00rootroot00000000000000cloud-init-18.2-14-g6d48d265/cloudinit/config/__init__.py000066400000000000000000000026351326573344200225570ustar00rootroot00000000000000# Copyright (C) 2008-2010 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # # Author: Chuck Short # Author: Juerg Haefliger # # This file is part of cloud-init. See LICENSE file for license information. 
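# Added usage sketch (illustrative comments only, nothing here executes):
# the helpers below canonicalize user-supplied config module names, e.g.:
#   form_module_name('set-hostname')  -> 'cc_set_hostname'
#   form_module_name('cc_bootcmd.py') -> 'cc_bootcmd'
#   form_module_name('')              -> None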
from cloudinit.settings import (PER_INSTANCE, FREQUENCIES) from cloudinit import log as logging LOG = logging.getLogger(__name__) # This prefix is used to reduce the chance # that, when importing, we will find something # else with the same name in the lookup path... MOD_PREFIX = "cc_" def form_module_name(name): canon_name = name.replace("-", "_") if canon_name.lower().endswith(".py"): canon_name = canon_name[0:(len(canon_name) - 3)] canon_name = canon_name.strip() if not canon_name: return None if not canon_name.startswith(MOD_PREFIX): canon_name = '%s%s' % (MOD_PREFIX, canon_name) return canon_name def fixup_module(mod, def_freq=PER_INSTANCE): if not hasattr(mod, 'frequency'): setattr(mod, 'frequency', def_freq) else: freq = mod.frequency if freq and freq not in FREQUENCIES: LOG.warning("Module %s has an unknown frequency %s", mod, freq) if not hasattr(mod, 'distros'): setattr(mod, 'distros', []) if not hasattr(mod, 'osfamilies'): setattr(mod, 'osfamilies', []) return mod # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_apt_configure.py000066400000000000000000001014031326573344200243030ustar00rootroot00000000000000# Copyright (C) 2009-2010 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # # Author: Scott Moser # Author: Juerg Haefliger # # This file is part of cloud-init. See LICENSE file for license information. """ Apt Configure ------------- **Summary:** configure apt This module handles both configuration of apt options and adding source lists. There are configuration options such as ``apt_get_wrapper`` and ``apt_get_command`` that control how cloud-init invokes apt-get. These configuration options are handled on a per-distro basis, so consult documentation for cloud-init's distro support for instructions on using these config options. .. note:: To ensure that apt configuration is valid yaml, any strings containing special characters, especially ``:``, should be quoted. .. note:: For more information about apt configuration, see the ``Additional apt configuration`` example. **Preserve sources.list:** By default, cloud-init will generate a new sources list in ``/etc/apt/sources.list.d`` based on any changes specified in cloud config. To disable this behavior and preserve the sources list from the pristine image, set ``preserve_sources_list`` to ``true``. .. note:: The ``preserve_sources_list`` option overrides all other config keys that would alter ``sources.list`` or ``sources.list.d``, **except** for additional sources to be added to ``sources.list.d``. **Disable source suites:** Entries in the sources list can be disabled using ``disable_suites``, which takes a list of suites to be disabled. If the string ``$RELEASE`` is present in a suite in the ``disable_suites`` list, it will be replaced with the release name. If a suite specified in ``disable_suites`` is not present in ``sources.list`` it will be ignored. For convenience, several aliases are provided for ``disable_suites``: - ``updates`` => ``$RELEASE-updates`` - ``backports`` => ``$RELEASE-backports`` - ``security`` => ``$RELEASE-security`` - ``proposed`` => ``$RELEASE-proposed`` - ``release`` => ``$RELEASE`` .. note:: When a suite is disabled using ``disable_suites``, its entry in ``sources.list`` is not deleted; it is just commented out. **Configure primary and security mirrors:** The primary and security archive mirrors can be specified using the ``primary`` and ``security`` keys, respectively.
Both the ``primary`` and ``security`` keys take a list of configs, allowing mirrors to be specified on a per-architecture basis. Each config is a dictionary which must have an entry for ``arches``, specifying which architectures that config entry is for. The keyword ``default`` applies to any architecture not explicitly listed. The mirror url can be specified with the ``uri`` key, or a list of mirrors to check can be provided in order, with the first mirror that can be resolved being selected. This allows the same configuration to be used in different environments, with different hosts used for a local apt mirror. If no mirror is provided by ``uri`` or ``search``, ``search_dns`` may be used to search for dns names in the format ``<distro>-mirror`` in each of the following: - fqdn of this host per cloud metadata - localdomain - domains listed in ``/etc/resolv.conf`` If there is a dns entry for ``<distro>-mirror``, then it is assumed that there is a distro mirror at ``http://<distro>-mirror.<domain>/<distro>``. If the ``primary`` key is defined, but not the ``security`` key, then the configuration for ``primary`` is also used for ``security``. If ``search_dns`` is used for the ``security`` key, the search pattern will be ``<distro>-security-mirror``. If no mirrors are specified, or all lookups fail, then default mirrors defined in the datasource are used. If none are present in the datasource either, the following defaults are used: - primary: ``http://archive.ubuntu.com/ubuntu`` - security: ``http://security.ubuntu.com/ubuntu`` **Specify sources.list template:** A custom template for rendering ``sources.list`` can be specified with ``sources_list``. If no ``sources_list`` template is given, cloud-init will use a sane default. Within this template, the following strings will be replaced with the appropriate values: - ``$MIRROR`` - ``$RELEASE`` - ``$PRIMARY`` - ``$SECURITY`` **Pass configuration to apt:** Apt configuration can be specified using ``conf``. Configuration is specified as a string. For multiline apt configuration, make sure to follow yaml syntax. **Configure apt proxy:** Proxy configuration for apt can be specified using ``conf``, but proxy config keys also exist for convenience. The proxy config keys, ``http_proxy``, ``ftp_proxy``, and ``https_proxy`` may be used to specify a proxy for http, ftp and https protocols respectively. The ``proxy`` key also exists as an alias for ``http_proxy``. The proxy url is specified in the format ``http://[[user][:pass]@]host[:port]/``. **Add apt repos by regex:** All source entries in ``apt-sources`` that match the regex in ``add_apt_repo_match`` will be added to the system using ``add-apt-repository``. If ``add_apt_repo_match`` is not specified, it defaults to ``^[\\w-]+:\\w`` **Add source list entries:** Source list entries can be specified as a dictionary under the ``sources`` config key, with each key in the dict representing a different source file. The key of each source entry will be used as an id that can be referenced in other config entries, as well as the filename for the source's configuration under ``/etc/apt/sources.list.d``. If the name does not end with ``.list``, it will be appended. If there is no configuration for a key in ``sources``, no file will be written, but the key may still be referred to as an id in other ``sources`` entries.
Each entry under ``sources`` is a dictionary which may contain any of the following optional keys: - ``source``: a sources.list entry (some variable replacements apply) - ``keyid``: a key to import via shortid or fingerprint - ``key``: a raw PGP key - ``keyserver``: alternate keyserver to pull ``keyid`` key from The ``source`` key supports variable replacements for the following strings: - ``$MIRROR`` - ``$PRIMARY`` - ``$SECURITY`` - ``$RELEASE`` **Internal name:** ``cc_apt_configure`` **Module frequency:** per instance **Supported distros:** ubuntu, debian **Config keys**:: apt: preserve_sources_list: <true/false> disable_suites: - $RELEASE-updates - backports - $RELEASE - mysuite primary: - arches: - amd64 - i386 - default uri: "http://us.archive.ubuntu.com/ubuntu" search: - "http://cool.but-sometimes-unreachable.com/ubuntu" - "http://us.archive.ubuntu.com/ubuntu" search_dns: <true/false> - arches: - s390x - arm64 uri: "http://archive-to-use-for-arm64.example.com/ubuntu" security: - arches: - default search_dns: true sources_list: | deb $MIRROR $RELEASE main restricted deb-src $MIRROR $RELEASE main restricted deb $PRIMARY $RELEASE universe restricted deb $SECURITY $RELEASE-security multiverse debconf_selections: set1: the-package the-package/some-flag boolean true conf: | APT { Get { Assume-Yes "true"; Fix-Broken "true"; } } proxy: "http://[[user][:pass]@]host[:port]/" http_proxy: "http://[[user][:pass]@]host[:port]/" ftp_proxy: "ftp://[[user][:pass]@]host[:port]/" https_proxy: "https://[[user][:pass]@]host[:port]/" sources: source1: keyid: "keyid" keyserver: "keyserverurl" source: "deb http://<url>/ xenial main" source2: source: "ppa:<ppa-name>" source3: source: "deb $MIRROR $RELEASE multiverse" key: | ------BEGIN PGP PUBLIC KEY BLOCK------- ------END PGP PUBLIC KEY BLOCK------- """ import glob import os import re from cloudinit import gpg from cloudinit import log as logging from cloudinit import templater from cloudinit import util LOG = logging.getLogger(__name__) # this will match 'XXX:YYY' (i.e., 'cloud-archive:foo' or 'ppa:bar') ADD_APT_REPO_MATCH = r"^[\w-]+:\w" # place where apt stores cached repository data APT_LISTS = "/var/lib/apt/lists" # Files to store proxy information APT_CONFIG_FN = "/etc/apt/apt.conf.d/94cloud-init-config" APT_PROXY_FN = "/etc/apt/apt.conf.d/90cloud-init-aptproxy" # Default keyserver to use DEFAULT_KEYSERVER = "keyserver.ubuntu.com" # Default archive mirrors PRIMARY_ARCH_MIRRORS = {"PRIMARY": "http://archive.ubuntu.com/ubuntu/", "SECURITY": "http://security.ubuntu.com/ubuntu/"} PORTS_MIRRORS = {"PRIMARY": "http://ports.ubuntu.com/ubuntu-ports", "SECURITY": "http://ports.ubuntu.com/ubuntu-ports"} PRIMARY_ARCHES = ['amd64', 'i386'] PORTS_ARCHES = ['s390x', 'arm64', 'armhf', 'powerpc', 'ppc64el'] def get_default_mirrors(arch=None, target=None): """returns the default mirrors for the target. These depend on the architecture, for more see: https://wiki.ubuntu.com/UbuntuDevelopment/PackageArchive#Ports""" if arch is None: arch = util.get_architecture(target) if arch in PRIMARY_ARCHES: return PRIMARY_ARCH_MIRRORS.copy() if arch in PORTS_ARCHES: return PORTS_MIRRORS.copy() raise ValueError("No default mirror known for arch %s" % arch) def handle(name, ocfg, cloud, log, _): """process the config for apt_config.
This can be called from curthooks if a global apt config was provided or via the "apt" standalone command.""" # keeping code close to curtin codebase via entry handler target = None if log is not None: global LOG LOG = log # feed back converted config, but only work on the subset under 'apt' ocfg = convert_to_v3_apt_format(ocfg) cfg = ocfg.get('apt', {}) if not isinstance(cfg, dict): raise ValueError( "Expected dictionary for 'apt' config, found {config_type}".format( config_type=type(cfg))) apply_debconf_selections(cfg, target) apply_apt(cfg, cloud, target) def _should_configure_on_empty_apt(): # if no config was provided, should apt configuration be done? if util.system_is_snappy(): return False, "system is snappy." if not (util.which('apt-get') or util.which('apt')): return False, "no apt commands." return True, "Apt is available." def apply_apt(cfg, cloud, target): # cfg is the 'apt' top level dictionary already in 'v3' format. if not cfg: should_config, msg = _should_configure_on_empty_apt() if not should_config: LOG.debug("Nothing to do: No apt config and %s", msg) return LOG.debug("handling apt config: %s", cfg) release = util.lsb_release(target=target)['codename'] arch = util.get_architecture(target) mirrors = find_apt_mirror_info(cfg, cloud, arch=arch) LOG.debug("Apt Mirror info: %s", mirrors) if util.is_false(cfg.get('preserve_sources_list', False)): generate_sources_list(cfg, release, mirrors, cloud) rename_apt_lists(mirrors, target) try: apply_apt_config(cfg, APT_PROXY_FN, APT_CONFIG_FN) except (IOError, OSError): LOG.exception("Failed to apply proxy or apt config info:") # Process 'apt_source -> sources {dict}' if 'sources' in cfg: params = mirrors params['RELEASE'] = release params['MIRROR'] = mirrors["MIRROR"] matcher = None matchcfg = cfg.get('add_apt_repo_match', ADD_APT_REPO_MATCH) if matchcfg: matcher = re.compile(matchcfg).search add_apt_sources(cfg['sources'], cloud, target=target, template_params=params, aa_repo_match=matcher) def debconf_set_selections(selections, target=None): util.subp(['debconf-set-selections'], data=selections, target=target, capture=True) def dpkg_reconfigure(packages, target=None): # For any packages that are already installed, but have preseed data # we populate the debconf database, but the filesystem configuration # would be preferred on a subsequent dpkg-reconfigure. # so, what we have to do is "know" information about certain packages # to unconfigure them. 
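# Added example (grounded in this file): for the 'cloud-init' package the
# registered cleaner removes /etc/cloud/cloud.cfg.d/*dpkg* files before the
# dpkg-reconfigure below (see CONFIG_CLEANERS at the bottom of this file).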
unhandled = [] to_config = [] for pkg in packages: if pkg in CONFIG_CLEANERS: LOG.debug("unconfiguring %s", pkg) CONFIG_CLEANERS[pkg](target) to_config.append(pkg) else: unhandled.append(pkg) if len(unhandled): LOG.warning("The following packages were installed and preseeded, " "but cannot be unconfigured: %s", unhandled) if len(to_config): util.subp(['dpkg-reconfigure', '--frontend=noninteractive'] + list(to_config), data=None, target=target, capture=True) def apply_debconf_selections(cfg, target=None): """apply_debconf_selections - push content to debconf""" # debconf_selections: # set1: | # cloud-init cloud-init/datasources multiselect MAAS # set2: pkg pkg/value string bar selsets = cfg.get('debconf_selections') if not selsets: LOG.debug("debconf_selections was not set in config") return selections = '\n'.join( [selsets[key] for key in sorted(selsets.keys())]) debconf_set_selections(selections.encode() + b"\n", target=target) # get a complete list of packages listed in input pkgs_cfgd = set() for key, content in selsets.items(): for line in content.splitlines(): if line.startswith("#"): continue pkg = re.sub(r"[:\s].*", "", line) pkgs_cfgd.add(pkg) pkgs_installed = util.get_installed_packages(target) LOG.debug("pkgs_cfgd: %s", pkgs_cfgd) need_reconfig = pkgs_cfgd.intersection(pkgs_installed) if len(need_reconfig) == 0: LOG.debug("no need for reconfig") return dpkg_reconfigure(need_reconfig, target=target) def clean_cloud_init(target): """clean out any local cloud-init config""" flist = glob.glob( util.target_path(target, "/etc/cloud/cloud.cfg.d/*dpkg*")) LOG.debug("cleaning cloud-init config from: %s", flist) for dpkg_cfg in flist: os.unlink(dpkg_cfg) def mirrorurl_to_apt_fileprefix(mirror): """mirrorurl_to_apt_fileprefix Convert a mirror url to the file prefix used by apt on disk to store cache information for that mirror. To do so: - take off ???:// - drop trailing / - convert any remaining / in the string to _""" string = mirror if string.endswith("/"): string = string[0:-1] pos = string.find("://") if pos >= 0: string = string[pos + 3:] string = string.replace("/", "_") return string def rename_apt_lists(new_mirrors, target=None): """rename_apt_lists - rename apt lists to preserve old cache data""" default_mirrors = get_default_mirrors(util.get_architecture(target)) pre = util.target_path(target, APT_LISTS) for (name, omirror) in default_mirrors.items(): nmirror = new_mirrors.get(name) if not nmirror: continue oprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(omirror) nprefix = pre + os.path.sep + mirrorurl_to_apt_fileprefix(nmirror) if oprefix == nprefix: continue olen = len(oprefix) for filename in glob.glob("%s_*" % oprefix): newname = "%s%s" % (nprefix, filename[olen:]) LOG.debug("Renaming apt list %s to %s", filename, newname) try: os.rename(filename, newname) except OSError: # since this is a best effort task, warn but don't fail LOG.warning("Failed to rename apt list:", exc_info=True) def mirror_to_placeholder(tmpl, mirror, placeholder): """mirror_to_placeholder replace the specified mirror in a template with a placeholder string Checks for existence of the expected mirror and warns if not found""" if mirror not in tmpl: LOG.warning("Expected mirror '%s' not found in: %s", mirror, tmpl) return tmpl.replace(mirror, placeholder) def map_known_suites(suite): """there are a few default names which will be auto-extended.
This comes at the cost of not being able to use those names literally as suites, but on the other hand it increases the readability of the config quite a lot""" mapping = {'updates': '$RELEASE-updates', 'backports': '$RELEASE-backports', 'security': '$RELEASE-security', 'proposed': '$RELEASE-proposed', 'release': '$RELEASE'} try: retsuite = mapping[suite] except KeyError: retsuite = suite return retsuite def disable_suites(disabled, src, release): """reads the config for suites to be disabled and removes those from the template""" if not disabled: return src retsrc = src for suite in disabled: suite = map_known_suites(suite) releasesuite = templater.render_string(suite, {'RELEASE': release}) LOG.debug("Disabling suite %s as %s", suite, releasesuite) newsrc = "" for line in retsrc.splitlines(True): if line.startswith("#"): newsrc += line continue # sources.list allows options in cols[1] which can have spaces # so the actual suite can be [2] or later. example: # deb [ arch=amd64,armel k=v ] http://example.com/debian cols = line.split() if len(cols) > 1: pcol = 2 if cols[1].startswith("["): for col in cols[1:]: pcol += 1 if col.endswith("]"): break if cols[pcol] == releasesuite: line = '# suite disabled by cloud-init: %s' % line newsrc += line retsrc = newsrc return retsrc def generate_sources_list(cfg, release, mirrors, cloud): """generate_sources_list create a sources.list file based on a custom or default template by replacing mirrors and release in the template""" aptsrc = "/etc/apt/sources.list" params = {'RELEASE': release, 'codename': release} for k in mirrors: params[k] = mirrors[k] params[k.lower()] = mirrors[k] tmpl = cfg.get('sources_list', None) if tmpl is None: LOG.info("No custom template provided, fall back to builtin") template_fn = cloud.get_template_filename('sources.list.%s' % (cloud.distro.name)) if not template_fn: template_fn = cloud.get_template_filename('sources.list') if not template_fn: LOG.warning("No template found, " "not rendering /etc/apt/sources.list") return tmpl = util.load_file(template_fn) rendered = templater.render_string(tmpl, params) disabled = disable_suites(cfg.get('disable_suites'), rendered, release) util.write_file(aptsrc, disabled, mode=0o644) def add_apt_key_raw(key, target=None): """ actual adding of a key as defined in key argument to the system """ LOG.debug("Adding key:\n'%s'", key) try: util.subp(['apt-key', 'add', '-'], data=key.encode(), target=target) except util.ProcessExecutionError: LOG.exception("failed to add apt GPG Key to apt keyring") raise def add_apt_key(ent, target=None): """ Add key to the system as defined in ent (if any). Supports raw keys or keyids. The latter will, as a first step, be fetched to get the raw key. """ if 'keyid' in ent and 'key' not in ent: keyserver = DEFAULT_KEYSERVER if 'keyserver' in ent: keyserver = ent['keyserver'] ent['key'] = gpg.getkeybyid(ent['keyid'], keyserver) if 'key' in ent: add_apt_key_raw(ent['key'], target) def update_packages(cloud): cloud.distro.update_package_sources() def add_apt_sources(srcdict, cloud, target=None, template_params=None, aa_repo_match=None): """ add entries in /etc/apt/sources.list.d for each abbreviated sources.list entry in 'srcdict'.
When rendering the template, also include the values in the 'template_params' dictionary. """ if template_params is None: template_params = {} if aa_repo_match is None: raise ValueError('did not get a valid repo matcher') if not isinstance(srcdict, dict): raise TypeError('unknown apt format: %s' % (srcdict)) for filename in srcdict: ent = srcdict[filename] LOG.debug("adding source/key '%s'", ent) if 'filename' not in ent: ent['filename'] = filename add_apt_key(ent, target) if 'source' not in ent: continue source = ent['source'] source = templater.render_string(source, template_params) if not ent['filename'].startswith("/"): ent['filename'] = os.path.join("/etc/apt/sources.list.d/", ent['filename']) if not ent['filename'].endswith(".list"): ent['filename'] += ".list" if aa_repo_match(source): try: util.subp(["add-apt-repository", source], target=target) except util.ProcessExecutionError: LOG.exception("add-apt-repository failed.") raise continue sourcefn = util.target_path(target, ent['filename']) try: contents = "%s\n" % (source) util.write_file(sourcefn, contents, omode="a") except IOError as detail: LOG.exception("failed write to file %s: %s", sourcefn, detail) raise update_packages(cloud) return def convert_v1_to_v2_apt_format(srclist): """convert v1 apt format to v2 (dict in apt_sources)""" srcdict = {} if isinstance(srclist, list): LOG.debug("apt config: convert V1 to V2 format (source list to dict)") for srcent in srclist: if 'filename' not in srcent: # entries without a filename would collide on one file; for # compatibility we need them all processed, so they must not # share the same dictionary key srcent['filename'] = "cloud_config_sources.list" key = util.rand_dict_key(srcdict, "cloud_config_sources.list") else: # all with filename use that as key (matching new format) key = srcent['filename'] srcdict[key] = srcent elif isinstance(srclist, dict): srcdict = srclist else: raise ValueError("unknown apt_sources format") return srcdict def convert_key(oldcfg, aptcfg, oldkey, newkey): """convert an old key to the new one if the old one exists. Returns True if a key was found and converted""" if oldcfg.get(oldkey, None) is not None: aptcfg[newkey] = oldcfg.get(oldkey) del oldcfg[oldkey] return True return False def convert_mirror(oldcfg, aptcfg): """convert old apt_mirror keys into the new more advanced mirror spec""" keymap = [('apt_mirror', 'uri'), ('apt_mirror_search', 'search'), ('apt_mirror_search_dns', 'search_dns')] converted = False newmcfg = {'arches': ['default']} for oldkey, newkey in keymap: if convert_key(oldcfg, newmcfg, oldkey, newkey): converted = True # only insert new style config if anything was converted if converted: aptcfg['primary'] = [newmcfg] def convert_v2_to_v3_apt_format(oldcfg): """convert old to new keys and adapt restructured mirror spec""" mapoldkeys = {'apt_sources': 'sources', 'apt_mirror': None, 'apt_mirror_search': None, 'apt_mirror_search_dns': None, 'apt_proxy': 'proxy', 'apt_http_proxy': 'http_proxy', 'apt_ftp_proxy': 'ftp_proxy', 'apt_https_proxy': 'https_proxy', 'apt_preserve_sources_list': 'preserve_sources_list', 'apt_custom_sources_list': 'sources_list', 'add_apt_repo_match': 'add_apt_repo_match'} needtoconvert = [] for oldkey in mapoldkeys: if oldkey in oldcfg: if oldcfg[oldkey] in (None, ""): del oldcfg[oldkey] else: needtoconvert.append(oldkey) # no old config, so no new one to be created if not needtoconvert: return oldcfg LOG.debug("apt config: convert V2 to V3 format for keys '%s'", ", ".join(needtoconvert)) # if old AND new config are provided, prefer the new one (LP #1616831)
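# Added sketch (hypothetical values) of the conversion performed below: a
# flat V2 config such as
#   {'apt_mirror': 'http://mirror.example.com/ubuntu',
#    'apt_proxy': 'http://proxy.example.com:3128'}
# becomes the nested V3 form
#   {'apt': {'primary': [{'arches': ['default'],
#                         'uri': 'http://mirror.example.com/ubuntu'}],
#            'proxy': 'http://proxy.example.com:3128'}}
# and, when both forms are present with unequal values for the same key,
# the ValueError below is raised.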
newaptcfg = oldcfg.get('apt', None) if newaptcfg is not None: LOG.debug("apt config: V1/2 and V3 format specified, preferring V3") for oldkey in needtoconvert: newkey = mapoldkeys[oldkey] verify = oldcfg[oldkey] # drop, but keep a ref for verification del oldcfg[oldkey] if newkey is None or newaptcfg.get(newkey, None) is None: # no simple mapping or no collision on this particular key continue if verify != newaptcfg[newkey]: raise ValueError("Old and New apt format defined with unequal " "values %s vs %s @ %s" % (verify, newaptcfg[newkey], oldkey)) # return conf after clearing conflicting V1/2 keys return oldcfg # create new format from old keys aptcfg = {} # simple renames / moves under the apt key for oldkey in mapoldkeys: if mapoldkeys[oldkey] is not None: convert_key(oldcfg, aptcfg, oldkey, mapoldkeys[oldkey]) # mirrors changed in a more complex way convert_mirror(oldcfg, aptcfg) for oldkey in mapoldkeys: if oldcfg.get(oldkey, None) is not None: raise ValueError("old apt key '%s' left after conversion" % oldkey) # insert new format into config and return full cfg with only v3 content oldcfg['apt'] = aptcfg return oldcfg def convert_to_v3_apt_format(cfg): """convert the old list based format to the new dict based one. After that convert the old dict keys/format to v3 a.k.a 'new apt config'""" # V1 -> V2, the apt_sources entry from list to dict apt_sources = cfg.get('apt_sources', None) if apt_sources is not None: cfg['apt_sources'] = convert_v1_to_v2_apt_format(apt_sources) # V2 -> V3, move all former globals under the "apt" key # Restructure into new key names and mirror hierarchy cfg = convert_v2_to_v3_apt_format(cfg) return cfg def search_for_mirror(candidates): """ Search through a list of mirror urls for one that works. This needs to return quickly. """ if candidates is None: return None LOG.debug("search for mirror in candidates: '%s'", candidates) for cand in candidates: try: if util.is_resolvable_url(cand): LOG.debug("found working mirror: '%s'", cand) return cand except Exception: pass return None def search_for_mirror_dns(configured, mirrortype, cfg, cloud): """ Try to resolve a list of predefined DNS names to pick mirrors """ mirror = None if configured: mydom = "" doms = [] if mirrortype == "primary": mirrordns = "mirror" elif mirrortype == "security": mirrordns = "security-mirror" else: raise ValueError("unknown mirror type") # if we have a fqdn, then search its domain portion first (_, fqdn) = util.get_hostname_fqdn(cfg, cloud) mydom = ".".join(fqdn.split(".")[1:]) if mydom: doms.append(".%s" % mydom) doms.extend((".localdomain", "",)) mirror_list = [] distro = cloud.distro.name mirrorfmt = "http://%s-%s%s/%s" % (distro, mirrordns, "%s", distro) for post in doms: mirror_list.append(mirrorfmt % (post)) mirror = search_for_mirror(mirror_list) return mirror def update_mirror_info(pmirror, smirror, arch, cloud): """sets security mirror to primary if not defined. returns defaults if no mirrors are defined""" if pmirror is not None: if smirror is None: smirror = pmirror return {'PRIMARY': pmirror, 'SECURITY': smirror} # None specified at all, get default mirrors from cloud mirror_info = cloud.datasource.get_package_mirror_info() if mirror_info: # get_package_mirror_info() returns a dictionary with # arbitrary key/value pairs including 'primary' and 'security' keys. # caller expects dict with PRIMARY and SECURITY.
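# Added sketch: e.g. {'primary': 'http://p', 'security': 'http://s'} gains
# uppercase duplicates and is returned as
# {'primary': 'http://p', 'security': 'http://s',
#  'PRIMARY': 'http://p', 'SECURITY': 'http://s'}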
m = mirror_info.copy() m['PRIMARY'] = m['primary'] m['SECURITY'] = m['security'] return m # if neither apt nor cloud configured mirrors fall back to return get_default_mirrors(arch) def get_arch_mirrorconfig(cfg, mirrortype, arch): """out of a list of potential mirror configurations select and return the one matching the architecture (or default)""" # select the mirror specification (if any) mirror_cfg_list = cfg.get(mirrortype, None) if mirror_cfg_list is None: return None # select the specification matching the target arch default = None for mirror_cfg_elem in mirror_cfg_list: arches = mirror_cfg_elem.get("arches") if arch in arches: return mirror_cfg_elem if "default" in arches: default = mirror_cfg_elem return default def get_mirror(cfg, mirrortype, arch, cloud): """pass through the three potential stages of mirror specification. Returns None if none of them found anything, otherwise the first hit is returned""" mcfg = get_arch_mirrorconfig(cfg, mirrortype, arch) if mcfg is None: return None # directly specified mirror = mcfg.get("uri", None) # fallback to search if specified if mirror is None: # list of mirrors to try to resolve mirror = search_for_mirror(mcfg.get("search", None)) # fallback to search_dns if specified if mirror is None: # list of mirrors to try to resolve mirror = search_for_mirror_dns(mcfg.get("search_dns", None), mirrortype, cfg, cloud) return mirror def find_apt_mirror_info(cfg, cloud, arch=None): """find_apt_mirror_info find an apt_mirror given the cfg provided. It can check for separate configs of primary and security mirrors. If only primary is given, security is assumed to be equal to primary. If the generic apt_mirror is given, that defines both. """ if arch is None: arch = util.get_architecture() LOG.debug("got arch for mirror selection: %s", arch) pmirror = get_mirror(cfg, "primary", arch, cloud) LOG.debug("got primary mirror: %s", pmirror) smirror = get_mirror(cfg, "security", arch, cloud) LOG.debug("got security mirror: %s", smirror) mirror_info = update_mirror_info(pmirror, smirror, arch, cloud) # less complex replacements use only MIRROR, derive from primary mirror_info["MIRROR"] = mirror_info["PRIMARY"] return mirror_info def apply_apt_config(cfg, proxy_fname, config_fname): """apply_apt_config Applies any apt*proxy config from cfg if specified """ # Set up any apt proxy cfgs = (('proxy', 'Acquire::http::Proxy "%s";'), ('http_proxy', 'Acquire::http::Proxy "%s";'), ('ftp_proxy', 'Acquire::ftp::Proxy "%s";'), ('https_proxy', 'Acquire::https::Proxy "%s";')) proxies = [fmt % cfg.get(name) for (name, fmt) in cfgs if cfg.get(name)] if len(proxies): LOG.debug("write apt proxy info to %s", proxy_fname) util.write_file(proxy_fname, '\n'.join(proxies) + '\n') elif os.path.isfile(proxy_fname): util.del_file(proxy_fname) LOG.debug("no apt proxy configured, removed %s", proxy_fname) if cfg.get('conf', None): LOG.debug("write apt config info to %s", config_fname) util.write_file(config_fname, cfg.get('conf')) elif os.path.isfile(config_fname): util.del_file(config_fname) LOG.debug("no apt config configured, removed %s", config_fname) CONFIG_CLEANERS = { 'cloud-init': clean_cloud_init, } # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_apt_pipelining.py000066400000000000000000000046731326573344200244710ustar00rootroot00000000000000# Copyright (C) 2011 Canonical Ltd. # # Author: Ben Howard # # This file is part of cloud-init. See LICENSE file for license information.
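# Added illustration (derived from the APT_PIPE_TPL and DEFAULT_FILE
# constants below): for apt_pipelining: false this module writes
#   //Written by cloud-init per 'apt_pipelining'
#   Acquire::http::Pipeline-Depth "0";
# to /etc/apt/apt.conf.d/90cloud-init-pipelining.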
""" Apt Pipelining -------------- **Summary:** configure apt pipelining This module configures apt's ``Acquite::http::Pipeline-Depth`` option, whcih controls how apt handles HTTP pipelining. It may be useful for pipelining to be disabled, because some web servers, such as S3 do not pipeline properly (LP: #948461). The ``apt_pipelining`` config key may be set to ``false`` to disable pipelining altogether. This is the default behavior. If it is set to ``none``, ``unchanged``, or ``os``, no change will be made to apt configuration and the default setting for the distro will be used. The pipeline depth can also be manually specified by setting ``apt_pipelining`` to a number. However, this is not recommended. **Internal name:** ``cc_apt_pipelining`` **Module frequency:** per instance **Supported distros:** ubuntu, debian **Config keys**:: apt_pipelining: """ from cloudinit.settings import PER_INSTANCE from cloudinit import util frequency = PER_INSTANCE distros = ['ubuntu', 'debian'] DEFAULT_FILE = "/etc/apt/apt.conf.d/90cloud-init-pipelining" APT_PIPE_TPL = ("//Written by cloud-init per 'apt_pipelining'\n" 'Acquire::http::Pipeline-Depth "%s";\n') # Acquire::http::Pipeline-Depth can be a value # from 0 to 5 indicating how many outstanding requests APT should send. # A value of zero MUST be specified if the remote host does not properly linger # on TCP connections - otherwise data corruption will occur. def handle(_name, cfg, _cloud, log, _args): apt_pipe_value = util.get_cfg_option_str(cfg, "apt_pipelining", False) apt_pipe_value_s = str(apt_pipe_value).lower().strip() if apt_pipe_value_s == "false": write_apt_snippet("0", log, DEFAULT_FILE) elif apt_pipe_value_s in ("none", "unchanged", "os"): return elif apt_pipe_value_s in [str(b) for b in range(0, 6)]: write_apt_snippet(apt_pipe_value_s, log, DEFAULT_FILE) else: log.warn("Invalid option for apt_pipeling: %s", apt_pipe_value) def write_apt_snippet(setting, log, f_name): """Writes f_name with apt pipeline depth 'setting'.""" file_contents = APT_PIPE_TPL % (setting) util.write_file(f_name, file_contents) log.debug("Wrote %s with apt pipeline depth setting %s", f_name, setting) # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_bootcmd.py000066400000000000000000000065771326573344200231250ustar00rootroot00000000000000# Copyright (C) 2009-2011 Canonical Ltd. # Copyright (C) 2012, 2013 Hewlett-Packard Development Company, L.P. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Chad Smith # # This file is part of cloud-init. See LICENSE file for license information. """Bootcmd: run arbitrary commands early in the boot process.""" import os from textwrap import dedent from cloudinit.config.schema import ( get_schema_doc, validate_cloudconfig_schema) from cloudinit.settings import PER_ALWAYS from cloudinit import temp_utils from cloudinit import util frequency = PER_ALWAYS # The schema definition for each cloud-config module is a strict contract for # describing supported configuration parameters for each cloud-config section. # It allows cloud-config to validate and alert users to invalid or ignored # configuration options before actually attempting to deploy with said # configuration. distros = ['all'] schema = { 'id': 'cc_bootcmd', 'name': 'Bootcmd', 'title': 'Run arbitrary commands early in the boot process', 'description': dedent("""\ This module runs arbitrary commands very early in the boot process, only slightly after a boothook would run. This is very similar to a boothook, but more user friendly. 
The environment variable ``INSTANCE_ID`` will be set to the current instance id for all run commands. Commands can be specified either as lists or strings. For invocation details, see ``runcmd``. .. note:: bootcmd should only be used for things that could not be done later in the boot process."""), 'distros': distros, 'examples': [dedent("""\ bootcmd: - echo 192.168.1.130 us.archive.ubuntu.com > /etc/hosts - [ cloud-init-per, once, mymkfs, mkfs, /dev/vdb ] """)], 'frequency': PER_ALWAYS, 'type': 'object', 'properties': { 'bootcmd': { 'type': 'array', 'items': { 'oneOf': [ {'type': 'array', 'items': {'type': 'string'}}, {'type': 'string'}] }, 'additionalItems': False, # Reject items of non-string non-list 'additionalProperties': False, 'minItems': 1, 'required': [], 'uniqueItems': True } } } __doc__ = get_schema_doc(schema) # Supplement python help() def handle(name, cfg, cloud, log, _args): if "bootcmd" not in cfg: log.debug(("Skipping module named %s," " no 'bootcmd' key in configuration"), name) return validate_cloudconfig_schema(cfg, schema) with temp_utils.ExtendedTemporaryFile(suffix=".sh") as tmpf: try: content = util.shellify(cfg["bootcmd"]) tmpf.write(util.encode_text(content)) tmpf.flush() except Exception as e: util.logexc(log, "Failed to shellify bootcmd: %s", str(e)) raise try: env = os.environ.copy() iid = cloud.get_instance_id() if iid: env['INSTANCE_ID'] = str(iid) cmd = ['/bin/sh', tmpf.name] util.subp(cmd, env=env, capture=False) except Exception: util.logexc(log, "Failed to run bootcmd module %s", name) raise # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_byobu.py000077500000000000000000000061341326573344200226060ustar00rootroot00000000000000# Copyright (C) 2009-2010 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # # Author: Scott Moser # Author: Juerg Haefliger # # This file is part of cloud-init. See LICENSE file for license information. """ Byobu ----- **Summary:** enable/disable byobu system wide and for default user This module controls whether byobu is enabled or disabled system wide and for the default system user. If byobu is to be enabled, this module will ensure it is installed. Likewise, if it is to be disabled, it will be removed if installed. 
Valid configuration options for this module are: - ``enable-system``: enable byobu system wide - ``enable-user``: enable byobu for the default user - ``disable-system``: disable byobu system wide - ``disable-user``: disable byobu for the default user - ``enable``: enable byobu both system wide and for default user - ``disable``: disable byobu for all users - ``user``: alias for ``enable-user`` - ``system``: alias for ``enable-system`` **Internal name:** ``cc_byobu`` **Module frequency:** per instance **Supported distros:** ubuntu, debian **Config keys**:: byobu_by_default: """ from cloudinit.distros import ug_util from cloudinit import util distros = ['ubuntu', 'debian'] def handle(name, cfg, cloud, log, args): if len(args) != 0: value = args[0] else: value = util.get_cfg_option_str(cfg, "byobu_by_default", "") if not value: log.debug("Skipping module named %s, no 'byobu' values found", name) return if value == "user" or value == "system": value = "enable-%s" % value valid = ("enable-user", "enable-system", "enable", "disable-user", "disable-system", "disable") if value not in valid: log.warn("Unknown value %s for byobu_by_default", value) mod_user = value.endswith("-user") mod_sys = value.endswith("-system") if value.startswith("enable"): bl_inst = "install" dc_val = "byobu byobu/launch-by-default boolean true" mod_sys = True else: if value == "disable": mod_user = True mod_sys = True bl_inst = "uninstall" dc_val = "byobu byobu/launch-by-default boolean false" shcmd = "" if mod_user: (users, _groups) = ug_util.normalize_users_groups(cfg, cloud.distro) (user, _user_config) = ug_util.extract_default(users) if not user: log.warn(("No default byobu user provided, " "can not launch %s for the default user"), bl_inst) else: shcmd += " sudo -Hu \"%s\" byobu-launcher-%s" % (user, bl_inst) shcmd += " || X=$(($X+1)); " if mod_sys: shcmd += "echo \"%s\" | debconf-set-selections" % dc_val shcmd += " && dpkg-reconfigure byobu --frontend=noninteractive" shcmd += " || X=$(($X+1)); " if len(shcmd): cmd = ["/bin/sh", "-c", "%s %s %s" % ("X=0;", shcmd, "exit $X")] log.debug("Setting byobu to %s", value) util.subp(cmd, capture=False) # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_ca_certs.py000066400000000000000000000101361326573344200232430ustar00rootroot00000000000000# Author: Mike Milner # # This file is part of cloud-init. See LICENSE file for license information. """ CA Certs -------- **Summary:** add ca certificates This module adds CA certificates to ``/etc/ca-certificates.conf`` and updates the ssl cert cache using ``update-ca-certificates``. The default certificates can be removed from the system with the configuration option ``remove-defaults``. .. note:: Certificates must be specified using valid yaml. In order to specify a multiline certificate, the yaml multiline list syntax must be used. **Internal name:** ``cc_ca_certs`` **Module frequency:** per instance **Supported distros:** ubuntu, debian **Config keys**:: ca-certs: remove-defaults: <true/false> trusted: - <single line cert> - | -----BEGIN CERTIFICATE----- YOUR-ORGS-TRUSTED-CA-CERT-HERE -----END CERTIFICATE----- """ import os from cloudinit import util CA_CERT_PATH = "/usr/share/ca-certificates/" CA_CERT_FILENAME = "cloud-init-ca-certs.crt" CA_CERT_CONFIG = "/etc/ca-certificates.conf" CA_CERT_SYSTEM_PATH = "/etc/ssl/certs/" CA_CERT_FULL_PATH = os.path.join(CA_CERT_PATH, CA_CERT_FILENAME) distros = ['ubuntu', 'debian'] def update_ca_certs(): """ Updates the CA certificate cache on the current machine.
""" util.subp(["update-ca-certificates"], capture=False) def add_ca_certs(certs): """ Adds certificates to the system. To actually apply the new certificates you must also call L{update_ca_certs}. @param certs: A list of certificate strings. """ if certs: # First ensure they are strings... cert_file_contents = "\n".join([str(c) for c in certs]) util.write_file(CA_CERT_FULL_PATH, cert_file_contents, mode=0o644) # Append cert filename to CA_CERT_CONFIG file. # We have to strip the content because blank lines in the file # causes subsequent entries to be ignored. (LP: #1077020) orig = util.load_file(CA_CERT_CONFIG) cur_cont = '\n'.join([line for line in orig.splitlines() if line != CA_CERT_FILENAME]) out = "%s\n%s\n" % (cur_cont.rstrip(), CA_CERT_FILENAME) util.write_file(CA_CERT_CONFIG, out, omode="wb") def remove_default_ca_certs(): """ Removes all default trusted CA certificates from the system. To actually apply the change you must also call L{update_ca_certs}. """ util.delete_dir_contents(CA_CERT_PATH) util.delete_dir_contents(CA_CERT_SYSTEM_PATH) util.write_file(CA_CERT_CONFIG, "", mode=0o644) debconf_sel = "ca-certificates ca-certificates/trust_new_crts select no" util.subp(('debconf-set-selections', '-'), debconf_sel) def handle(name, cfg, _cloud, log, _args): """ Call to handle ca-cert sections in cloud-config file. @param name: The module name "ca-cert" from cloud.cfg @param cfg: A nested dict containing the entire cloud config contents. @param cloud: The L{CloudInit} object in use. @param log: Pre-initialized Python logger object to use for logging. @param args: Any module arguments from cloud.cfg """ # If there isn't a ca-certs section in the configuration don't do anything if "ca-certs" not in cfg: log.debug(("Skipping module named %s," " no 'ca-certs' key in configuration"), name) return ca_cert_cfg = cfg['ca-certs'] # If there is a remove-defaults option set to true, remove the system # default trusted CA certs first. if ca_cert_cfg.get("remove-defaults", False): log.debug("Removing default certificates") remove_default_ca_certs() # If we are given any new trusted CA certs to add, add them. if "trusted" in ca_cert_cfg: trusted_certs = util.get_cfg_option_list(ca_cert_cfg, "trusted") if trusted_certs: log.debug("Adding %d certificates" % len(trusted_certs)) add_ca_certs(trusted_certs) # Update the system with the new cert configuration. log.debug("Updating certificates") update_ca_certs() # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_chef.py000066400000000000000000000320711326573344200223670ustar00rootroot00000000000000# Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # # Author: Avishai Ish-Shalom # Author: Mike Moulton # Author: Juerg Haefliger # # This file is part of cloud-init. See LICENSE file for license information. """ Chef ---- **Summary:** module that configures, starts and installs chef. This module enables chef to be installed (from packages or from gems, or from omnibus). Before this occurs chef configurations are written to disk (validation.pem, client.pem, firstboot.json, client.rb), and needed chef folders/directories are created (/etc/chef and /var/log/chef and so-on). 
Then, once installation has completed successfully, chef will be started if so configured (in daemon mode or in non-daemon mode). Once that has finished (if run in non-daemon mode this will be when chef finishes converging; if run in daemon mode no further actions are possible, since chef will have forked into its own process), a post-run function can run to do finishing activities (such as removing the validation pem file). **Internal name:** ``cc_chef`` **Module frequency:** per always **Supported distros:** all **Config keys**:: chef: directories: (defaulting to /etc/chef, /var/log/chef, /var/lib/chef, /var/cache/chef, /var/backups/chef, /var/run/chef) validation_cert: (optional string to be written to file validation_key) special value 'system' means use the existing file validation_key: (optional path for validation_cert; default /etc/chef/validation.pem) firstboot_path: (path to write run_list and initial_attributes keys that should also be present in this configuration, defaults to /etc/chef/firstboot.json) exec: boolean to run or not run chef (defaults to false, unless a gem install is requested, in which case this will then default to true) chef.rb template keys (if falsey, they will be skipped and not written to /etc/chef/client.rb) chef: client_key: environment: file_backup_path: file_cache_path: json_attribs: log_level: log_location: node_name: omnibus_url: omnibus_url_retries: omnibus_version: pid_file: server_url: show_time: ssl_verify_mode: validation_cert: validation_key: validation_name: """ import itertools import json import os from cloudinit import templater from cloudinit import url_helper from cloudinit import util import six RUBY_VERSION_DEFAULT = "1.8" CHEF_DIRS = tuple([ '/etc/chef', '/var/log/chef', '/var/lib/chef', '/var/cache/chef', '/var/backups/chef', '/var/run/chef', ]) REQUIRED_CHEF_DIRS = tuple([ '/etc/chef', ]) # Used if fetching chef from a omnibus style package OMNIBUS_URL = "https://www.chef.io/chef/install.sh" OMNIBUS_URL_RETRIES = 5 CHEF_VALIDATION_PEM_PATH = '/etc/chef/validation.pem' CHEF_FB_PATH = '/etc/chef/firstboot.json' CHEF_RB_TPL_DEFAULTS = { # These are ruby symbols... 'ssl_verify_mode': ':verify_none', 'log_level': ':info', # These are not symbols...
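# Added note: get_template_params() below starts from a copy of this dict
# and overlays any matching keys from the user's 'chef' config; a key whose
# value is None is passed through as None so the template can skip it.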
'log_location': '/var/log/chef/client.log', 'validation_key': CHEF_VALIDATION_PEM_PATH, 'validation_cert': None, 'client_key': "/etc/chef/client.pem", 'json_attribs': CHEF_FB_PATH, 'file_cache_path': "/var/cache/chef", 'file_backup_path': "/var/backups/chef", 'pid_file': "/var/run/chef/client.pid", 'show_time': True, } CHEF_RB_TPL_BOOL_KEYS = frozenset(['show_time']) CHEF_RB_TPL_PATH_KEYS = frozenset([ 'log_location', 'validation_key', 'client_key', 'file_cache_path', 'json_attribs', 'file_cache_path', 'pid_file', ]) CHEF_RB_TPL_KEYS = list(CHEF_RB_TPL_DEFAULTS.keys()) CHEF_RB_TPL_KEYS.extend(CHEF_RB_TPL_BOOL_KEYS) CHEF_RB_TPL_KEYS.extend(CHEF_RB_TPL_PATH_KEYS) CHEF_RB_TPL_KEYS.extend([ 'server_url', 'node_name', 'environment', 'validation_name', ]) CHEF_RB_TPL_KEYS = frozenset(CHEF_RB_TPL_KEYS) CHEF_RB_PATH = '/etc/chef/client.rb' CHEF_EXEC_PATH = '/usr/bin/chef-client' CHEF_EXEC_DEF_ARGS = tuple(['-d', '-i', '1800', '-s', '20']) def is_installed(): if not os.path.isfile(CHEF_EXEC_PATH): return False if not os.access(CHEF_EXEC_PATH, os.X_OK): return False return True def post_run_chef(chef_cfg, log): delete_pem = util.get_cfg_option_bool(chef_cfg, 'delete_validation_post_exec', default=False) if delete_pem and os.path.isfile(CHEF_VALIDATION_PEM_PATH): os.unlink(CHEF_VALIDATION_PEM_PATH) def get_template_params(iid, chef_cfg, log): params = CHEF_RB_TPL_DEFAULTS.copy() # Allow users to overwrite any of the keys they want (if they so choose), # when a value is None, then the value will be set to None and no boolean # or string version will be populated... for (k, v) in chef_cfg.items(): if k not in CHEF_RB_TPL_KEYS: log.debug("Skipping unknown chef template key '%s'", k) continue if v is None: params[k] = None else: # This will make the value a boolean or string... if k in CHEF_RB_TPL_BOOL_KEYS: params[k] = util.get_cfg_option_bool(chef_cfg, k) else: params[k] = util.get_cfg_option_str(chef_cfg, k) # These ones are overwritten to be exact values... params.update({ 'generated_by': util.make_header(), 'node_name': util.get_cfg_option_str(chef_cfg, 'node_name', default=iid), 'environment': util.get_cfg_option_str(chef_cfg, 'environment', default='_default'), # These two are mandatory... 
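# Added note: the two keys below are read with direct indexing, so a chef
# config missing 'server_url' or 'validation_name' raises KeyError here.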
def handle(name, cfg, cloud, log, _args):
    """Handler method activated by cloud-init."""

    # If there isn't a chef key in the configuration don't do anything
    if 'chef' not in cfg:
        log.debug(("Skipping module named %s,"
                   " no 'chef' key in configuration"), name)
        return

    chef_cfg = cfg['chef']

    # Ensure the chef directories we use exist
    chef_dirs = util.get_cfg_option_list(chef_cfg, 'directories')
    if not chef_dirs:
        chef_dirs = list(CHEF_DIRS)
    for d in itertools.chain(chef_dirs, REQUIRED_CHEF_DIRS):
        util.ensure_dir(d)

    vkey_path = chef_cfg.get('validation_key', CHEF_VALIDATION_PEM_PATH)
    vcert = chef_cfg.get('validation_cert')
    # special value 'system' means do not overwrite the file
    # but still render the template to contain 'validation_key'
    if vcert:
        if vcert != "system":
            util.write_file(vkey_path, vcert)
        elif not os.path.isfile(vkey_path):
            log.warn("chef validation_cert provided as 'system', but "
                     "validation_key path '%s' does not exist.",
                     vkey_path)

    # Create the chef config from template
    template_fn = cloud.get_template_filename('chef_client.rb')
    if template_fn:
        iid = str(cloud.datasource.get_instance_id())
        params = get_template_params(iid, chef_cfg, log)
        # Do a best effort attempt to ensure that the template values that
        # are associated with paths have their parent directory created
        # before they are used by the chef-client itself.
        param_paths = set()
        for (k, v) in params.items():
            if k in CHEF_RB_TPL_PATH_KEYS and v:
                param_paths.add(os.path.dirname(v))
        util.ensure_dirs(param_paths)
        templater.render_to_file(template_fn, CHEF_RB_PATH, params)
    else:
        log.warn("No template found, not rendering to %s", CHEF_RB_PATH)

    # Set the firstboot json
    fb_filename = util.get_cfg_option_str(chef_cfg, 'firstboot_path',
                                          default=CHEF_FB_PATH)
    if not fb_filename:
        log.info("First boot path empty, not writing first boot json file")
    else:
        initial_json = {}
        if 'run_list' in chef_cfg:
            initial_json['run_list'] = chef_cfg['run_list']
        if 'initial_attributes' in chef_cfg:
            initial_attributes = chef_cfg['initial_attributes']
            for k in list(initial_attributes.keys()):
                initial_json[k] = initial_attributes[k]
        util.write_file(fb_filename, json.dumps(initial_json))

    # Try to install chef, if it's not already installed...
    force_install = util.get_cfg_option_bool(chef_cfg,
                                             'force_install', default=False)
    if not is_installed() or force_install:
        run = install_chef(cloud, chef_cfg, log)
    elif is_installed():
        run = util.get_cfg_option_bool(chef_cfg, 'exec', default=False)
    else:
        run = False
    if run:
        run_chef(chef_cfg, log)
        post_run_chef(chef_cfg, log)


def run_chef(chef_cfg, log):
    log.debug('Running chef-client')
    cmd = [CHEF_EXEC_PATH]
    if 'exec_arguments' in chef_cfg:
        cmd_args = chef_cfg['exec_arguments']
        if isinstance(cmd_args, (list, tuple)):
            cmd.extend(cmd_args)
        elif isinstance(cmd_args, six.string_types):
            cmd.append(cmd_args)
        else:
            log.warn("Unknown type %s provided for chef"
                     " 'exec_arguments'; expected list, tuple,"
                     " or string", type(cmd_args))
            cmd.extend(CHEF_EXEC_DEF_ARGS)
    else:
        cmd.extend(CHEF_EXEC_DEF_ARGS)
    util.subp(cmd, capture=False)
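# An illustration of the commands run_chef() builds (not from the original
# source). With no 'exec_arguments' configured, chef-client is daemonized:
#
#   /usr/bin/chef-client -d -i 1800 -s 20
#
# i.e. daemon mode, converging every 1800 seconds with up to a 20 second
# splay. Supplying e.g. chef: {exec_arguments: ['--once']} replaces the
# default arguments entirely:
#
#   /usr/bin/chef-client --once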
""" if url is None: url = OMNIBUS_URL if retries is None: retries = OMNIBUS_URL_RETRIES if omnibus_version is None: args = [] else: args = ['-v', omnibus_version] content = url_helper.readurl(url=url, retries=retries).contents return util.subp_blob_in_tempfile( blob=content, args=args, basename='chef-omnibus-install', capture=False) def install_chef(cloud, chef_cfg, log): # If chef is not installed, we install chef based on 'install_type' install_type = util.get_cfg_option_str(chef_cfg, 'install_type', 'packages') run = util.get_cfg_option_bool(chef_cfg, 'exec', default=False) if install_type == "gems": # This will install and run the chef-client from gems chef_version = util.get_cfg_option_str(chef_cfg, 'version', None) ruby_version = util.get_cfg_option_str(chef_cfg, 'ruby_version', RUBY_VERSION_DEFAULT) install_chef_from_gems(ruby_version, chef_version, cloud.distro) # Retain backwards compat, by preferring True instead of False # when not provided/overriden... run = util.get_cfg_option_bool(chef_cfg, 'exec', default=True) elif install_type == 'packages': # This will install and run the chef-client from packages cloud.distro.install_packages(('chef',)) elif install_type == 'omnibus': omnibus_version = util.get_cfg_option_str(chef_cfg, "omnibus_version") install_chef_from_omnibus( url=util.get_cfg_option_str(chef_cfg, "omnibus_url"), retries=util.get_cfg_option_int(chef_cfg, "omnibus_url_retries"), omnibus_version=omnibus_version) else: log.warn("Unknown chef install type '%s'", install_type) run = False return run def get_ruby_packages(version): # return a list of packages needed to install ruby at version pkgs = ['ruby%s' % version, 'ruby%s-dev' % version] if version == "1.8": pkgs.extend(('libopenssl-ruby1.8', 'rubygems1.8')) return pkgs def install_chef_from_gems(ruby_version, chef_version, distro): distro.install_packages(get_ruby_packages(ruby_version)) if not os.path.exists('/usr/bin/gem'): util.sym_link('/usr/bin/gem%s' % ruby_version, '/usr/bin/gem') if not os.path.exists('/usr/bin/ruby'): util.sym_link('/usr/bin/ruby%s' % ruby_version, '/usr/bin/ruby') if chef_version: util.subp(['/usr/bin/gem', 'install', 'chef', '-v %s' % chef_version, '--no-ri', '--no-rdoc', '--bindir', '/usr/bin', '-q'], capture=False) else: util.subp(['/usr/bin/gem', 'install', 'chef', '--no-ri', '--no-rdoc', '--bindir', '/usr/bin', '-q'], capture=False) # vi: ts=4 expandtab cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_debug.py000066400000000000000000000060611326573344200225500ustar00rootroot00000000000000# Copyright (C) 2013 Yahoo! Inc. # # This file is part of cloud-init. See LICENSE file for license information. """ Debug ----- **Summary:** helper to debug cloud-init *internal* datastructures. This module will enable for outputting various internal information that cloud-init sources provide to either a file or to the output console/log location that this cloud-init has been configured with when running. .. note:: Log configurations are not output. 
def get_ruby_packages(version):
    # return a list of packages needed to install ruby at version
    pkgs = ['ruby%s' % version, 'ruby%s-dev' % version]
    if version == "1.8":
        pkgs.extend(('libopenssl-ruby1.8', 'rubygems1.8'))
    return pkgs


def install_chef_from_gems(ruby_version, chef_version, distro):
    distro.install_packages(get_ruby_packages(ruby_version))
    if not os.path.exists('/usr/bin/gem'):
        util.sym_link('/usr/bin/gem%s' % ruby_version, '/usr/bin/gem')
    if not os.path.exists('/usr/bin/ruby'):
        util.sym_link('/usr/bin/ruby%s' % ruby_version, '/usr/bin/ruby')
    if chef_version:
        util.subp(['/usr/bin/gem', 'install', 'chef',
                   '-v %s' % chef_version, '--no-ri',
                   '--no-rdoc', '--bindir', '/usr/bin', '-q'], capture=False)
    else:
        util.subp(['/usr/bin/gem', 'install', 'chef',
                   '--no-ri', '--no-rdoc', '--bindir',
                   '/usr/bin', '-q'], capture=False)

# vi: ts=4 expandtab

cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_debug.py

# Copyright (C) 2013 Yahoo! Inc.
#
# This file is part of cloud-init. See LICENSE file for license information.

"""
Debug
-----
**Summary:** helper to debug cloud-init *internal* datastructures.

This module enables the output of various internal information that
cloud-init sources provide, either to a file or to the console/log
location that cloud-init has been configured with when running.

.. note::
    Log configurations are not output.

**Internal name:** ``cc_debug``

**Module frequency:** per instance

**Supported distros:** all

**Config keys**::

    debug:
       verbose: true/false (defaulting to true)
       output: (location to write output, defaulting to console + log)
"""

import copy

from six import StringIO

from cloudinit import type_utils
from cloudinit import util

SKIP_KEYS = frozenset(['log_cfgs'])


def _make_header(text):
    header = StringIO()
    header.write("-" * 80)
    header.write("\n")
    header.write(text.center(80, ' '))
    header.write("\n")
    header.write("-" * 80)
    header.write("\n")
    return header.getvalue()


def _dumps(obj):
    text = util.yaml_dumps(obj, explicit_start=False, explicit_end=False)
    return text.rstrip()


def handle(name, cfg, cloud, log, args):
    """Handler method activated by cloud-init."""
    verbose = util.get_cfg_by_path(cfg, ('debug', 'verbose'), default=True)
    if args:
        # if args are provided (from cmdline) then explicitly set verbose
        out_file = args[0]
        verbose = True
    else:
        out_file = util.get_cfg_by_path(cfg, ('debug', 'output'))

    if not verbose:
        log.debug(("Skipping module named %s,"
                   " verbose printing disabled"), name)
        return
    # Clean out some keys that we just don't care about showing...
    dump_cfg = copy.deepcopy(cfg)
    for k in SKIP_KEYS:
        dump_cfg.pop(k, None)
    all_keys = list(dump_cfg)
    for k in all_keys:
        if k.startswith("_"):
            dump_cfg.pop(k, None)
    # Now dump it...
    to_print = StringIO()
    to_print.write(_make_header("Config"))
    to_print.write(_dumps(dump_cfg))
    to_print.write("\n")
    to_print.write(_make_header("MetaData"))
    to_print.write(_dumps(cloud.datasource.metadata))
    to_print.write("\n")
    to_print.write(_make_header("Misc"))
    to_print.write("Datasource: %s\n" %
                   (type_utils.obj_name(cloud.datasource)))
    to_print.write("Distro: %s\n" % (type_utils.obj_name(cloud.distro)))
    to_print.write("Hostname: %s\n" % (cloud.get_hostname(True)))
    to_print.write("Instance ID: %s\n" % (cloud.get_instance_id()))
    to_print.write("Locale: %s\n" % (cloud.get_locale()))
    to_print.write("Launch IDX: %s\n" % (cloud.launch_index))
    contents = to_print.getvalue()
    content_to_file = []
    for line in contents.splitlines():
        line = "ci-info: %s\n" % (line)
        content_to_file.append(line)
    if out_file:
        util.write_file(out_file, "".join(content_to_file), 0o644, "w")
    else:
        util.multi_log("".join(content_to_file), console=True, stderr=False)

# vi: ts=4 expandtab
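# An illustrative cloud-config for this module (not from the original
# source); the output path is a hypothetical placeholder:
#
#   #cloud-config
#   debug:
#     verbose: true
#     output: /var/log/cloud-init-debug.log
#
# Every emitted line is prefixed with "ci-info: "; when 'output' is unset,
# the dump goes to the console via util.multi_log().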
cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_disable_ec2_metadata.py

# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Scott Moser
# Author: Juerg Haefliger
#
# This file is part of cloud-init. See LICENSE file for license information.

"""
Disable EC2 Metadata
--------------------
**Summary:** disable aws ec2 metadata

This module can disable the ec2 datasource by rejecting the route to
``169.254.169.254``, the usual route to the datasource. This module is
disabled by default.

**Internal name:** ``cc_disable_ec2_metadata``

**Module frequency:** per always

**Supported distros:** all

**Config keys**::

    disable_ec2_metadata: <true/false>
"""

from cloudinit import util
from cloudinit.settings import PER_ALWAYS

frequency = PER_ALWAYS

REJECT_CMD_IF = ['route', 'add', '-host', '169.254.169.254', 'reject']
REJECT_CMD_IP = ['ip', 'route', 'add', 'prohibit', '169.254.169.254']


def handle(name, cfg, _cloud, log, _args):
    disabled = util.get_cfg_option_bool(cfg, "disable_ec2_metadata", False)
    if disabled:
        reject_cmd = None
        if util.which('ip'):
            reject_cmd = REJECT_CMD_IP
        elif util.which('ifconfig'):
            # ifconfig and route both ship in net-tools, so the presence of
            # ifconfig implies the 'route' command is available as well.
            reject_cmd = REJECT_CMD_IF
        else:
            log.error(('Neither "route" nor "ip" command found, unable to '
                       'manipulate routing table'))
            return
        util.subp(reject_cmd, capture=False)
    else:
        log.debug(("Skipping module named %s,"
                   " disabling the ec2 route not enabled"), name)

# vi: ts=4 expandtab
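# With "disable_ec2_metadata: true" in cloud-config, handle() above runs
# one of the following (preferring iproute2 when 'ip' is available):
#
#   ip route add prohibit 169.254.169.254
#   route add -host 169.254.169.254 reject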
cloud-init-18.2-14-g6d48d265/cloudinit/config/cc_disk_setup.py

# Copyright (C) 2009-2010 Canonical Ltd.
# Copyright (C) 2012 Hewlett-Packard Development Company, L.P.
#
# Author: Ben Howard
#
# This file is part of cloud-init. See LICENSE file for license information.

"""
Disk Setup
----------
**Summary:** configure partitions and filesystems

This module is able to configure simple partition tables and filesystems.

.. note::
    for more detail about configuration options for disk setup, see the disk
    setup example

For convenience, aliases can be specified for disks using the
``device_aliases`` config key, which takes a dictionary of alias: path
mappings. There are automatic aliases for ``swap`` and ``ephemeral<X>``, where
``swap`` will always refer to the active swap partition and ``ephemeral<X>``
will refer to the block device of the ephemeral image.

Disk partitioning is done using the ``disk_setup`` directive. This config
directive accepts a dictionary where each key is either a path to a block
device or an alias specified in ``device_aliases``, and each value is the
configuration options for the device. The ``table_type`` option specifies the
partition table type, either ``mbr`` or ``gpt``. The ``layout`` option
specifies how partitions on the device are to be arranged. If ``layout`` is
set to ``true``, a single partition using all the space on the device will be
created. If set to ``false``, no partitions will be created. Partitions can be
specified by providing a list to ``layout``, where each entry in the list is
either a size or a list containing a size and the numerical value for a
partition type. The size for partitions is specified in **percentage** of disk
space, not in bytes (e.g. a size of 33 would take up 1/3 of the disk space).
The ``overwrite`` option controls whether this module tries to be safe about
writing partition tables or not. If ``overwrite: false`` is set, the device
will be checked for a partition table and for a file system and if either is
found, the operation will be skipped. If ``overwrite: true`` is set, no checks
will be performed.

.. note::
    Using ``overwrite: true`` is dangerous and can lead to data loss, so
    double check that the correct device has been specified if using this
    option.

File system configuration is done using the ``fs_setup`` directive. This
config directive accepts a list of filesystem configs. The device to create
the filesystem on may be specified either as a path or as an alias in the
format ``<alias name>.<y>`` where ``<y>`` denotes the partition number on the
device. The partition can also be specified by setting ``partition`` to the
desired partition number. The ``partition`` option may also be set to
``auto``, in which case this module will search for the existence of a
filesystem matching the ``label``, ``type`` and ``device`` of the ``fs_setup``
entry and will skip creating the filesystem if one is found. The ``partition``
option may also be set to ``any``, in which case any file system that matches
``type`` and ``device`` will cause this module to skip filesystem creation for
the ``fs_setup`` entry, regardless of ``label`` matching or not. To write a
filesystem directly to a device, use ``partition: none``. A label can be
specified for the filesystem using ``label``, and the filesystem type can be
specified using ``filesystem``.

.. note::
    If specifying device using the ``<device name>.<partition number>``
    format, the value of ``partition`` will be overwritten.

.. note::
    Using ``overwrite: true`` for filesystems is dangerous and can lead to
    data loss, so double check the entry in ``fs_setup``.

.. note::
    ``replace_fs`` is ignored unless ``partition`` is ``auto`` or ``any``.

**Internal name:** ``cc_disk_setup``

**Module frequency:** per instance

**Supported distros:** all

**Config keys**::

    device_aliases:
        <alias name>: <device path>
    disk_setup:
        <alias name/path>:
            table_type: <'mbr'/'gpt'>
            layout:
                - [33,82]
                - 66
            overwrite: <true/false>
    fs_setup:
        - label: