cloud-init-0.7.7~bzr1212/ChangeLog

0.7.7:
 - open 0.7.7
 - Digital Ocean: add datasource for Digital Ocean. [Neal Shrader]
 - expose uses_systemd as a distro function (fix rhel7)
 - fix broken 'output' config (LP: #1387340)
 - begin adding cloud config module docs to config modules (LP: #1383510)
 - retain trailing eol from template files (sources.list) when rendered with jinja (LP: #1355343)
 - Only use datafiles and initsys addon outside virtualenvs
 - Fix the Digital Ocean test case on python 2.6
 - Increase the usefulness, robustness, and configurability of the chef module, and improve its documentation for users
 - Fix handling of '=' signs in ssh_utils (LP: #1391303)
 - Be more tolerant of ssh keys passed into 'ssh_authorized_keys'; allow list, tuple, set, dict, and string types, and warn on other unexpected types
 - Update to use newer/better OMNIBUS_URL for chef module
 - GCE: Allow base64 encoded user-data (LP: #1404311) [Wayne Witzell III]
 - GCE: use short hostname rather than fqdn (LP: #1383794) [Ben Howard]
 - systemd: make init stage run before login prompts are shown [Steve Langasek]
 - hostname: on first boot, apply the hostname to be the same as is written for the persistent hostname. (LP: #1246485)
 - remove usage of dmidecode on linux in favor of /sys interface [Ben Howard]
 - python3 support [Barry Warsaw, Daniel Watkins, Josh Harlow] (LP: #1247132)
 - support managing gpt partitions in disk config [Daniel Watkins]
 - Azure: utilize gpt support for ephemeral formatting [Daniel Watkins]
 - CloudStack: support fetching password from virtual router [Daniel Watkins] (LP: #1422388)
 - readurl and read_file_or_url return bytes; the user must convert as necessary
 - SmartOS: use v2 metadata service (LP: #1436417) [Daniel Watkins]
 - NoCloud: fix local datasource claiming found without explicit dsmode
 - Snappy: add support for installing and configuring snappy packages.
 - systemd: use network-online instead of network.target (LP: #1440180) [Steve Langasek]
 - Add functionality to fixate the uid of a newly added user.
 - Don't overwrite the hostname if the user has changed it after we set it.
 - GCE datasource does not handle instance ssh keys (LP: #1403617)
 - sysvinit: make cloud-init-local run before network (LP: #1275098) [Surojit Pathak]
 - Azure: do not re-set hostname if user has changed it (LP: #1375252)
 - Fix exception when running with no arguments on Python 3. [Daniel Watkins]
 - Centos: detect/expect use of systemd on centos 7. [Brian Rak]
 - Azure: remove dependency on walinux-agent [Daniel Watkins]
 - EC2: know about eu-central-1 availability-zone (LP: #1456684)
 - Azure: remove password from on-disk ovf-env.xml (LP: #1443311) [Ben Howard]
 - Doc: include information on user-data in OpenStack [Daniel Watkins]
 - Systemd: check for systemd using sd_booted semantics (LP: #1461201) [Lars Kellogg-Stedman]
 - Add an rh_subscription module to handle registration of Red Hat instances. [Brent Baude]
 - cc_apt_configure: fix importing keys under python3 (LP: #1463373)
 - cc_growpart: fix specification of 'devices' list (LP: #1465436)
 - CloudStack: fix password setting on cloudstack > 4.5.1 (LP: #1464253)
 - GCE: fix determination of availability zone (LP: #1470880)
 - ssh: generate ed25519 host keys (LP: #1461242)
 - distro mirrors: provide datasource to mirror selection code to support GCE regional mirrors.
   (LP: #1470890)
 - add udev rules that identify ephemeral device on Azure (LP: #1411582)
 - _read_dmi_syspath: fix bad log message causing unintended exception
 - rsyslog: add additional configuration mode (LP: #1478103)
 - status_wrapper in main: fix use of print_exc when handling exception
 - reporting: add reporting module for web hook or logging of events.
 - NoCloud: fix consumption of vendordata (LP: #1493453)
 - power_state_change: support 'condition' to disable or enable poweroff
 - ubuntu fan: support for config and installing of ubuntu fan (LP: #1504604)
 - Azure: support extracting SSH key values from ovf-env.xml (LP: #1506244)
 - AltCloud: fix call to udevadm settle (LP: #1507526)
 - Ubuntu templates: modify sources.list template to provide the same sources as an install from server or desktop ISO. (LP: #1177432)
 - cc_mounts: use 'nofail' if system uses systemd. (LP: #1514485)
 - Azure: get instance id from dmi instead of SharedConfig (LP: #1506187)
 - systemd/power_state: fix power_state to work even if cloud-final exited non-zero (LP: #1449318)
 - SmartOS: Add support for Joyent LX-Brand Zones (LP: #1540965) [Robert C Jennings]
 - systemd: support using systemd-detect-virt to detect container (LP: #1539016) [Martin Pitt]
 - docs: fix lock_passwd documentation [Robert C Jennings]
 - Azure: Handle escaped quotes in WALinuxAgentShim.find_endpoint. (LP: #1488891) [Dan Watkins]
 - lxd: add support for setting up lxd using 'lxd init' (LP: #1522879)
 - Add Image Customization Parser for VMware vSphere Hypervisor Support. [Sankar Tanguturi]
 - timezone: use a symlink rather than copy for /etc/localtime unless it is already a file (LP: #1543025).
 - Enable password changing via a hashed string [Alex Sirbu]
 - Added BigStep datasource [Alex Sirbu]
 - No longer run pollinate in seed_random (LP: #1554152)
 - groups: add default user to 'lxd' group. Create groups listed for a user if they do not exist. (LP: #1539317)
 - dmi data: fix failure of reading dmi data for unset dmi values
 - doc: mention label for nocloud datasource must be 'cidata' [Peter Hurley]
 - ssh_pwauth: fix module to support 'unchanged' and match behavior described in documentation [Chris Cosby]
 - quickly check to see if the previous instance id is still valid, to avoid dependency on network metadata service on every boot (LP: #1553815)
 - support network configuration in cloud-init --local with support for device naming via systemd.link.
 - FreeBSD: add support for installing packages, setting password and timezone. Change default user to 'freebsd'. [Ben Arblaster]
 - locale: list unsupported environment settings in warning (LP: #1558069)
 - disk_setup: correctly send --force to mkfs on block devices (LP: #1548772)
 - chef: fix chef install from gems (LP: #1553345)
 - systemd: do not specify After of obsolete syslog.target (LP: #1536964)
 - centos: Ensure that the resolv.conf object is written as a str (LP: #1479988)
 - chef: straighten out validation_cert and validation_key (LP: #1568940)
 - phone_home: allow usage of fqdn (LP: #1566824) [Ollie Armstrong]

0.7.6:
 - open 0.7.6
 - Enable vendordata on CloudSigma datasource (LP: #1303986)
 - Poll on /dev/ttyS1 in CloudSigma datasource only if dmidecode says we're running on cloudsigma (LP: #1316475) [Kiril Vladimiroff]
 - SmartOS test: do not require existence of /dev/ttyS1. (LP: #1316597)
 - doc: fix user-groups doc to reference plural ssh-authorized-keys (LP: #1327065) [Joern Heissler]
 - fix 'make test' in python 2.6
 - support jinja2 as a templating engine. Drop the hard requirement on cheetah. This helps in the python3 effort.
   (LP: #1219223)
 - change install path for systemd files to /lib/systemd/system [Dimitri John Ledkov]
 - change trunk debian packaging to use pybuild and drop cdbs. [Dimitri John Ledkov]
 - SeLinuxGuard: remove invalid check that looked for stat.st_mode in os.lstat.
 - do not write comments in /etc/timezone (LP: #1341710)
 - ubuntu: provide 'ubuntu-init-switch' module to aid in systemd testing.
 - status/result json: remove 'end' entry which was always null
 - systemd: make cloud-init block ssh service startup to guarantee keys are generated. [Jordan Evans] (LP: #1333920)
 - default settings: fix typo resulting in OpenStack and GCE not working unless config explicitly provided (LP: #1329583) [Garrett Holmstrom]
 - fix rendering resolv.conf if no 'options' are provided (LP: #1328953)
 - docs: fix disk-setup to reference 'table_type' [Rail Aliiev] (LP: #1313114)
 - ssh_authkey_fingerprints: fix bug that prevented disabling the module. (LP: #1340903) [Patrick Lucas]
 - no longer use pylint as a checker, fix pep8 [Jay Faulkner]
 - Openstack: do not load some urls twice.
 - FreeBSD: fix initscripts and add working config file [Harm Weites]
 - Datasource: fix broken logic to provide hostname if datasource does not provide one
 - Improved and less verbose logging.
 - resizefs: first check that device is writable.
 - configdrive: fix reading of vendor data to be like metadata service reader. [Jay Faulkner]
 - resizefs: fix broken background resizing [Jay Faulkner] (LP: #1338614)
 - cc_grub_dpkg: fix EC2 hvm instances to avoid prompt on grub update. (LP: #1336855)
 - FreeBSD: support config drive datasource [Joseph Bajin]
 - cc_mounts: support creating a swap file
 - DigitalOcean & GCE: fix get_hostname consistency

0.7.5:
 - open 0.7.5
 - Add a debug log message around import failures
 - add a 'debug' module for easily printing out some information about datasource and cloud-init [Shraddha Pandhe]
 - support running apt with 'eatmydata' via configuration token apt_get_wrapper (LP: #1236531).
 - convert paths provided in config-drive 'files' to string before writing (LP: #1260072).
 - Azure: minor changes in logging output. Ensure filenames are strings (not unicode).
 - config/cloud.cfg.d/05_logging.cfg: provide a default 'output' setting, to redirect cloud-init stderr and stdout to /var/log/cloud-init-output.log.
 - drop support for resizing partitions with parted entirely (LP: #1212492). This was broken anyway.
 - add support for vendordata in SmartOS and NoCloud datasources.
 - drop dependency on boto for crawling ec2 metadata service.
 - add 'Requires' on sudo (for OpenNebula datasource) in rpm specs, and 'Recommends' in the debian/control.in [Vlastimil Holer]
 - if mount_info reports /dev/root is a device path for /, then convert that to a device via help of kernel cmdline.
 - configdrive: consider partitions as possible datasources if they have the correct filesystem label. [Paul Querna]
 - initial freebsd support [Harm Weites]
 - fix in is_ipv4 to accept IP addresses with a '0' in them.
 - Azure: fix issue with stale data in /var/lib/waagent (LP: #1269626)
 - skip config_modules that declare themselves only verified on a set of distros. Add them to the 'unverified_modules' list to run anyway.
 - Add CloudSigma datasource [Kiril Vladimiroff]
 - Add initial support for Gentoo and Arch distributions [Nate House]
 - Add GCE datasource [Vaidas Jablonskis]
 - Add native Openstack datasource which reads openstack metadata rather than relying on EC2 data in openstack metadata service.
 - SmartOS, AltCloud: disable running on arm systems due to bug (LP: #1243287, #1285686) [Oleg Strikov]
 - Allow running a command to seed random, default is 'pollinate -q' (LP: #1286316) [Dustin Kirkland]
 - Write status to /run/cloud-init/status.json for consumption by other programs (LP: #1284439)
 - Azure: if a reboot causes ephemeral storage to be re-provisioned, then we need to re-format it. (LP: #1292648)
 - OpenNebula: support base64 encoded user-data [Enol Fernandez, Peter Kotcauer]

0.7.4:
 - fix issue mounting 'ephemeral0' if ephemeral0 was an alias for a partitioned block device with target filesystem on ephemeral0.1. (LP: #1236594)
 - fix DataSourceAzure incompatibility with 2.6 (LP: #1232175)
 - fix power_state_change config module so that the example works. Improve its documentation and add reference to 'timeout'
 - support apt-add-archive with 'cloud-archive:' format. (LP: #1244355)
 - Change SmartOS verb for availability zone (LP: #1249124)
 - documentation fix for boothooks to use 'cloud-init-per'
 - fix resizefs module by supporting kernels that do not have /proc/PID/mountinfo. (LP: #1248625) [Tim Daly Jr.]
 - fix 'make rpm' by removing 0.6.4 entry from ChangeLog (LP: #1241834)

0.7.3:
 - fix omnibus chef installer (LP: #1182265) [Chris Wing]
 - small fix for OVF datasource for iso transport on non-iso9660 filesystem
 - determine if upstart version is suitable for 'initctl reload-configuration' (LP: #1124384). If so, then invoke it.
 - support setting up instance-store disk with partition table and filesystem.
 - add Azure datasource.
 - add support for SuSE / SLES [Juerg Haefliger]
 - add a trailing carriage return to chpasswd input, which reportedly caused a problem on rhel5 if missing.
 - support individual MIME segments to be gzip compressed (LP: #1203203)
 - always finalize handlers even if processing failed (LP: #1203368)
 - support merging into cloud-config via jsonp. (LP: #1200476)
 - add datasource 'SmartOS' for Joyent Cloud. Adds a dependency on serial.
 - add 'log_time' helper to util for timing how long things take, which also reads from uptime. uptime is useful as the clock may change during boot due to ntp.
 - prefer growpart resizer to 'parted resizepart' (LP: #1212492)
 - support random data seed from config drive or azure, and a module 'seed_random' to read that and write it to /dev/urandom.
 - add OpenNebula Datasource [Vlastimil Holer]
 - add 'cc_disk_setup' config module for partitioning disks and creating filesystems. Useful if attached disks are not formatted (LP: #1218506)
 - Fix usage of libselinux-python when selinux is disabled.
   [Garrett Holmstrom]
 - multi_log: only write to /dev/console if it exists [Garrett Holmstrom]
 - config/cloud.cfg: add 'sudo' to the list of groups for the default user (LP: #1228228)
 - documentation fix for use of 'mkpasswd' [Eric Nordlund]
 - respect /etc/growroot-disabled file (LP: #1234331)

0.7.2:
 - add a debian watch file
 - add 'sudo' entry to ubuntu's default user (LP: #1080717)
 - fix resizefs module when 'noblock' was provided (LP: #1080985)
 - make sure there is no blank line before the cloud-init entry in /etc/ca-certificates.conf (LP: #1077020)
 - fix sudoers writing when entry is a string (LP: #1079002)
 - tools/write-ssh-key-fingerprints: use '-s' rather than '--stderr' option (LP: #1083715)
 - make install of puppet configurable (LP: #1090205) [Craig Tracey]
 - support omnibus installer for chef [Anatoliy Dobrosynets]
 - fix bug where cloud-config in user-data could not modify system_info settings (LP: #1090482)
 - fix CloudStack DataSource to use the Virtual Router, as described by CloudStack documentation, if it is available, by searching through dhclient lease files. If it is not available, then fall back to the default gateway. (LP: #1089989)
 - fix redaction of password field in log (LP: #1096417)
 - fix to cloud-config user setup. Previously, lock_passwd was broken and all accounts would be locked unless 'system' was given (LP: #1096423).
 - Allow 'sr0' (or sr[0-9]) to be specified without /dev/ as a source for mounts. [Vlastimil Holer]
 - allow config-drive data to come from a CD device by more correctly filtering out partitions. (LP: #1100545)
 - setup docs to be available on read-the-docs https://cloudinit.readthedocs.org/en/latest/ (LP: #1093039)
 - add HACKING file for information on contributing
 - handle the legacy 'user:' configuration better, making it affect the configured OS default user (LP: #1100920)
 - Add a resolv.conf configuration module (LP: #1100434). Currently only working on redhat systems (no support for resolvconf)
 - support grouping linux distros into "os_families". This allows a module to operate on the family (redhat or debian) rather than the distro (ubuntu, debian, fedora, rhel) (LP: #1100029)
 - fix /etc/hosts writing when templates are used (LP: #1100036)
 - add package versioning logic to package installation functionality (LP: #1108047)
 - fix documentation for write_files to correctly list 'permissions' rather than 'perms' (LP: #1111205)
 - cloud-init-container.conf: ensure /run/network exists before running ifquery
 - DataSourceNoCloud: allow user-data and meta-data to be specified in config (LP: #1115833).
 - improve debian support in sysvinit scripts, package build scripts, and split sources.list template to be distro specific.
 - support for resizing btrfs root filesystems [Blair Zajac]
 - fix issue when writing ssh keys to .ssh/authorized_keys (LP: #1136343)
 - upstart: cloud-init-nonet.conf traps the TERM signal, so that dmesg or other output does not get a 'killed by TERM signal' message.
 - support resizing partitions via growpart or parted (LP: #1136936)
 - allow specifying apt-get command in distro config ('apt_get_command')
 - support different and user-suppliable merging algorithms for cloud-config (LP: #1023179)
 - use python-requests rather than urllib2. By using recent versions of python-requests, we get https support (LP: #1067888).
 - make apt-get invoke 'dist-upgrade' rather than 'upgrade' for package_upgrade.
   (LP: #1164147)
 - improvements for systemd with Fedora 18
 - workaround 2.6 kernel issue that stopped blkid from showing /dev/sr0
 - add new, backwards compatible merging syntax so merging of cloud-config can be more useful.

0.7.1:
 - sysvinit: fix missing dependency in cloud-init job for RHEL 5.6
 - config-drive: map hostname to local-hostname (LP: #1061964)
 - landscape: install landscape-client package if not installed. only take action if cloud-config is present (LP: #1066115)
 - cc_landscape: restart landscape after install or config (LP: #1070345)
 - multipart/archive: do not fail on unknown headers in multipart mime or cloud-archive config (LP: #1065116).
 - tools/Z99-cloud-locale-test.sh: avoid warning when user's shell is zsh (LP: #1073077)
 - fix stack trace when unknown user-data input had unicode (LP: #1075756)
 - split 'apt-update-upgrade' config module into 'apt-configure' and 'package-update-upgrade-install'. The 'package-update-upgrade-install' will be a cross distro module.
 - Cleanups:
   - Remove usage of paths.join, as all code should run through util helpers
   - Fix pylint complaining about tests folder 'helpers.py' not being found
   - Add a pylintrc file that is used instead of options hidden in 'run_pylint'
 - fix bug where cloud-config from user-data could not affect system_info settings [revno 703] (LP: #1076811)
 - write fqdn to system config for rh/fedora [revno 704]
 - add yaml/cloud config examples checking tool [revno 706]
 - Fix the merging of group configuration when that group configuration is a dict => members. [revno 707]
 - add yum_add_repo configuration module for adding additional yum repos
 - fix public key importing with config-drive-v2 datasource (LP: #1077700)
 - handle renaming and fixing up of marker names (LP: #1075980) [revno 710]; this relieves that burden from the distro/packaging.
 - group config: fix how group members weren't being translated correctly when the group: [member, member...] format was used (LP: #1077245)
 - sysconfig: fix /etc/sysconfig/network to use the fully qualified domain name instead of the partially qualified domain name, which is used in the ubuntu/debian case (LP: #1076759)
 - fix how string escaping was not working when the string was a unicode string, which was causing the warning message not to be written out (LP: #1075756)
 - for boto > 0.6.0 a lazy load of the metadata was added; when cloud-init runs, the usage of this lazy loading is hidden, and since that lazy loading will be performed on future attribute access we must traverse the lazy loaded dictionary and force it to fully expand, so that if cloud-init blocks the ec2 metadata port the lazy loaded dictionary will continue working properly instead of trying to make additional url calls which will fail (LP: #1068801)
 - use a set of helper/parsing classes to perform system configuration for easier testing. (/etc/sysconfig, /etc/hostname, resolv.conf, /etc/hosts)
 - add power_state_change config module for shutting down the system after cloud-init finishes. (LP: #1064665)

0.7.0:
 - add an 'exception_cb' argument to 'wait_for_url'. If provided, this method will be called back with the exception received and the message.
 - utilize the 'exception_cb' above to modify the oauth timestamp in DataSourceMAAS requests if a 401 or 403 is received.
   (LP: #978127)
 - catch signals and exit rather than stack tracing
 - if logging fails, enable a fallback logger by patching the logging module
 - do not 'start networking' in cloud-init-nonet, but add a cloud-init-container job that runs only if in a container and emits net-device-added (LP: #1031065)
 - search only top level dns for 'instance-data' in DataSourceEc2 (LP: #1040200)
 - add support for config-drive-v2 (LP: #1037567)
 - support creating users, including the default user. [Ben Howard] (LP: #1028503)
 - add apt_reboot_if_required to reboot if an upgrade or package installation forced the need for one (LP: #1038108)
 - allow distro mirror selection to include availability-zone (LP: #1037727)
 - allow arch specific mirror selection (select ports.ubuntu.com on arm) (LP: #1028501)
 - allow specification of security mirrors (LP: #1006963)
 - add the 'None' datasource (LP: #906669), which will allow jobs to run even if there is no "real" datasource found.
 - write ssh authorized keys to console, ssh_authkey_fingerprints config module [Joshua Harlow] (LP: #1010582)
 - Added RHEVm and vSphere support as source AltCloud [Joseph VLcek]
 - add write-files module (LP: #1012854)
 - Add setuptools + cheetah to debian package build dependencies (LP: #1022101)
 - Adjust the sysvinit local script to provide 'cloud-init-local' and have the cloud-config script depend on that as well.
 - Add the 'bzr' name to all packages built
 - Reduce logging levels for certain non-critical cases to DEBUG instead of the previous level of WARNING
 - unified binary that activates the various stages
   - Now using argparse + subcommands to specify the various CLI options
 - a stage module that clearly separates the stages of the different components (also described how they are used and in what order in the new unified binary)
 - user_data is now a module that just does user data processing, while the actual activation and 'handling' of the processed user data is done via a separate set of files (and modules), with the main 'init' stage being the controller of this
 - creation of boot_hook, cloud_config, shell_script, upstart_job version 2 modules (with classes that perform their functionality) instead of those having functionality that is attached to the cloudinit object (which reduces reuse, limits future functionality, and makes testing harder)
 - removal of global config that defined paths and shared config; now this is via objects, making unit testing and global side-effects a non-issue
 - creation of a 'helpers.py': this contains an abstraction for the 'lock' like objects that the various module/handler running stages use to avoid re-running a given module/handler for a given frequency. this makes it separated from the actual usage of that object (thus helpful for testing and clear lines of usage and how the actual job is accomplished)
 - a common 'runner' class is the main entrypoint using these locks to run function objects passed in (along with their arguments) and their frequency
 - add in a 'paths' object that provides access to the previously global and/or config based paths (thus providing a single entrypoint object/type that provides path information)
 - this also adds in the ability to change the path when constructing that path 'object' and adding in additional config that can be used to alter the root paths of 'joins' (useful for testing or possibly useful in chroots?)
 - config options now available that can alter the 'write_root' and the 'read_root' when backing code uses the paths join() function
 - add a config parser subclass that will automatically add unknown sections and return default values (instead of throwing exceptions for these cases)
 - a new config merging class that will be the central object that knows how to do the common configuration merging from the various configuration sources. The order is the following:
   - cli config files override environment config files, which override instance configs, which override datasource configs, which override base configuration, which overrides default configuration.
 - remove the passing around of the 'cloudinit' object as a 'cloud' variable and instead pass around an 'interface' object that can be given to modules and handlers as their cloud access layer, while the backing of that object can be varied (good for abstraction and testing)
 - use a single set of functions to do importing of modules
 - add a function which will search for a given set of module names with a given set of attributes and return those which are found
 - refactor logging so that instead of using a single top level 'log', each component/module can use its own logger (if desired); this should be backwards compatible with handlers and config modules that used the passed in logger (it's still passed in)
 - ensure that in all places where exceptions are caught, where applicable, the util logexc() is called, so that no exceptions that may occur are dropped without first being logged (where it makes sense for this to happen)
 - add a 'requires' file that lists cloud-init dependencies; apply it in package creation (bdeb and brpm) as well as using it in the modified setup.py to ensure dependencies are installed when using that method of packaging
 - add a 'version.py' that lists the active version (in code) so that code inside cloud-init can report the version in messaging and other config files
 - cleanup of subprocess usage so that all subprocess calls go through the subp() utility method, which now has an exception type that will provide detailed information on python 2.6 and 2.7
 - forced all code loading, moving, chmod, writing files and other system level actions to go through a standard set of util functions; this greatly helps in debugging and determining exactly which system actions cloud-init is performing
 - adjust url fetching and url trying to go through a single function that reads urls in the new 'url helper' file; this helps in tracing, debugging and knowing which urls are being called and/or posted to from within cloud-init code
 - add in the sending of a 'User-Agent' header for all urls fetched that do not provide their own header mapping; derive this user-agent from the following template, 'Cloud-Init/{version}', where the version is the cloud-init version number
 - use prettytable for netinfo 'debug' printing, since it provides a standard and defined output that should be easier to parse than a custom format
 - add a set of distro specific classes that handle distro specific actions that modules and/or handler code can use as needed; this is organized into a base abstract class with child classes that implement the shared functionality. config determines exactly which subclass to load, so it can be easily extended as needed.
   - current functionality
     - network interface config file writing
     - hostname setting/updating
     - locale/timezone setting
     - updating of /etc/hosts (with templates or generically)
     - package commands (i.e. installing, removing)/mirror finding
     - interface up/down activating
 - implemented a debian + ubuntu subclass
 - implemented a redhat + fedora subclass
 - adjust the root 'cloud.cfg' file to now have distribution/path specific configuration values in it. these special configs are merged as the normal config is, but the system level config is not passed into modules/handlers; modules/handlers must go through the path and distro object instead
 - have the cloudstack datasource test the url before calling into boto, to avoid the long wait for boto to finish retrying and finally fail when the gateway meta-data address is unavailable
 - add a simple mock ec2 meta-data python based http server that can serve a very simple set of ec2 meta-data back to callers; useful for testing or for understanding what the ec2 meta-data service can provide in terms of data or functionality
 - for ssh key and authorized key file parsing, add in classes and util functions that maintain the state of individual lines, allowing for a clearer separation of parsing and modification (useful for testing and tracing)
 - add a set of 'base' init.d scripts that can be used on systems that do not have full upstart or systemd support (or support that does not match the standard fedora/ubuntu implementation); currently these are being tested on RHEL 6.2
 - separate the datasources into their own subdirectory (instead of being a top-level item); this matches how config 'modules' and user-data 'handlers' are also in their own subdirectory (thus helping new developers and others understand the code layout in a quicker manner)
 - add the building of rpms based on a new cli tool and template 'spec' file that will templatize and perform the necessary commands to create a source and binary package to be used with a cloud-init install on an 'rpm' supporting system; uses the new standard set of requires and converts those pypi requirements into a local set of package requirements (that are known to exist on RHEL systems but should also exist on fedora systems)
 - adjust the bdeb builder to be a python script (instead of a shell script) and make its 'control' file a template that takes in the standard set of pypi dependencies and uses a local mapping (known to work on ubuntu) to create the package's set of dependencies (that should also work on ubuntu-like systems)
 - pythonify a large set of various pieces of code
   - remove wrapping return statements with () when it has no effect
   - upper case all constants used
   - correctly 'case' class and method names (where applicable)
   - use os.path.join (and similar commands) instead of custom path creation
   - use 'is None' instead of the frowned upon '== None', which picks up a larger set of 'true' cases than is typically desired (i.e. for objects that have their own equality)
   - use context managers on locks, tempdir, chdir, file, selinux, umask, and unmounting commands so that these actions do not have to be closed and/or cleaned up manually in finally blocks, which is typically not done and will eventually be a bug in the future
   - use the 'abc' module for abstract base classes where possible; applied in the datasource root class, the distro root class, and the user-data v2 root class
 - when loading yaml, check that the 'root' type matches a predefined set of valid types (typically just 'dict') and throw a type error if
   a mismatch occurs; this seems to be a good idea to do when loading user config files
 - when forking a long running task (i.e. resizing a filesystem), use a new util function that will fork and then call a callback, instead of having to implement all that code in a non-shared location (thus allowing it to be used by others in the future)
 - when writing out filenames, go through a util function that will attempt to ensure that the given filename is 'filesystem' safe by replacing '/' with '_' and removing characters which do not match a given whitelist of allowed filename characters
 - for the varying usages of the 'blkid' command, make a function in the util module that can be used as the single point of entry for interaction with that command (and its results) instead of having X separate implementations
 - place the rfc 2822 time formatting and uptime repeated pieces of code in the util module as a set of functions with the names 'time_rfc2822'/'uptime'
 - separate the pylint+pep8 calling from one tool into two individual tools so that they can be called independently; add make file sections that can be used to call these independently
 - remove the support for the old style config that was previously located in '/etc/ec2-init/ec2-config.cfg'; no longer supported!
 - instead of using an altered config parser that added its own 'dummy' section in the 'mcollective' module, use configobj, which handles the parsing of config without sections better (and it also maintains comments instead of removing them)
 - use the new defaulting config parser (that will not raise errors on sections that do not exist or return errors when values are fetched that do not exist) in the 'puppet' module
 - for config 'modules', add in the ability for a module to provide a list of distro names with which it is known to work; if, when run, the name of the distro being used does not match one of those in this list, a warning will be written out saying that this module may not work correctly on this distribution
 - for all dynamically imported modules, ensure that they are fixed up before they are used by ensuring that they have certain attributes; if they do not have those attributes, they will be set to a sensible set of defaults instead
 - adjust all 'config' modules and handlers to use the adjusted util functions and the new distro objects where applicable, so that those pieces of code can benefit from the unified and enhanced functionality being provided in that util module
 - fix a potential bug whereby when a #includeonce was encountered it would enable checking of urls against a cache; if later a #include was encountered it would continue checking against that cache, instead of refetching (which would likely be the expected case)
 - add an openstack/nova based pep8 extension utility ('hacking.py') that allows for custom checks (along with the standard pep8 checks) to occur when running 'make pep8' and its derivatives
 - support relative path in AuthorizedKeysFile (LP: #970071).
 - make apt-get update run with --quiet (suitable for logging) (LP: #1012613)
 - cc_salt_minion: use package 'salt-minion' rather than 'salt' (LP: #996166)
 - use yaml.safe_load rather than yaml.load (LP: #1015818)

0.6.3:
 - add sample systemd config files [Garrett Holmstrom]
 - add Fedora support [Garrett Holmstrom] (LP: #883286)
 - fix bug in netinfo.debug_info if no net devices available (LP: #883367)
 - use python module hashlib rather than md5 to avoid deprecation warnings.
 - support configuration of mirror based on dns name ubuntu-mirror in local domain.
 - support setting of Acquire::HTTP::Proxy via 'apt_proxy'
 - DataSourceEc2: more resilient to slow metadata service
   - config change: 'retries' dropped, 'max_wait' added, timeout increased
 - close stdin in all cloud-init programs that are launched at boot (LP: #903993)
 - revert management of /etc/hosts to 0.6.1 style (LP: #890501, LP: #871966)
 - write full ssh keys to console for easy machine consumption (LP: #893400)
 - put INSTANCE_ID environment variable in bootcmd scripts
 - add 'cloud-init-per' script for easily running things with a given frequency
 - replace cloud-init-run-module with cloud-init-per
 - support configuration of landscape-client via cloud-config (LP: #857366)
 - part-handlers now get base64 decoded content rather than 2xbase64 encoded in the payload parameter. (LP: #874342)
 - add test case framework [Mike Milner] (LP: #890851)
 - fix pylint warnings [Juerg Haefliger] (LP: #914739)
 - add support for adding and deleting CA Certificates [Mike Milner] (LP: #915232)
 - in ci-info lines, use '.' to indicate empty field for easier machine reading
 - support empty lines in "#include" files (LP: #923043)
 - support configuration of salt minions [Jeff Bauer] (LP: #927795)
 - DataSourceOVF: only search for OVF data on ISO9660 filesystems (LP: #898373)
 - DataSourceConfigDrive: support getting data from openstack config drive (LP: #857378)
 - DataSourceNoCloud: support seed from external disk of ISO or vfat (LP: #857378)
 - DataSourceNoCloud: support inserting /etc/network/interfaces
 - DataSourceMAAS: add data source for Ubuntu Machines as a Service (MAAS) (LP: #942061)
 - DataSourceCloudStack: add support for CloudStack datasource [Cosmin Luta]
 - add option 'apt_pipelining' to address issue with S3 mirrors (LP: #948461) [Ben Howard]
 - warn on non-multipart, non-handled user-data [Martin Packman]
 - run resizefs in the background in order to not block boot (LP: #961226)
 - Fix bug in Chef support where validation_key was present in config, but 'validation_cert' was not (LP: #960547)
 - Provide user friendly message when an invalid locale is set [Ben Howard] (LP: #859814)
 - Support reading cloud-config from kernel command line parameter and populating local file with it, which can then provide data for DataSources
 - improve chef examples for working configurations on 11.10 and 12.04 [Lorin Hochstein] (LP: #960564)

0.6.2:
 - fix bug where update was not done unless update was explicitly set.
   It would not be run if 'upgrade' or packages were set to be installed.
 - fix bug in part-handler code that prevented working part-handlers (LP: #739694)
 - fix bug in resizefs cloud-config that would cause trace based on failure of 'blkid /dev/root' (LP: #726938)
 - convert dos formatted files to unix for user-scripts, boothooks, and upstart jobs (LP: #744965)
 - fix bug in seeding of grub dpkg configuration (LP: #752361) due to renamed devices in newer (natty) kernels (/dev/sda1 -> /dev/xvda1)
 - make metadata urls configurable, to support eucalyptus in STATIC or SYSTEM modes (LP: #761847)
 - support disabling byobu in cloud-config
 - run cc_ssh as a cloud-init module so it is guaranteed to run before ssh starts (LP: #781101)
 - make prefix for keys added to /root/.ssh/authorized_keys configurable, and add 'no-port-forwarding,no-agent-forwarding,no-X11-forwarding' to the default (LP: #798505)
 - make 'cloud-config ready' command configurable (LP: #785551)
 - make fstab fields used to 'fill in' shorthand entries configurable. This means you do not have to have 'nobootwait' in the values (LP: #785542)
 - read /etc/ssh/sshd_config for AuthorizedKeysFile rather than assuming ~/.ssh/authorized_keys (LP: #731849)
 - fix cloud-init in ubuntu lxc containers (LP: #800824)
 - sanitize hosts file for system's hostname to 127.0.1.1 (LP: #802637)
 - add chef support (cloudinit/CloudConfig/cc_chef.py) (LP: #798844)
 - do not give trace on failure to resize in lxc container (LP: #800856)
 - increase the timeout on url gets for "seedfrom" values (LP: #812646)
 - do not write entries for ephemeral0 on t1.micro (LP: #744019)
 - support 'include-once' so that expiring or one-time use urls can be used for '#include' to provide sensitive data.
 - support for passing public and private keys to mcollective via cloud-config
 - support multiple statically configured network devices, as long as all of them come up early (LP: #810044)
 - Changes to handling user data mean that:
   * boothooks will now run more than once as they were intended (and as bootcmd commands do)
   * cloud-config and user-scripts will be updated from user data every boot
 - Fix issue where 'isatty' would return true for apt-add-repository. apt-add-repository would get stdin which was attached to a terminal (/dev/console) and would thus hang when running during boot. (LP: #831505) This was done by changing all users of util.subp to have None input unless specified.
 - Add some debug info to the console when cloud-init runs. This is useful if debugging; IP and route information is printed to the console.
 - change the mechanism for handling .ssh/authorized_keys to update entries rather than appending. This ensures that the authorized_keys that are being inserted actually do something (LP: #434076, LP: #833499)
 - log warning on failure to set hostname (LP: #832175)
 - upstart/cloud-init-nonet.conf: wait for all network interfaces to be up; allow for the possibility of /var/run != /run.
 - DataSourceNoCloud, DataSourceOVF: do not provide a default hostname. This way the configured hostname of the system will be used if not provided by metadata (LP: #838280)
 - DataSourceOVF: change the default instance id to 'iid-dsovf' from 'nocloud'
 - Improve the OVF documentation, and provide a simple command line tool for creating a useful ISO file.
0.6.1:
 - fix bug in fixing permission on /var/log/cloud-init.log (LP: #704509)
 - improve comment strings in rsyslog file tools/21-cloudinit.conf
 - add previous-instance-id and previous-datasource files to datadir
 - add 'datasource' file to instance dir
 - add setting of passwords and enabling/disabling of PasswordAuthentication for sshd. By default no changes are done to sshd.
 - fix for puppet configuration options (LP: #709946) [Ryan Lane]
 - fix pickling of DataSource, which broke seeding.
 - turn resize_rootfs default to True
 - avoid mounts in DataSourceOVF if 'read' on device fails; 'mount /dev/sr0' for an empty virtual cdrom device was taking 18 seconds
 - add 'manual_cache_clean' option to select manual cleaning of the /var/lib/cloud/instance/ link, for a data source that might not be present on every boot
 - make DataSourceEc2 retries and timeout configurable
 - add helper routines for apt-get update and install
 - add 'bootcmd' like 'runcmd' to cloud-config syntax for running things early
 - move from '#opt_include' in config file format to conf_d. i.e., now files in /etc/cloud.cfg.d/ are read rather than reading '#opt_include <filename>' or '#include <filename>' in cloud.cfg
 - allow /etc/hosts to be written from hosts.tmpl, which allows getting local-hostname into /etc/hosts (LP: #720440)
 - better handle startup if there is no eth0 (LP: #714807)
 - update rather than append in puppet config [Marc Cluet]
 - add cloud-config for mcollective [Marc Cluet]

0.6.0:
 - change permissions of /var/log/cloud-init.log to accommodate syslog writing to it (LP: #704509)
 - rework of /var/lib/cloud layout
 - remove updates-check (LP: #653220)
 - support resizing / on first boot (enabled by default)
 - added support for running CloudConfig modules at cloud-init time rather than cloud-config time, and the new 'cloud_init_modules' entry in cloud.cfg to indicate which should run then. The driving force behind this was to have the rsyslog module able to run before rsyslog even runs, so that a restart would not be needed (rsyslog on ubuntu runs on 'filesystem')
 - moved setting and updating of hostname to cloud_init_modules; this allows the user to easily disable these from running. This also means:
   - the semaphore name for 'set_hostname' and 'update_hostname' changes to 'config_set_hostname' and 'config_update_hostname'
 - added cloud-config option 'hostname' for setting hostname
 - moved upstart/cloud-run-user-script.conf to upstart/cloud-final.conf; cloud-final.conf now runs cloud-config modules similar to cloud-config and cloud-init. (LP: #653271)
 - added writing of "boot-finished" to /var/lib/cloud/instance/boot-finished; this is the last thing done, indicating cloud-init is finished booting
 - writes message to console with timestamp and uptime
 - write ssh keys to console as one of the last things done; this is to ensure they don't get run off the 'get-console-output' buffer
 - user_scripts run via cloud-final, and thus the semaphore is renamed from user_scripts to config_user_scripts
 - add support for redirecting output of cloud-init, cloud-config, cloud-final via the config file, or user data config file
 - add support for posting data about the instance to a url (phone_home)
 - add minimal OVF transport (iso) support
 - make DataSources that are attempted dynamic and configurable from system config. change "cloud_type: auto" as configuration for this to 'datasource_list: [ "Ec2" ]'.
   Each of the items in that list must be modules that can be loaded by "DataSource".
 - add 'timezone' option to cloud-config (LP: #645458)
 - Added an additional archive format that can be used for multi-part input to cloud-init. This may be more user friendly than mime-multipart. See example in doc/examples/cloud-config-archive.txt (LP: #641504)
 - add support for reading Rightscale style user data (LP: #668400) and acting on it in cloud-config (cc_rightscale_userdata.py)
 - make the message on 'disable_root' more clear (LP: #672417)
 - do not require public key if private is given in ssh cloud-config (LP: #648905)

# vi: syntax=text textwidth=79

cloud-init-0.7.7~bzr1212/HACKING.rst

=====================
Hacking on cloud-init
=====================

To get changes into cloud-init, the process to follow is:

* If you have not already, be sure to sign the CCA:

  - `Canonical Contributor Agreement`_

* Get your changes into a local bzr branch. Initialize a repo, and checkout trunk (init repo is to share bzr info across multiple checkouts; it's different from git):

  - ``bzr init-repo cloud-init``
  - ``bzr branch lp:cloud-init trunk.dist``
  - ``bzr branch trunk.dist my-topic-branch``

* Commit your changes (note: you can make multiple commits, fixes, and more commits):

  - ``bzr commit``

* Check pep8 and run the tests, and address any issues:

  - ``make test pep8``

* Push to launchpad to a personal branch:

  - ``bzr push lp:~<username>/cloud-init/<branch>``

* Propose that for a merge into lp:cloud-init via web browser.

  - Open the branch in `Launchpad`_; it will typically be at ``https://code.launchpad.net/~<username>/<project>/<branch>``, e.g. https://code.launchpad.net/~smoser/cloud-init/mybranch

* Click 'Propose for merging'
* Select 'lp:cloud-init' as the target branch

Then, someone on cloud-init-dev (currently `Scott Moser`_ and `Joshua Harlow`_) will review your changes and follow up in the merge request.

Feel free to ping and/or join #cloud-init on freenode (irc) if you have any questions.

.. _Launchpad: https://launchpad.net
.. _Canonical Contributor Agreement: http://www.canonical.com/contributors
.. _Scott Moser: https://launchpad.net/~smoser
.. _Joshua Harlow: https://launchpad.net/~harlowja

cloud-init-0.7.7~bzr1212/LICENSE

                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

                            Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price.
Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.

Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.

The precise terms and conditions for copying, distribution and modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based on the Program.

To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy.
Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

  1. Source Code.

The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.

A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

The Corresponding Source for a work in source code form is that same work.

  2. Basic Permissions.

All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.

  4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".

    c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

  6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

    a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.

    d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage.
For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. 
Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. 
The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. 
It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: <program> Copyright (C) <year> <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>. cloud-init-0.7.7~bzr1212/MANIFEST.in0000644000000000000000000000024712704246716014716 0ustar 00000000000000include *.py MANIFEST.in ChangeLog global-include *.txt *.rst *.ini *.in *.conf *.cfg *.sh graft tools prune build prune dist prune .tox prune .bzr exclude .bzrignore cloud-init-0.7.7~bzr1212/Makefile0000644000000000000000000000334512704246716014622 0ustar 00000000000000CWD=$(shell pwd) PYVER ?= 3 noseopts ?= -v YAML_FILES=$(shell find cloudinit bin tests tools -name "*.yaml" -type f ) YAML_FILES+=$(shell find doc/examples -name "cloud-config*.txt" -type f ) CHANGELOG_VERSION=$(shell $(CWD)/tools/read-version) CODE_VERSION=$(shell python -c "from cloudinit import version; print version.version_string()") PIP_INSTALL := pip install ifeq ($(PYVER),3) pyflakes = pyflakes3 unittests = unittest3 yaml = yaml else ifeq ($(PYVER),2) pyflakes = pyflakes unittests = unittest else pyflakes = pyflakes pyflakes3 unittests = unittest unittest3 endif endif ifeq ($(distro),) distro = redhat endif all: check check: check_version pep8 $(pyflakes) test $(yaml) pep8: @$(CWD)/tools/run-pep8 pyflakes: @$(CWD)/tools/run-pyflakes pyflakes3: @$(CWD)/tools/run-pyflakes3 unittest: clean_pyc nosetests $(noseopts) tests/unittests unittest3: clean_pyc nosetests3 $(noseopts) tests/unittests pip-requirements: @echo "Installing cloud-init dependencies..." $(PIP_INSTALL) -r "$@.txt" -q pip-test-requirements: @echo "Installing cloud-init test dependencies..."
$(PIP_INSTALL) -r "$@.txt" -q test: $(unittests) check_version: @if [ "$(CHANGELOG_VERSION)" != "$(CODE_VERSION)" ]; then \ echo "Error: ChangeLog version $(CHANGELOG_VERSION)" \ "not equal to code version $(CODE_VERSION)"; exit 2; \ else true; fi clean_pyc: @find . -type f -name "*.pyc" -delete clean: clean_pyc rm -rf /var/log/cloud-init.log /var/lib/cloud/ yaml: @$(CWD)/tools/validate-yaml.py $(YAML_FILES) rpm: ./packages/brpm --distro $(distro) deb: ./packages/bddeb .PHONY: test pyflakes pyflakes3 clean pep8 rpm deb yaml check_version .PHONY: pip-test-requirements pip-requirements clean_pyc unittest unittest3 cloud-init-0.7.7~bzr1212/TODO.rst0000644000000000000000000000430512704246716014456 0ustar 00000000000000============================================== Things that cloud-init may do (better) someday ============================================== - Consider making a ``failsafe`` ``DataSource`` - sets the user password, writing it to console - Consider a ``previous`` ``DataSource``: if no other data source is found, fall back to the ``previous`` one that worked. - Rewrite ``cloud-init-query`` (currently not implemented) - Possibly have a ``DataSource`` expose explicit fields: - instance-id - hostname - mirror - release - ssh public keys - Remove the conversion of the ubuntu network interface format to a RH/fedora format and replace it with a top level format that uses the netcf library's format instead (which itself knows how to translate into the specific formats). See for example `netcf`_ which seems to be an active project that has this capability. - Replace the ``apt*`` modules with variants that now use the distro classes to perform distro-independent packaging commands (wherever possible). - Replace some of the LOG.debug calls with LOG.info where appropriate; right now there are really only 2 levels (``WARN`` and ``DEBUG``) - Remove the ``cc_`` prefix for config modules: either have them fully specified (i.e. ``cloudinit.config.resizefs``) or by default only look in the ``cloudinit.config`` namespace for these modules (or have a combination of the above); this avoids having to understand where your modules are coming from (which can be altered by the current python inclusion path) - Instead of just warning when a module is being run on an ``unknown`` distribution, perhaps we should not run that module in that case? Or we might want to start reworking those modules so they will run on all distributions? Or if that is not the case, then maybe we want to allow fully specified python paths for modules and start encouraging packages of ``ubuntu`` modules, packages of ``rhel`` specific modules that people can add instead of having them all under the cloud-init ``root`` tree? This might encourage more development of other modules instead of having to go edit the cloud-init code to accomplish this. ..
_netcf: https://fedorahosted.org/netcf/ cloud-init-0.7.7~bzr1212/bin/0000755000000000000000000000000012704246716013725 5ustar 00000000000000cloud-init-0.7.7~bzr1212/cloudinit/0000755000000000000000000000000012704246716015147 5ustar 00000000000000cloud-init-0.7.7~bzr1212/config/0000755000000000000000000000000012704246716014422 5ustar 00000000000000cloud-init-0.7.7~bzr1212/doc/0000755000000000000000000000000012704246716013722 5ustar 00000000000000cloud-init-0.7.7~bzr1212/packages/0000755000000000000000000000000012704246716014733 5ustar 00000000000000cloud-init-0.7.7~bzr1212/requirements.txt0000644000000000000000000000166412704246716016450 0ustar 00000000000000# Pypi requirements for cloud-init to work # Used for untemplating any files or strings with parameters. jinja2 # This is used for any pretty printing of tabular data. PrettyTable # This one is currently only used by the MAAS datasource. If that # datasource is removed, this is no longer needed oauthlib # This one is currently used only by the CloudSigma and SmartOS datasources. # If these datasources are removed, this is no longer needed pyserial # This is only needed for places where we need to support configs in a manner # that the built-in config parser is not sufficient (i.e. # when we need to preserve comments, or do not have a top-level # section)... configobj # All new style configurations are in the yaml format pyyaml # The new main entrypoint uses argparse instead of optparse argparse # Requests handles ssl correctly! requests # For patching pieces of cloud-config together jsonpatch # For Python 2/3 compatibility six cloud-init-0.7.7~bzr1212/setup.py0000755000000000000000000001542012704246716014674 0ustar 00000000000000# vi: ts=4 expandtab # # Distutils magic for ec2-init # # Copyright (C) 2009 Canonical Ltd. # Copyright (C) 2012 Yahoo! Inc. # # Author: Soren Hansen # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>.
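#
# Usage sketch (hypothetical invocation, assuming a systemd-based build
# host): the InitsysInstallData install subclass defined further down adds
# an '--init-system' option so packagers can choose which init integration
# files get installed, e.g.:
#
#     python setup.py install --init-system=systemd
#
# A comma-separated list such as '--init-system=systemd,sysvinit_deb' is
# also accepted; names outside INITSYS_TYPES raise DistutilsArgError.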
from glob import glob import os import sys import setuptools from setuptools.command.install import install from distutils.errors import DistutilsArgError import subprocess def is_f(p): return os.path.isfile(p) def tiny_p(cmd, capture=True): # Darn python 2.6 doesn't have check_output (argggg) stdout = subprocess.PIPE stderr = subprocess.PIPE if not capture: stdout = None stderr = None sp = subprocess.Popen(cmd, stdout=stdout, stderr=stderr, stdin=None, universal_newlines=True) (out, err) = sp.communicate() ret = sp.returncode if ret not in [0]: raise RuntimeError("Failed running %s [rc=%s] (%s, %s)" % (cmd, ret, out, err)) return (out, err) def pkg_config_read(library, var): fallbacks = { 'systemd': { 'systemdsystemunitdir': '/lib/systemd/system', 'systemdsystemgeneratordir': '/lib/systemd/system-generators', } } cmd = ['pkg-config', '--variable=%s' % var, library] try: (path, err) = tiny_p(cmd) except: return fallbacks[library][var] return str(path).strip() INITSYS_FILES = { 'sysvinit': [f for f in glob('sysvinit/redhat/*') if is_f(f)], 'sysvinit_freebsd': [f for f in glob('sysvinit/freebsd/*') if is_f(f)], 'sysvinit_deb': [f for f in glob('sysvinit/debian/*') if is_f(f)], 'systemd': [f for f in (glob('systemd/*.service') + glob('systemd/*.target')) if is_f(f)], 'systemd.generators': [f for f in glob('systemd/*-generator') if is_f(f)], 'upstart': [f for f in glob('upstart/*') if is_f(f)], } INITSYS_ROOTS = { 'sysvinit': '/etc/rc.d/init.d', 'sysvinit_freebsd': '/usr/local/etc/rc.d', 'sysvinit_deb': '/etc/init.d', 'systemd': pkg_config_read('systemd', 'systemdsystemunitdir'), 'systemd.generators': pkg_config_read('systemd', 'systemdsystemgeneratordir'), 'upstart': '/etc/init/', } INITSYS_TYPES = sorted([f.partition(".")[0] for f in INITSYS_ROOTS.keys()]) # Install everything in the right location and take care of Linux (default) and # FreeBSD systems. USR = "/usr" ETC = "/etc" USR_LIB_EXEC = "/usr/lib" LIB = "/lib" if os.uname()[0] == 'FreeBSD': USR = "/usr/local" USR_LIB_EXEC = "/usr/local/lib" ETC = "/usr/local/etc" elif os.path.isfile('/etc/redhat-release'): USR_LIB_EXEC = "/usr/libexec" # Avoid having datafiles installed in a virtualenv... def in_virtualenv(): try: if sys.real_prefix == sys.prefix: return False else: return True except AttributeError: return False def get_version(): cmd = ['tools/read-version'] (ver, _e) = tiny_p(cmd) return str(ver).strip() def read_requires(): cmd = ['tools/read-dependencies'] (deps, _e) = tiny_p(cmd) return str(deps).splitlines() # TODO: Is there a better way to do this?? class InitsysInstallData(install): init_system = None user_options = install.user_options + [ # This will magically show up in member variable 'init_sys' ('init-system=', None, ('init system(s) to configure (%s) [default: None]' % (", ".join(INITSYS_TYPES)))), ] def initialize_options(self): install.initialize_options(self) self.init_system = "" def finalize_options(self): install.finalize_options(self) if self.init_system and isinstance(self.init_system, str): self.init_system = self.init_system.split(",") if len(self.init_system) == 0: raise DistutilsArgError( ("You must specify one of (%s) when" " specifying init system(s)!") % (", ".join(INITSYS_TYPES))) bad = [f for f in self.init_system if f not in INITSYS_TYPES] if len(bad) != 0: raise DistutilsArgError( "Invalid --init-system: %s" % (','.join(bad))) for system in self.init_system: # add data files for anything that starts with '.' 
datakeys = [k for k in INITSYS_ROOTS if k.partition(".")[0] == system] for k in datakeys: self.distribution.data_files.append( (INITSYS_ROOTS[k], INITSYS_FILES[k])) # Force that command to reinitialize (with new file list) self.distribution.reinitialize_command('install_data', True) if in_virtualenv(): data_files = [] cmdclass = {} else: data_files = [ (ETC + '/cloud', glob('config/*.cfg')), (ETC + '/cloud/cloud.cfg.d', glob('config/cloud.cfg.d/*')), (ETC + '/cloud/templates', glob('templates/*')), (USR_LIB_EXEC + '/cloud-init', ['tools/uncloud-init', 'tools/write-ssh-key-fingerprints']), (USR + '/share/doc/cloud-init', [f for f in glob('doc/*') if is_f(f)]), (USR + '/share/doc/cloud-init/examples', [f for f in glob('doc/examples/*') if is_f(f)]), (USR + '/share/doc/cloud-init/examples/seed', [f for f in glob('doc/examples/seed/*') if is_f(f)]), (LIB + '/udev/rules.d', [f for f in glob('udev/*.rules')]), (LIB + '/udev', ['udev/cloud-init-wait']), ] # Use a subclass for install that handles # adding on the right init system configuration files cmdclass = { 'install': InitsysInstallData, } requirements = read_requires() if sys.version_info < (3,): requirements.append('cheetah') setuptools.setup( name='cloud-init', version=get_version(), description='EC2 initialisation magic', author='Scott Moser', author_email='scott.moser@canonical.com', url='http://launchpad.net/cloud-init/', packages=setuptools.find_packages(exclude=['tests']), scripts=['bin/cloud-init', 'tools/cloud-init-per'], license='GPLv3', data_files=data_files, install_requires=requirements, cmdclass=cmdclass, ) cloud-init-0.7.7~bzr1212/systemd/0000755000000000000000000000000012704246716014645 5ustar 00000000000000cloud-init-0.7.7~bzr1212/sysvinit/0000755000000000000000000000000012704246716015045 5ustar 00000000000000cloud-init-0.7.7~bzr1212/templates/0000755000000000000000000000000012704246716015153 5ustar 00000000000000cloud-init-0.7.7~bzr1212/test-requirements.txt0000644000000000000000000000010712704246716017414 0ustar 00000000000000httpretty>=0.7.1 mock nose pep8==1.5.7 pyflakes contextlib2 setuptools cloud-init-0.7.7~bzr1212/tests/0000755000000000000000000000000012704246716014317 5ustar 00000000000000cloud-init-0.7.7~bzr1212/tools/0000755000000000000000000000000012704246716014315 5ustar 00000000000000cloud-init-0.7.7~bzr1212/tox.ini0000644000000000000000000000121312704246716014465 0ustar 00000000000000[tox] envlist = py27,py3,pyflakes recreate = True [testenv] commands = python -m nose {posargs:tests} deps = -r{toxinidir}/test-requirements.txt -r{toxinidir}/requirements.txt [testenv:py3] basepython = python3 [testenv:pyflakes] basepython = python3 commands = {envpython} -m pyflakes {posargs:cloudinit/ tests/ tools/} {envpython} -m pep8 {posargs:cloudinit/ tests/ tools/} # https://github.com/gabrielfalcao/HTTPretty/issues/223 setenv = LC_ALL = en_US.utf-8 [testenv:py26] commands = nosetests {posargs:tests} deps = contextlib2 httpretty>=0.7.1 mock nose pep8==1.5.7 pyflakes setenv = LC_ALL = C cloud-init-0.7.7~bzr1212/udev/0000755000000000000000000000000012704246716014120 5ustar 00000000000000cloud-init-0.7.7~bzr1212/upstart/0000755000000000000000000000000012704246716014657 5ustar 00000000000000cloud-init-0.7.7~bzr1212/bin/cloud-init0000755000000000000000000005724112704246716015727 0ustar 00000000000000#!/usr/bin/python # vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc.
# # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. import argparse import json import os import sys import time import tempfile import traceback # This is more just for running from the bin folder so that # cloud-init binary can find the cloudinit module possible_topdir = os.path.normpath(os.path.join(os.path.abspath( sys.argv[0]), os.pardir, os.pardir)) if os.path.exists(os.path.join(possible_topdir, "cloudinit", "__init__.py")): sys.path.insert(0, possible_topdir) from cloudinit import patcher patcher.patch() from cloudinit import log as logging from cloudinit import netinfo from cloudinit import signal_handler from cloudinit import sources from cloudinit import stages from cloudinit import templater from cloudinit import util from cloudinit import reporting from cloudinit.reporting import events from cloudinit import version from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE, CLOUD_CONFIG) # Pretty little cheetah formatted welcome message template WELCOME_MSG_TPL = ("Cloud-init v. ${version} running '${action}' at " "${timestamp}. Up ${uptime} seconds.") # Module section template MOD_SECTION_TPL = "cloud_%s_modules" # Things you can query on QUERY_DATA_TYPES = [ 'data', 'data_raw', 'instance_id', ] # Frequency shortname to full name # (so users don't have to remember the full name...) FREQ_SHORT_NAMES = { 'instance': PER_INSTANCE, 'always': PER_ALWAYS, 'once': PER_ONCE, } LOG = logging.getLogger() # Used for when a logger may not be active # and we still want to print exceptions... def print_exc(msg=''): if msg: sys.stderr.write("%s\n" % (msg)) sys.stderr.write('-' * 60) sys.stderr.write("\n") traceback.print_exc(file=sys.stderr) sys.stderr.write('-' * 60) sys.stderr.write("\n") def welcome(action, msg=None): if not msg: msg = welcome_format(action) util.multi_log("%s\n" % (msg), console=False, stderr=True, log=LOG) return msg def welcome_format(action): tpl_params = { 'version': version.version_string(), 'uptime': util.uptime(), 'timestamp': util.time_rfc2822(), 'action': action, } return templater.render_string(WELCOME_MSG_TPL, tpl_params) def extract_fns(args): # Files are already opened so let's just pass that along # since it would have broken if it couldn't have # read that file already... fn_cfgs = [] if args.files: for fh in args.files: # The realpath is more useful in logging # so let's resolve to that...
fn_cfgs.append(os.path.realpath(fh.name)) return fn_cfgs def run_module_section(mods, action_name, section): full_section_name = MOD_SECTION_TPL % (section) (which_ran, failures) = mods.run_section(full_section_name) total_attempted = len(which_ran) + len(failures) if total_attempted == 0: msg = ("No '%s' modules to run" " under section '%s'") % (action_name, full_section_name) sys.stderr.write("%s\n" % (msg)) LOG.debug(msg) return [] else: LOG.debug("Ran %s modules with %s failures", len(which_ran), len(failures)) return failures def apply_reporting_cfg(cfg): if cfg.get('reporting'): reporting.update_configuration(cfg.get('reporting')) def main_init(name, args): deps = [sources.DEP_FILESYSTEM, sources.DEP_NETWORK] if args.local: deps = [sources.DEP_FILESYSTEM] if not args.local: # See doc/kernel-cmdline.txt # # This is used in maas datasource, in "ephemeral" (read-only root) # environment where the instance netboots to iscsi ro root. # and the entity that controls the pxe config has to configure # the maas datasource. # # Could be used elsewhere, only works on network based (not local). root_name = "%s.d" % (CLOUD_CONFIG) target_fn = os.path.join(root_name, "91_kernel_cmdline_url.cfg") util.read_write_cmdline_url(target_fn) # Cloud-init 'init' stage is broken up into the following sub-stages # 1. Ensure that the init object fetches its config without errors # 2. Setup logging/output redirections with resultant config (if any) # 3. Initialize the cloud-init filesystem # 4. Check if we can stop early by looking for various files # 5. Fetch the datasource # 6. Connect to the current instance location + update the cache # 7. Consume the userdata (handlers get activated here) # 8. Construct the modules object # 9. Adjust any subsequent logging/output redirections using the modules # objects config as it may be different from init object # 10. Run the modules for the 'init' stage # 11. Done! if not args.local: w_msg = welcome_format(name) else: w_msg = welcome_format("%s-local" % (name)) init = stages.Init(ds_deps=deps, reporter=args.reporter) # Stage 1 init.read_cfg(extract_fns(args)) # Stage 2 outfmt = None errfmt = None try: LOG.debug("Closing stdin") util.close_stdin() (outfmt, errfmt) = util.fixup_output(init.cfg, name) except: util.logexc(LOG, "Failed to setup output redirection!") print_exc("Failed to setup output redirection!") if args.debug: # Reset so that all the debug handlers are closed out LOG.debug(("Logging being reset, this logger may no" " longer be active shortly")) logging.resetLogging() logging.setupLogging(init.cfg) apply_reporting_cfg(init.cfg) # Any log usage prior to setupLogging above did not have local user log # config applied. We send the welcome message now, as stderr/out have # been redirected and log now configured. 
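# Illustrative only: with the WELCOME_MSG_TPL template at the top of this
# file, the rendered welcome line looks roughly like the following
# (version/timestamp/uptime values here are invented):
#   Cloud-init v. 0.7.7 running 'init' at Fri, 01 Apr 2016 12:00:00 +0000. Up 5.0 seconds.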
welcome(name, msg=w_msg) # Stage 3 try: init.initialize() except Exception: util.logexc(LOG, "Failed to initialize, likely bad things to come!") # Stage 4 path_helper = init.paths if not args.local: existing = "trust" sys.stderr.write("%s\n" % (netinfo.debug_info())) LOG.debug(("Checking to see if files that we need already" " exist from a previous run that would allow us" " to stop early.")) stop_files = [ os.path.join(path_helper.get_cpath("data"), "no-net"), path_helper.get_ipath_cur("obj_pkl"), ] existing_files = [] for fn in stop_files: try: c = util.load_file(fn) if len(c): existing_files.append((fn, len(c))) except Exception: pass if existing_files: LOG.debug("Exiting early due to the existence of %s files", existing_files) return (None, []) else: LOG.debug("Execution continuing, no previous run detected that" " would allow us to stop early.") else: existing = "check" if util.get_cfg_option_bool(init.cfg, 'manual_cache_clean', False): existing = "trust" init.purge_cache() # Delete the non-net file as well util.del_file(os.path.join(path_helper.get_cpath("data"), "no-net")) # Stage 5 try: init.fetch(existing=existing) except sources.DataSourceNotFoundException: # In the case of 'cloud-init init' without '--local' it is a bit # more likely that the user would consider it a failure if nothing was # found. When using upstart it will also mention job failure # in console log if exit code is != 0. if args.local: LOG.debug("No local datasource found") else: util.logexc(LOG, ("No instance datasource found!" " Likely bad things to come!")) if not args.force: init.apply_network_config() if args.local: return (None, []) else: return (None, ["No instance datasource found."]) if args.local: if not init.ds_restored: # if local mode and the datasource was not restored from cache # (i.e. this is first boot) then apply networking. init.apply_network_config() else: LOG.debug("skipping networking config from restored datasource.") # Stage 6 iid = init.instancify() LOG.debug("%s will now be targeting instance id: %s", name, iid) init.update() # Stage 7 try: # Attempt to consume the data per instance. # This may run user-data handlers and/or perform # url downloads and such as needed. (ran, _results) = init.cloudify().run('consume_data', init.consume_data, args=[PER_INSTANCE], freq=PER_INSTANCE) if not ran: # Just consume anything that is set to run per-always # if nothing ran in the per-instance code # # See: https://bugs.launchpad.net/bugs/819507 for a little # reason behind this... init.consume_data(PER_ALWAYS) except Exception: util.logexc(LOG, "Consuming user data failed!") return (init.datasource, ["Consuming user data failed!"]) apply_reporting_cfg(init.cfg) # Stage 8 - re-read and apply relevant cloud-config to include user-data mods = stages.Modules(init, extract_fns(args), reporter=args.reporter) # Stage 9 try: outfmt_orig = outfmt errfmt_orig = errfmt (outfmt, errfmt) = util.get_output_cfg(mods.cfg, name) if outfmt_orig != outfmt or errfmt_orig != errfmt: LOG.warn("Stdout, stderr changing to (%s, %s)", outfmt, errfmt) (outfmt, errfmt) = util.fixup_output(mods.cfg, name) except: util.logexc(LOG, "Failed to re-adjust output redirection!") logging.setupLogging(mods.cfg) # Stage 10 return (init.datasource, run_module_section(mods, name, name)) def main_modules(action_name, args): name = args.mode # Cloud-init 'modules' stages are broken up into the following sub-stages # 1. Ensure that the init object fetches its config without errors # 2.
Get the datasource from the init object, if it does # not exist then that means the main_init stage never # worked, and thus this stage can not run. # 3. Construct the modules object # 4. Adjust any subsequent logging/output redirections using # the modules objects configuration # 5. Run the modules for the given stage name # 6. Done! w_msg = welcome_format("%s:%s" % (action_name, name)) init = stages.Init(ds_deps=[], reporter=args.reporter) # Stage 1 init.read_cfg(extract_fns(args)) # Stage 2 try: init.fetch(existing="trust") except sources.DataSourceNotFoundException: # There was no datasource found, there's nothing to do msg = ('Can not apply stage %s, no datasource found! Likely bad ' 'things to come!' % name) util.logexc(LOG, msg) print_exc(msg) if not args.force: return [(msg)] # Stage 3 mods = stages.Modules(init, extract_fns(args), reporter=args.reporter) # Stage 4 try: LOG.debug("Closing stdin") util.close_stdin() util.fixup_output(mods.cfg, name) except: util.logexc(LOG, "Failed to setup output redirection!") if args.debug: # Reset so that all the debug handlers are closed out LOG.debug(("Logging being reset, this logger may no" " longer be active shortly")) logging.resetLogging() logging.setupLogging(mods.cfg) apply_reporting_cfg(init.cfg) # now that logging is setup and stdout redirected, send welcome welcome(name, msg=w_msg) # Stage 5 return run_module_section(mods, name, name) def main_query(name, _args): raise NotImplementedError(("Action '%s' is not" " currently implemented") % (name)) def main_single(name, args): # Cloud-init single stage is broken up into the following sub-stages # 1. Ensure that the init object fetches its config without errors # 2. Attempt to fetch the datasource (warn if it doesn't work) # 3. Construct the modules object # 4. Adjust any subsequent logging/output redirections using # the modules objects configuration # 5. Run the single module # 6. Done!
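# A sketch of how this entry point is typically reached (the module name
# below is illustrative; the accepted options are defined on the 'single'
# subparser in main() further down):
#
#   cloud-init single --name cc_ssh --frequency always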
mod_name = args.name w_msg = welcome_format(name) init = stages.Init(ds_deps=[], reporter=args.reporter) # Stage 1 init.read_cfg(extract_fns(args)) # Stage 2 try: init.fetch(existing="trust") except sources.DataSourceNotFoundException: # There was no datasource found, # that might be bad (or ok) depending on # the module being run (so continue on) util.logexc(LOG, ("Failed to fetch your datasource," " likely bad things to come!")) print_exc(("Failed to fetch your datasource," " likely bad things to come!")) if not args.force: return 1 # Stage 3 mods = stages.Modules(init, extract_fns(args), reporter=args.reporter) mod_args = args.module_args if mod_args: LOG.debug("Using passed in arguments %s", mod_args) mod_freq = args.frequency if mod_freq: LOG.debug("Using passed in frequency %s", mod_freq) mod_freq = FREQ_SHORT_NAMES.get(mod_freq) # Stage 4 try: LOG.debug("Closing stdin") util.close_stdin() util.fixup_output(mods.cfg, None) except: util.logexc(LOG, "Failed to setup output redirection!") if args.debug: # Reset so that all the debug handlers are closed out LOG.debug(("Logging being reset, this logger may no" " longer be active shortly")) logging.resetLogging() logging.setupLogging(mods.cfg) apply_reporting_cfg(init.cfg) # now that logging is setup and stdout redirected, send welcome welcome(name, msg=w_msg) # Stage 5 (which_ran, failures) = mods.run_single(mod_name, mod_args, mod_freq) if failures: LOG.warn("Ran %s but it failed!", mod_name) return 1 elif not which_ran: LOG.warn("Did not run %s, does it exist?", mod_name) return 1 else: # Guess it worked return 0 def atomic_write_file(path, content, mode='w'): tf = None try: tf = tempfile.NamedTemporaryFile(dir=os.path.dirname(path), delete=False, mode=mode) tf.write(content) tf.close() os.rename(tf.name, path) except Exception as e: if tf is not None: os.unlink(tf.name) raise e def atomic_write_json(path, data): return atomic_write_file(path, json.dumps(data, indent=1) + "\n") def status_wrapper(name, args, data_d=None, link_d=None): if data_d is None: data_d = os.path.normpath("/var/lib/cloud/data") if link_d is None: link_d = os.path.normpath("/run/cloud-init") status_path = os.path.join(data_d, "status.json") status_link = os.path.join(link_d, "status.json") result_path = os.path.join(data_d, "result.json") result_link = os.path.join(link_d, "result.json") util.ensure_dirs((data_d, link_d,)) (_name, functor) = args.action if name == "init": if args.local: mode = "init-local" else: mode = "init" elif name == "modules": mode = "modules-%s" % args.mode else: raise ValueError("unknown name: %s" % name) modes = ('init', 'init-local', 'modules-config', 'modules-final') status = None if mode == 'init-local': for f in (status_link, result_link, status_path, result_path): util.del_file(f) else: try: status = json.loads(util.load_file(status_path)) except: pass if status is None: nullstatus = { 'errors': [], 'start': None, 'finished': None, } status = {'v1': {}} for m in modes: status['v1'][m] = nullstatus.copy() status['v1']['datasource'] = None v1 = status['v1'] v1['stage'] = mode v1[mode]['start'] = time.time() atomic_write_json(status_path, status) util.sym_link(os.path.relpath(status_path, link_d), status_link, force=True) try: ret = functor(name, args) if mode in ('init', 'init-local'): (datasource, errors) = ret if datasource is not None: v1['datasource'] = str(datasource) else: errors = ret v1[mode]['errors'] = [str(e) for e in errors] except Exception as e: util.logexc(LOG, "failed run of stage %s", mode) print_exc("failed run of stage
%s" % mode) v1[mode]['errors'] = [str(e)] v1[mode]['finished'] = time.time() v1['stage'] = None atomic_write_json(status_path, status) if mode == "modules-final": # write the 'finished' file errors = [] for m in modes: if v1[m]['errors']: errors.extend(v1[m].get('errors', [])) atomic_write_json(result_path, {'v1': {'datasource': v1['datasource'], 'errors': errors}}) util.sym_link(os.path.relpath(result_path, link_d), result_link, force=True) return len(v1[mode]['errors']) def main(): parser = argparse.ArgumentParser() # Top level args parser.add_argument('--version', '-v', action='version', version='%(prog)s ' + (version.version_string())) parser.add_argument('--file', '-f', action='append', dest='files', help=('additional yaml configuration' ' files to use'), type=argparse.FileType('rb')) parser.add_argument('--debug', '-d', action='store_true', help=('show additional pre-action' ' logging (default: %(default)s)'), default=False) parser.add_argument('--force', action='store_true', help=('force running even if no datasource is' ' found (use at your own risk)'), dest='force', default=False) parser.set_defaults(reporter=None) subparsers = parser.add_subparsers() # Each action and its sub-options (if any) parser_init = subparsers.add_parser('init', help=('initializes cloud-init and' ' performs initial modules')) parser_init.add_argument("--local", '-l', action='store_true', help="start in local mode (default: %(default)s)", default=False) # This is used so that we can know which action is selected + # the functor to use to run this subcommand parser_init.set_defaults(action=('init', main_init)) # These settings are used for the 'config' and 'final' stages parser_mod = subparsers.add_parser('modules', help=('activates modules using ' 'a given configuration key')) parser_mod.add_argument("--mode", '-m', action='store', help=("module configuration name " "to use (default: %(default)s)"), default='config', choices=('init', 'config', 'final')) parser_mod.set_defaults(action=('modules', main_modules)) # These settings are used when you want to query information # stored in the cloud-init data objects/directories/files parser_query = subparsers.add_parser('query', help=('query information stored ' 'in cloud-init')) parser_query.add_argument("--name", '-n', action="store", help="item name to query on", required=True, choices=QUERY_DATA_TYPES) parser_query.set_defaults(action=('query', main_query)) # This subcommand allows you to run a single module parser_single = subparsers.add_parser('single', help=('run a single module ')) parser_single.set_defaults(action=('single', main_single)) parser_single.add_argument("--name", '-n', action="store", help="module name to run", required=True) parser_single.add_argument("--frequency", action="store", help=("frequency of the module"), required=False, choices=list(FREQ_SHORT_NAMES.keys())) parser_single.add_argument("--report", action="store_true", help="enable reporting", required=False) parser_single.add_argument("module_args", nargs="*", metavar='argument', help=('any additional arguments to' ' pass to this module')) parser_single.set_defaults(action=('single', main_single)) args = parser.parse_args() # Setup basic logging to start (until reinitialized) # iff in debug mode... 
if args.debug: logging.setupBasicLogging() # Setup signal handlers before running signal_handler.attach_handlers() if not hasattr(args, 'action'): parser.error('too few arguments') (name, functor) = args.action if name in ("modules", "init"): functor = status_wrapper report_on = True if name == "init": if args.local: rname, rdesc = ("init-local", "searching for local datasources") else: rname, rdesc = ("init-network", "searching for network datasources") elif name == "modules": rname, rdesc = ("modules-%s" % args.mode, "running modules for %s" % args.mode) elif name == "single": rname, rdesc = ("single/%s" % args.name, "running single module %s" % args.name) report_on = args.report args.reporter = events.ReportEventStack( rname, rdesc, reporting_enabled=report_on) with args.reporter: return util.log_time( logfunc=LOG.debug, msg="cloud-init mode '%s'" % name, get_uptime=True, func=functor, args=(name, args)) if __name__ == '__main__': sys.exit(main()) cloud-init-0.7.7~bzr1212/cloudinit/__init__.py0000644000000000000000000000163512704246716017265 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. cloud-init-0.7.7~bzr1212/cloudinit/cloud.py0000644000000000000000000000733312704246716016635 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. import copy import os from cloudinit import log as logging from cloudinit.reporting import events LOG = logging.getLogger(__name__) # This class is the high level wrapper that provides # access to cloud-init objects without exposing the stage objects # to handler and or module manipulation. It allows for cloud # init to restrict what those types of user facing code may see # and or adjust (which helps avoid code messing with each other) # # It also provides util functions that avoid having to know # how to get a certain member from its submembers as well # as providing a backwards compatible object that can be maintained # while the stages/other objects can be worked on independently...
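#
# A hypothetical config module handler, to illustrate how this wrapper is
# consumed (the handle() signature is the one cloud-init passes to config
# modules; the module body here is invented):
#
#   def handle(name, cfg, cloud, log, args):
#       log.info("instance %s has hostname %s",
#                cloud.get_instance_id(), cloud.get_hostname(fqdn=True))
#       for key in cloud.get_public_ssh_keys():
#           log.debug("found ssh key: %s", key)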
class Cloud(object): def __init__(self, datasource, paths, cfg, distro, runners, reporter=None): self.datasource = datasource self.paths = paths self.distro = distro self._cfg = cfg self._runners = runners if reporter is None: reporter = events.ReportEventStack( name="unnamed-cloud-reporter", description="unnamed-cloud-reporter", reporting_enabled=False) self.reporter = reporter # If a 'user' manipulates logging or logging services # it is typically useful to cause the logging to be # set up again. def cycle_logging(self): logging.resetLogging() logging.setupLogging(self.cfg) @property def cfg(self): # Ensure that it is not indirectly modified return copy.deepcopy(self._cfg) def run(self, name, functor, args, freq=None, clear_on_fail=False): return self._runners.run(name, functor, args, freq, clear_on_fail) def get_template_filename(self, name): fn = self.paths.template_tpl % (name) if not os.path.isfile(fn): LOG.warn("No template found at %s for template named %s", fn, name) return None return fn # The rest of these are just useful proxies def get_userdata(self, apply_filter=True): return self.datasource.get_userdata(apply_filter) def get_instance_id(self): return self.datasource.get_instance_id() @property def launch_index(self): return self.datasource.launch_index def get_public_ssh_keys(self): return self.datasource.get_public_ssh_keys() def get_locale(self): return self.datasource.get_locale() def get_hostname(self, fqdn=False): return self.datasource.get_hostname(fqdn=fqdn) def device_name_to_device(self, name): return self.datasource.device_name_to_device(name) def get_ipath_cur(self, name=None): return self.paths.get_ipath_cur(name) def get_cpath(self, name=None): return self.paths.get_cpath(name) def get_ipath(self, name=None): return self.paths.get_ipath(name) cloud-init-0.7.7~bzr1212/cloudinit/config/0000755000000000000000000000000012704246716016414 5ustar 00000000000000cloud-init-0.7.7~bzr1212/cloudinit/cs_utils.py0000644000000000000000000000700112704246716017344 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2014 CloudSigma # # Author: Kiril Vladimiroff # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. """ cepko implements easy-to-use communication with CloudSigma's VMs through a virtual serial port without bothering with formatting the messages properly or parsing the output with the specific and sometimes confusing shell tools for that purpose. Having the server definition accessible by the VM can be useful in various ways. For example it is possible to easily determine from within the VM which network interfaces are connected to public and which to private network. Another use is to pass some data to initial VM setup scripts, like setting the hostname to the VM name or passing ssh public keys through server meta. For more information take a look at the Server Context section of CloudSigma API Docs: http://cloudsigma-docs.readthedocs.org/en/latest/server_context.html """ import json import platform import serial # these high timeouts are necessary as read may read a lot of data.
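#
# Sketch of typical use, as a datasource might drive it (the server
# context values are whatever the CloudSigma host exposes; 'name' is used
# in the docstring's hostname example above):
#
#   cepko = Cepko()
#   server_context = cepko.all().result   # full server definition (a dict)
#   server_name = server_context['name']  # e.g. used to set the hostname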
READ_TIMEOUT = 60
WRITE_TIMEOUT = 10

SERIAL_PORT = '/dev/ttyS1'
if platform.system() == 'Windows':
    SERIAL_PORT = 'COM2'


class Cepko(object):
    """
    One instance of this object can be used for one or more
    queries to the serial port.
    """
    request_pattern = "<\n{}\n>"

    def get(self, key="", request_pattern=None):
        if request_pattern is None:
            request_pattern = self.request_pattern
        return CepkoResult(request_pattern.format(key))

    def all(self):
        return self.get()

    def meta(self, key=""):
        request_pattern = self.request_pattern.format("/meta/{}")
        return self.get(key, request_pattern)

    def global_context(self, key=""):
        request_pattern = self.request_pattern.format("/global_context/{}")
        return self.get(key, request_pattern)


class CepkoResult(object):
    """
    CepkoResult executes the request to the virtual serial port as soon
    as the instance is initialized and stores the result in both raw and
    marshalled format.
    """
    def __init__(self, request):
        self.request = request
        self.raw_result = self._execute()
        self.result = self._marshal(self.raw_result)

    def _execute(self):
        connection = serial.Serial(port=SERIAL_PORT,
                                   timeout=READ_TIMEOUT,
                                   writeTimeout=WRITE_TIMEOUT)
        connection.write(self.request.encode('ascii'))
        return connection.readline().strip(b'\x04\n').decode('ascii')

    def _marshal(self, raw_result):
        try:
            return json.loads(raw_result)
        except ValueError:
            return raw_result

    def __len__(self):
        return self.result.__len__()

    def __getitem__(self, key):
        return self.result.__getitem__(key)

    def __contains__(self, item):
        return self.result.__contains__(item)

    def __iter__(self):
        return self.result.__iter__()
cloud-init-0.7.7~bzr1212/cloudinit/distros/0000755000000000000000000000000012704246716016636 5ustar 00000000000000
cloud-init-0.7.7~bzr1212/cloudinit/ec2_utils.py0000644000000000000000000001624512704246716017420 0ustar 00000000000000# vi: ts=4 expandtab
#
#    Copyright (C) 2012 Yahoo! Inc.
#
#    Author: Joshua Harlow
#
#    This program is free software: you can redistribute it and/or modify
#    it under the terms of the GNU General Public License version 3, as
#    published by the Free Software Foundation.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License
#    along with this program.  If not, see <http://www.gnu.org/licenses/>.

import functools
import json

from cloudinit import log as logging
from cloudinit import url_helper
from cloudinit import util

LOG = logging.getLogger(__name__)

SKIP_USERDATA_CODES = frozenset([url_helper.NOT_FOUND])


class MetadataLeafDecoder(object):
    """Decodes a leaf blob into something meaningful."""

    def _maybe_json_object(self, text):
        if not text:
            return False
        text = text.strip()
        if text.startswith("{") and text.endswith("}"):
            return True
        return False

    def __call__(self, field, blob):
        if not blob:
            return blob
        try:
            blob = util.decode_binary(blob)
        except UnicodeDecodeError:
            return blob
        if self._maybe_json_object(blob):
            try:
                # Assume it's json, unless it fails parsing...
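                # (Illustrative: a blob like b'{"region": "us-east-1"}'
                # comes back as a dict here, while plain text such as
                # b'i-abc123' falls through to the string handling below;
                # both values are made-up examples.)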
return json.loads(blob) except (ValueError, TypeError) as e: LOG.warn("Field %s looked like a json object, but it was" " not: %s", field, e) if blob.find("\n") != -1: return blob.splitlines() return blob # See: http://bit.ly/TyoUQs # class MetadataMaterializer(object): def __init__(self, blob, base_url, caller, leaf_decoder=None): self._blob = blob self._md = None self._base_url = base_url self._caller = caller if leaf_decoder is None: self._leaf_decoder = MetadataLeafDecoder() else: self._leaf_decoder = leaf_decoder def _parse(self, blob): leaves = {} children = [] blob = util.decode_binary(blob) if not blob: return (leaves, children) def has_children(item): if item.endswith("/"): return True else: return False def get_name(item): if item.endswith("/"): return item.rstrip("/") return item for field in blob.splitlines(): field = field.strip() field_name = get_name(field) if not field or not field_name: continue if has_children(field): if field_name not in children: children.append(field_name) else: contents = field.split("=", 1) resource = field_name if len(contents) > 1: # What a PITA... (ident, sub_contents) = contents ident = util.safe_int(ident) if ident is not None: resource = "%s/openssh-key" % (ident) field_name = sub_contents leaves[field_name] = resource return (leaves, children) def materialize(self): if self._md is not None: return self._md self._md = self._materialize(self._blob, self._base_url) return self._md def _materialize(self, blob, base_url): (leaves, children) = self._parse(blob) child_contents = {} for c in children: child_url = url_helper.combine_url(base_url, c) if not child_url.endswith("/"): child_url += "/" child_blob = self._caller(child_url) child_contents[c] = self._materialize(child_blob, child_url) leaf_contents = {} for (field, resource) in leaves.items(): leaf_url = url_helper.combine_url(base_url, resource) leaf_blob = self._caller(leaf_url) leaf_contents[field] = self._leaf_decoder(field, leaf_blob) joined = {} joined.update(child_contents) for field in leaf_contents.keys(): if field in joined: LOG.warn("Duplicate key found in results from %s", base_url) else: joined[field] = leaf_contents[field] return joined def _skip_retry_on_codes(status_codes, _request_args, cause): """Returns if a request should retry based on a given set of codes that case retrying to be stopped/skipped. """ if cause.code in status_codes: return False return True def get_instance_userdata(api_version='latest', metadata_address='http://169.254.169.254', ssl_details=None, timeout=5, retries=5): ud_url = url_helper.combine_url(metadata_address, api_version) ud_url = url_helper.combine_url(ud_url, 'user-data') user_data = '' try: # It is ok for userdata to not exist (thats why we are stopping if # NOT_FOUND occurs) and just in that case returning an empty string. exception_cb = functools.partial(_skip_retry_on_codes, SKIP_USERDATA_CODES) response = util.read_file_or_url(ud_url, ssl_details=ssl_details, timeout=timeout, retries=retries, exception_cb=exception_cb) user_data = response.contents except url_helper.UrlError as e: if e.code not in SKIP_USERDATA_CODES: util.logexc(LOG, "Failed fetching userdata from url %s", ud_url) except Exception: util.logexc(LOG, "Failed fetching userdata from url %s", ud_url) return user_data def get_instance_metadata(api_version='latest', metadata_address='http://169.254.169.254', ssl_details=None, timeout=5, retries=5, leaf_decoder=None): md_url = url_helper.combine_url(metadata_address, api_version) # Note, 'meta-data' explicitly has trailing /. 
# this is required for CloudStack (LP: #1356855) md_url = url_helper.combine_url(md_url, 'meta-data/') caller = functools.partial(util.read_file_or_url, ssl_details=ssl_details, timeout=timeout, retries=retries) def mcaller(url): return caller(url).contents try: response = caller(md_url) materializer = MetadataMaterializer(response.contents, md_url, mcaller, leaf_decoder=leaf_decoder) md = materializer.materialize() if not isinstance(md, (dict)): md = {} return md except Exception: util.logexc(LOG, "Failed fetching metadata from url %s", md_url) return {} cloud-init-0.7.7~bzr1212/cloudinit/filters/0000755000000000000000000000000012704246716016617 5ustar 00000000000000cloud-init-0.7.7~bzr1212/cloudinit/handlers/0000755000000000000000000000000012704246716016747 5ustar 00000000000000cloud-init-0.7.7~bzr1212/cloudinit/helpers.py0000644000000000000000000003531412704246716017171 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012, 2013 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . from time import time import contextlib import os import six from six.moves.configparser import ( NoSectionError, NoOptionError, RawConfigParser) from cloudinit.settings import (PER_INSTANCE, PER_ALWAYS, PER_ONCE, CFG_ENV_NAME) from cloudinit import log as logging from cloudinit import type_utils from cloudinit import util LOG = logging.getLogger(__name__) class LockFailure(Exception): pass class DummyLock(object): pass class DummySemaphores(object): def __init__(self): pass @contextlib.contextmanager def lock(self, _name, _freq, _clear_on_fail=False): yield DummyLock() def has_run(self, _name, _freq): return False def clear(self, _name, _freq): return True def clear_all(self): pass class FileLock(object): def __init__(self, fn): self.fn = fn def __str__(self): return "<%s using file %r>" % (type_utils.obj_name(self), self.fn) def canon_sem_name(name): return name.replace("-", "_") class FileSemaphores(object): def __init__(self, sem_path): self.sem_path = sem_path @contextlib.contextmanager def lock(self, name, freq, clear_on_fail=False): name = canon_sem_name(name) try: yield self._acquire(name, freq) except: if clear_on_fail: self.clear(name, freq) raise def clear(self, name, freq): name = canon_sem_name(name) sem_file = self._get_path(name, freq) try: util.del_file(sem_file) except (IOError, OSError): util.logexc(LOG, "Failed deleting semaphore %s", sem_file) return False return True def clear_all(self): try: util.del_dir(self.sem_path) except (IOError, OSError): util.logexc(LOG, "Failed deleting semaphore directory %s", self.sem_path) def _acquire(self, name, freq): # Check again if its been already gotten if self.has_run(name, freq): return None # This is a race condition since nothing atomic is happening # here, but this should be ok due to the nature of when # and where cloud-init runs... (file writing is not a lock...) 
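        # (Added note: the "semaphore" is just a marker file, e.g.
        # <cloud_dir>/instances/<iid>/sem/config_ssh for a per-instance
        # frequency; its existence is what records that a unit already ran.
        # The path shown is an illustration of the layout, not a contract.)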
sem_file = self._get_path(name, freq) contents = "%s: %s\n" % (os.getpid(), time()) try: util.write_file(sem_file, contents) except (IOError, OSError): util.logexc(LOG, "Failed writing semaphore file %s", sem_file) return None return FileLock(sem_file) def has_run(self, name, freq): if not freq or freq == PER_ALWAYS: return False cname = canon_sem_name(name) sem_file = self._get_path(cname, freq) # This isn't really a good atomic check # but it suffices for where and when cloudinit runs if os.path.exists(sem_file): return True # this case could happen if the migrator module hadn't run yet # but the item had run before we did canon_sem_name. if cname != name and os.path.exists(self._get_path(name, freq)): LOG.warn("%s has run without canonicalized name [%s].\n" "likely the migrator has not yet run. " "It will run next boot.\n" "run manually with: cloud-init single --name=migrator" % (name, cname)) return True return False def _get_path(self, name, freq): sem_path = self.sem_path if not freq or freq == PER_INSTANCE: return os.path.join(sem_path, name) else: return os.path.join(sem_path, "%s.%s" % (name, freq)) class Runners(object): def __init__(self, paths): self.paths = paths self.sems = {} def _get_sem(self, freq): if freq == PER_ALWAYS or not freq: return None sem_path = None if freq == PER_INSTANCE: # This may not exist, # so thats why we still check for none # below if say the paths object # doesn't have a datasource that can # provide this instance path... sem_path = self.paths.get_ipath("sem") elif freq == PER_ONCE: sem_path = self.paths.get_cpath("sem") if not sem_path: return None if sem_path not in self.sems: self.sems[sem_path] = FileSemaphores(sem_path) return self.sems[sem_path] def run(self, name, functor, args, freq=None, clear_on_fail=False): sem = self._get_sem(freq) if not sem: sem = DummySemaphores() if not args: args = [] if sem.has_run(name, freq): LOG.debug("%s already ran (freq=%s)", name, freq) return (False, None) with sem.lock(name, freq, clear_on_fail) as lk: if not lk: raise LockFailure("Failed to acquire lock for %s" % name) else: LOG.debug("Running %s using lock (%s)", name, lk) if isinstance(args, (dict)): results = functor(**args) else: results = functor(*args) return (True, results) class ConfigMerger(object): def __init__(self, paths=None, datasource=None, additional_fns=None, base_cfg=None, include_vendor=True): self._paths = paths self._ds = datasource self._fns = additional_fns self._base_cfg = base_cfg self._include_vendor = include_vendor # Created on first use self._cfg = None def _get_datasource_configs(self): d_cfgs = [] if self._ds: try: ds_cfg = self._ds.get_config_obj() if ds_cfg and isinstance(ds_cfg, (dict)): d_cfgs.append(ds_cfg) except: util.logexc(LOG, "Failed loading of datasource config object " "from %s", self._ds) return d_cfgs def _get_env_configs(self): e_cfgs = [] if CFG_ENV_NAME in os.environ: e_fn = os.environ[CFG_ENV_NAME] try: e_cfgs.append(util.read_conf(e_fn)) except: util.logexc(LOG, 'Failed loading of env. config from %s', e_fn) return e_cfgs def _get_instance_configs(self): i_cfgs = [] # If cloud-config was written, pick it up as # a configuration file to use when running... 
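        # (Added note: in practice this picks up the instance's
        # cloud-config.txt, i.e. the cloud-config that was extracted from
        # user-data, plus vendor-cloud-config.txt when vendor data is
        # included; see the 'lookups' table in Paths below.)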
if not self._paths: return i_cfgs cc_paths = ['cloud_config'] if self._include_vendor: cc_paths.append('vendor_cloud_config') for cc_p in cc_paths: cc_fn = self._paths.get_ipath_cur(cc_p) if cc_fn and os.path.isfile(cc_fn): try: i_cfgs.append(util.read_conf(cc_fn)) except: util.logexc(LOG, 'Failed loading of cloud-config from %s', cc_fn) return i_cfgs def _read_cfg(self): # Input config files override # env config files which # override instance configs # which override datasource # configs which override # base configuration cfgs = [] if self._fns: for c_fn in self._fns: try: cfgs.append(util.read_conf(c_fn)) except: util.logexc(LOG, "Failed loading of configuration from %s", c_fn) cfgs.extend(self._get_env_configs()) cfgs.extend(self._get_instance_configs()) cfgs.extend(self._get_datasource_configs()) if self._base_cfg: cfgs.append(self._base_cfg) return util.mergemanydict(cfgs) @property def cfg(self): # None check to avoid empty case causing re-reading if self._cfg is None: self._cfg = self._read_cfg() return self._cfg class ContentHandlers(object): def __init__(self): self.registered = {} self.initialized = [] def __contains__(self, item): return self.is_registered(item) def __getitem__(self, key): return self._get_handler(key) def is_registered(self, content_type): return content_type in self.registered def register(self, mod, initialized=False, overwrite=True): types = set() for t in mod.list_types(): if overwrite: types.add(t) else: if not self.is_registered(t): types.add(t) for t in types: self.registered[t] = mod if initialized and mod not in self.initialized: self.initialized.append(mod) return types def _get_handler(self, content_type): return self.registered[content_type] def items(self): return list(self.registered.items()) class Paths(object): def __init__(self, path_cfgs, ds=None): self.cfgs = path_cfgs # Populate all the initial paths self.cloud_dir = path_cfgs.get('cloud_dir', '/var/lib/cloud') self.instance_link = os.path.join(self.cloud_dir, 'instance') self.boot_finished = os.path.join(self.instance_link, "boot-finished") self.upstart_conf_d = path_cfgs.get('upstart_dir') self.seed_dir = os.path.join(self.cloud_dir, 'seed') # This one isn't joined, since it should just be read-only template_dir = path_cfgs.get('templates_dir', '/etc/cloud/templates/') self.template_tpl = os.path.join(template_dir, '%s.tmpl') self.lookups = { "handlers": "handlers", "scripts": "scripts", "vendor_scripts": "scripts/vendor", "sem": "sem", "boothooks": "boothooks", "userdata_raw": "user-data.txt", "userdata": "user-data.txt.i", "obj_pkl": "obj.pkl", "cloud_config": "cloud-config.txt", "vendor_cloud_config": "vendor-cloud-config.txt", "data": "data", "vendordata_raw": "vendor-data.txt", "vendordata": "vendor-data.txt.i", } # Set when a datasource becomes active self.datasource = ds # get_ipath_cur: get the current instance path for an item def get_ipath_cur(self, name=None): ipath = self.instance_link add_on = self.lookups.get(name) if add_on: ipath = os.path.join(ipath, add_on) return ipath # get_cpath : get the "clouddir" (/var/lib/cloud/) # for a name in dirmap def get_cpath(self, name=None): cpath = self.cloud_dir add_on = self.lookups.get(name) if add_on: cpath = os.path.join(cpath, add_on) return cpath # _get_ipath : get the instance path for a name in pathmap # (/var/lib/cloud/instances//) def _get_ipath(self, name=None): if not self.datasource: return None iid = self.datasource.get_instance_id() if iid is None: return None ipath = os.path.join(self.cloud_dir, 'instances', str(iid)) 
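        # (Illustrative: for an instance id of 'i-00001234' this yields
        # /var/lib/cloud/instances/i-00001234 under the default cloud_dir,
        # and the lookup table above then maps logical names such as
        # 'cloud_config' onto files inside that directory.)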
add_on = self.lookups.get(name) if add_on: ipath = os.path.join(ipath, add_on) return ipath # get_ipath : get the instance path for a name in pathmap # (/var/lib/cloud/instances//) # returns None + warns if no active datasource.... def get_ipath(self, name=None): ipath = self._get_ipath(name) if not ipath: LOG.warn(("No per instance data available, " "is there an datasource/iid set?")) return None else: return ipath # This config parser will not throw when sections don't exist # and you are setting values on those sections which is useful # when writing to new options that may not have corresponding # sections. Also it can default other values when doing gets # so that if those sections/options do not exist you will # get a default instead of an error. Another useful case where # you can avoid catching exceptions that you typically don't # care about... class DefaultingConfigParser(RawConfigParser): DEF_INT = 0 DEF_FLOAT = 0.0 DEF_BOOLEAN = False DEF_BASE = None def get(self, section, option): value = self.DEF_BASE try: value = RawConfigParser.get(self, section, option) except NoSectionError: pass except NoOptionError: pass return value def set(self, section, option, value=None): if not self.has_section(section) and section.lower() != 'default': self.add_section(section) RawConfigParser.set(self, section, option, value) def remove_option(self, section, option): if self.has_option(section, option): RawConfigParser.remove_option(self, section, option) def getboolean(self, section, option): if not self.has_option(section, option): return self.DEF_BOOLEAN return RawConfigParser.getboolean(self, section, option) def getfloat(self, section, option): if not self.has_option(section, option): return self.DEF_FLOAT return RawConfigParser.getfloat(self, section, option) def getint(self, section, option): if not self.has_option(section, option): return self.DEF_INT return RawConfigParser.getint(self, section, option) def stringify(self, header=None): contents = '' with six.StringIO() as outputstream: self.write(outputstream) outputstream.flush() contents = outputstream.getvalue() if header: contents = "\n".join([header, contents]) return contents cloud-init-0.7.7~bzr1212/cloudinit/importer.py0000644000000000000000000000364412704246716017371 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import sys def import_module(module_name): __import__(module_name) return sys.modules[module_name] def find_module(base_name, search_paths, required_attrs=None): if not required_attrs: required_attrs = [] # NOTE(harlowja): translate the search paths to include the base name. 
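    # (Illustrative: find_module('cc_runcmd', ['', 'cloudinit.config'],
    # ['handle']) probes 'cc_runcmd' and 'cloudinit.config.cc_runcmd' and
    # reports whichever imports cleanly and defines 'handle'; the module
    # name here is just an example.)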
lookup_paths = [] for path in search_paths: real_path = [] if path: real_path.extend(path.split(".")) real_path.append(base_name) full_path = '.'.join(real_path) lookup_paths.append(full_path) found_paths = [] for full_path in lookup_paths: mod = None try: mod = import_module(full_path) except ImportError: pass if not mod: continue found_attrs = 0 for attr in required_attrs: if hasattr(mod, attr): found_attrs += 1 if found_attrs == len(required_attrs): found_paths.append(full_path) return (found_paths, lookup_paths) cloud-init-0.7.7~bzr1212/cloudinit/log.py0000644000000000000000000001102012704246716016274 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import logging import logging.config import logging.handlers import collections import os import sys import six from six import StringIO # Logging levels for easy access CRITICAL = logging.CRITICAL FATAL = logging.FATAL ERROR = logging.ERROR WARNING = logging.WARNING WARN = logging.WARN INFO = logging.INFO DEBUG = logging.DEBUG NOTSET = logging.NOTSET # Default basic format DEF_CON_FORMAT = '%(asctime)s - %(filename)s[%(levelname)s]: %(message)s' def setupBasicLogging(level=DEBUG): root = logging.getLogger() console = logging.StreamHandler(sys.stderr) console.setFormatter(logging.Formatter(DEF_CON_FORMAT)) console.setLevel(level) root.addHandler(console) root.setLevel(level) def flushLoggers(root): if not root: return for h in root.handlers: if isinstance(h, (logging.StreamHandler)): try: h.flush() except IOError: pass flushLoggers(root.parent) def setupLogging(cfg=None): # See if the config provides any logging conf... if not cfg: cfg = {} log_cfgs = [] log_cfg = cfg.get('logcfg') if log_cfg and isinstance(log_cfg, six.string_types): # If there is a 'logcfg' entry in the config, # respect it, it is the old keyname log_cfgs.append(str(log_cfg)) elif "log_cfgs" in cfg: for a_cfg in cfg['log_cfgs']: if isinstance(a_cfg, six.string_types): log_cfgs.append(a_cfg) elif isinstance(a_cfg, (collections.Iterable)): cfg_str = [str(c) for c in a_cfg] log_cfgs.append('\n'.join(cfg_str)) else: log_cfgs.append(str(a_cfg)) # See if any of them actually load... am_tried = 0 for log_cfg in log_cfgs: try: am_tried += 1 # Assume its just a string if not a filename if log_cfg.startswith("/") and os.path.isfile(log_cfg): # Leave it as a file and do not make it look like # something that is a file (but is really a buffer that # is acting as a file) pass else: log_cfg = StringIO(log_cfg) # Attempt to load its config logging.config.fileConfig(log_cfg) # The first one to work wins! return except Exception: # We do not write any logs of this here, because the default # configuration includes an attempt at using /dev/log, followed # up by writing to a file. /dev/log will not exist in very early # boot, so an exception on that is expected. 
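            # (For reference, an illustrative cloud-config shape that this
            # function accepts; each 'log_cfgs' entry is a fileConfig
            # document, fed in via StringIO when it is not a file path:
            #
            #   log_cfgs:
            #     - |
            #       [loggers]
            #       keys=root
            #       ...
            # )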
pass # If it didn't work, at least setup a basic logger (if desired) basic_enabled = cfg.get('log_basic', True) sys.stderr.write(("WARN: no logging configured!" " (tried %s configs)\n") % (am_tried)) if basic_enabled: sys.stderr.write("Setting up basic logging...\n") setupBasicLogging() def getLogger(name='cloudinit'): return logging.getLogger(name) # Fixes this annoyance... # No handlers could be found for logger XXX annoying output... try: from logging import NullHandler except ImportError: class NullHandler(logging.Handler): def emit(self, record): pass def _resetLogger(log): if not log: return handlers = list(log.handlers) for h in handlers: h.flush() h.close() log.removeHandler(h) log.setLevel(NOTSET) log.addHandler(NullHandler()) def resetLogging(): _resetLogger(logging.getLogger()) _resetLogger(getLogger()) resetLogging() cloud-init-0.7.7~bzr1212/cloudinit/mergers/0000755000000000000000000000000012704246716016613 5ustar 00000000000000cloud-init-0.7.7~bzr1212/cloudinit/net/0000755000000000000000000000000012704246716015735 5ustar 00000000000000cloud-init-0.7.7~bzr1212/cloudinit/netinfo.py0000644000000000000000000002111712704246716017165 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import cloudinit.util as util from cloudinit.log import logging import re from prettytable import PrettyTable LOG = logging.getLogger() def netdev_info(empty=""): fields = ("hwaddr", "addr", "bcast", "mask") (ifcfg_out, _err) = util.subp(["ifconfig", "-a"]) devs = {} for line in str(ifcfg_out).splitlines(): if len(line) == 0: continue if line[0] not in ("\t", " "): curdev = line.split()[0] devs[curdev] = {"up": False} for field in fields: devs[curdev][field] = "" toks = line.lower().strip().split() if toks[0] == "up": devs[curdev]['up'] = True # If the output of ifconfig doesn't contain the required info in the # obvious place, use a regex filter to be sure. 
elif len(toks) > 1: if re.search(r"flags=\d+= 0: return "%s[%s]" % (r['gateway'], r['iface']) return None def netdev_pformat(): lines = [] try: netdev = netdev_info(empty=".") except Exception: lines.append(util.center("Net device info failed", '!', 80)) else: fields = ['Device', 'Up', 'Address', 'Mask', 'Scope', 'Hw-Address'] tbl = PrettyTable(fields) for (dev, d) in netdev.items(): tbl.add_row([dev, d["up"], d["addr"], d["mask"], ".", d["hwaddr"]]) if d.get('addr6'): tbl.add_row([dev, d["up"], d["addr6"], ".", d.get("scope6"), d["hwaddr"]]) netdev_s = tbl.get_string() max_len = len(max(netdev_s.splitlines(), key=len)) header = util.center("Net device info", "+", max_len) lines.extend([header, netdev_s]) return "\n".join(lines) def route_pformat(): lines = [] try: routes = route_info() except Exception as e: lines.append(util.center('Route info failed', '!', 80)) util.logexc(LOG, "Route info failed: %s" % e) else: if routes.get('ipv4'): fields_v4 = ['Route', 'Destination', 'Gateway', 'Genmask', 'Interface', 'Flags'] tbl_v4 = PrettyTable(fields_v4) for (n, r) in enumerate(routes.get('ipv4')): route_id = str(n) tbl_v4.add_row([route_id, r['destination'], r['gateway'], r['genmask'], r['iface'], r['flags']]) route_s = tbl_v4.get_string() max_len = len(max(route_s.splitlines(), key=len)) header = util.center("Route IPv4 info", "+", max_len) lines.extend([header, route_s]) if routes.get('ipv6'): fields_v6 = ['Route', 'Proto', 'Recv-Q', 'Send-Q', 'Local Address', 'Foreign Address', 'State'] tbl_v6 = PrettyTable(fields_v6) for (n, r) in enumerate(routes.get('ipv6')): route_id = str(n) tbl_v6.add_row([route_id, r['proto'], r['recv-q'], r['send-q'], r['local address'], r['foreign address'], r['state']]) route_s = tbl_v6.get_string() max_len = len(max(route_s.splitlines(), key=len)) header = util.center("Route IPv6 info", "+", max_len) lines.extend([header, route_s]) return "\n".join(lines) def debug_info(prefix='ci-info: '): lines = [] netdev_lines = netdev_pformat().splitlines() if prefix: for line in netdev_lines: lines.append("%s%s" % (prefix, line)) else: lines.extend(netdev_lines) route_lines = route_pformat().splitlines() if prefix: for line in route_lines: lines.append("%s%s" % (prefix, line)) else: lines.extend(route_lines) return "\n".join(lines) cloud-init-0.7.7~bzr1212/cloudinit/patcher.py0000644000000000000000000000340312704246716017147 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
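# (Added note: this module is meant to be invoked once, early in process
# startup; patch() takes the import lock and replaces
# logging.Handler.handleError so that logging failures are retried against
# a quiet stderr fallback instead of being re-raised or re-reported.)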
import imp import logging import sys # Default fallback format FALL_FORMAT = ('FALLBACK: %(asctime)s - %(filename)s[%(levelname)s]: ' + '%(message)s') class QuietStreamHandler(logging.StreamHandler): def handleError(self, record): pass def _patch_logging(): # Replace 'handleError' with one that will be more # tolerant of errors in that it can avoid # re-notifying on exceptions and when errors # do occur, it can at least try to write to # sys.stderr using a fallback logger fallback_handler = QuietStreamHandler(sys.stderr) fallback_handler.setFormatter(logging.Formatter(FALL_FORMAT)) def handleError(self, record): try: fallback_handler.handle(record) fallback_handler.flush() except IOError: pass setattr(logging.Handler, 'handleError', handleError) def patch(): imp.acquire_lock() try: _patch_logging() finally: imp.release_lock() cloud-init-0.7.7~bzr1212/cloudinit/registry.py0000644000000000000000000000201212704246716017364 0ustar 00000000000000# Copyright 2015 Canonical Ltd. # This file is part of cloud-init. See LICENCE file for license information. # # vi: ts=4 expandtab import copy class DictRegistry(object): """A simple registry for a mapping of objects.""" def __init__(self): self.reset() def reset(self): self._items = {} def register_item(self, key, item): """Add item to the registry.""" if key in self._items: raise ValueError( 'Item already registered with key {0}'.format(key)) self._items[key] = item def unregister_item(self, key, force=True): """Remove item from the registry.""" if key in self._items: del self._items[key] elif not force: raise KeyError("%s: key not present to unregister" % key) @property def registered_items(self): """All the items that have been registered. This cannot be used to modify the contents of the registry. """ return copy.copy(self._items) cloud-init-0.7.7~bzr1212/cloudinit/reporting/0000755000000000000000000000000012704246716017160 5ustar 00000000000000cloud-init-0.7.7~bzr1212/cloudinit/safeyaml.py0000644000000000000000000000204412704246716017322 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # # Author: Scott Moser # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import yaml class _CustomSafeLoader(yaml.SafeLoader): def construct_python_unicode(self, node): return self.construct_scalar(node) _CustomSafeLoader.add_constructor( u'tag:yaml.org,2002:python/unicode', _CustomSafeLoader.construct_python_unicode) def load(blob): return(yaml.load(blob, Loader=_CustomSafeLoader)) cloud-init-0.7.7~bzr1212/cloudinit/settings.py0000644000000000000000000000411612704246716017363 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . # Set and read for determining the cloud config file location CFG_ENV_NAME = "CLOUD_CFG" # This is expected to be a yaml formatted file CLOUD_CONFIG = '/etc/cloud/cloud.cfg' # What u get if no config is provided CFG_BUILTIN = { 'datasource_list': [ 'NoCloud', 'ConfigDrive', 'OpenNebula', 'Azure', 'AltCloud', 'OVF', 'MAAS', 'GCE', 'OpenStack', 'Ec2', 'CloudSigma', 'CloudStack', 'SmartOS', 'Bigstep', # At the end to act as a 'catch' when none of the above work... 'None', ], 'def_log_file': '/var/log/cloud-init.log', 'log_cfgs': [], 'syslog_fix_perms': ['syslog:adm', 'root:adm'], 'system_info': { 'paths': { 'cloud_dir': '/var/lib/cloud', 'templates_dir': '/etc/cloud/templates/', }, 'distro': 'ubuntu', }, 'vendor_data': {'enabled': True, 'prefix': []}, } # Valid frequencies of handlers/modules PER_INSTANCE = "once-per-instance" PER_ALWAYS = "always" PER_ONCE = "once" # Used to sanity check incoming handlers/modules frequencies FREQUENCIES = [PER_INSTANCE, PER_ALWAYS, PER_ONCE] cloud-init-0.7.7~bzr1212/cloudinit/signal_handler.py0000644000000000000000000000450512704246716020477 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import inspect import signal import sys from six import StringIO from cloudinit import log as logging from cloudinit import util from cloudinit import version as vr LOG = logging.getLogger(__name__) BACK_FRAME_TRACE_DEPTH = 3 EXIT_FOR = { signal.SIGINT: ('Cloud-init %(version)s received SIGINT, exiting...', 1), signal.SIGTERM: ('Cloud-init %(version)s received SIGTERM, exiting...', 1), # Can't be caught... 
# signal.SIGKILL: ('Cloud-init killed, exiting...', 1), signal.SIGABRT: ('Cloud-init %(version)s received SIGABRT, exiting...', 1), } def _pprint_frame(frame, depth, max_depth, contents): if depth > max_depth or not frame: return frame_info = inspect.getframeinfo(frame) prefix = " " * (depth * 2) contents.write("%sFilename: %s\n" % (prefix, frame_info.filename)) contents.write("%sFunction: %s\n" % (prefix, frame_info.function)) contents.write("%sLine number: %s\n" % (prefix, frame_info.lineno)) _pprint_frame(frame.f_back, depth + 1, max_depth, contents) def _handle_exit(signum, frame): (msg, rc) = EXIT_FOR[signum] msg = msg % ({'version': vr.version()}) contents = StringIO() contents.write("%s\n" % (msg)) _pprint_frame(frame, 1, BACK_FRAME_TRACE_DEPTH, contents) util.multi_log(contents.getvalue(), console=True, stderr=False, log=LOG) sys.exit(rc) def attach_handlers(): sigs_attached = 0 for signum in EXIT_FOR.keys(): signal.signal(signum, _handle_exit) sigs_attached += len(EXIT_FOR) return sigs_attached cloud-init-0.7.7~bzr1212/cloudinit/sources/0000755000000000000000000000000012704246716016632 5ustar 00000000000000cloud-init-0.7.7~bzr1212/cloudinit/ssh_util.py0000644000000000000000000002475012704246716017363 0ustar 00000000000000#!/usr/bin/python # vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # # Author: Scott Moser # Author: Juerg Hafliger # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import os import pwd from cloudinit import log as logging from cloudinit import util LOG = logging.getLogger(__name__) # See: man sshd_config DEF_SSHD_CFG = "/etc/ssh/sshd_config" # taken from openssh source key.c/key_type_from_name VALID_KEY_TYPES = ( "rsa", "dsa", "ssh-rsa", "ssh-dss", "ecdsa", "ssh-rsa-cert-v00@openssh.com", "ssh-dss-cert-v00@openssh.com", "ssh-rsa-cert-v00@openssh.com", "ssh-dss-cert-v00@openssh.com", "ssh-rsa-cert-v01@openssh.com", "ssh-dss-cert-v01@openssh.com", "ecdsa-sha2-nistp256-cert-v01@openssh.com", "ecdsa-sha2-nistp384-cert-v01@openssh.com", "ecdsa-sha2-nistp521-cert-v01@openssh.com") class AuthKeyLine(object): def __init__(self, source, keytype=None, base64=None, comment=None, options=None): self.base64 = base64 self.comment = comment self.options = options self.keytype = keytype self.source = source def valid(self): return (self.base64 and self.keytype) def __str__(self): toks = [] if self.options: toks.append(self.options) if self.keytype: toks.append(self.keytype) if self.base64: toks.append(self.base64) if self.comment: toks.append(self.comment) if not toks: return self.source else: return ' '.join(toks) class AuthKeyLineParser(object): """ AUTHORIZED_KEYS FILE FORMAT AuthorizedKeysFile specifies the file containing public keys for public key authentication; if none is specified, the default is ~/.ssh/authorized_keys. Each line of the file contains one key (empty (because of the size of the public key encoding) up to a limit of 8 kilo- bytes, which permits DSA keys up to 8 kilobits and RSA keys up to 16 kilobits. 
You don't want to type them in; instead, copy the identity.pub, id_dsa.pub, or the id_rsa.pub file and edit it. sshd enforces a minimum RSA key modulus size for protocol 1 and protocol 2 keys of 768 bits. The options (if present) consist of comma-separated option specifica- tions. No spaces are permitted, except within double quotes. The fol- lowing option specifications are supported (note that option keywords are case-insensitive): """ def _extract_options(self, ent): """ The options (if present) consist of comma-separated option specifica- tions. No spaces are permitted, except within double quotes. Note that option keywords are case-insensitive. """ quoted = False i = 0 while (i < len(ent) and ((quoted) or (ent[i] not in (" ", "\t")))): curc = ent[i] if i + 1 >= len(ent): i = i + 1 break nextc = ent[i + 1] if curc == "\\" and nextc == '"': i = i + 1 elif curc == '"': quoted = not quoted i = i + 1 options = ent[0:i] # Return the rest of the string in 'remain' remain = ent[i:].lstrip() return (options, remain) def parse(self, src_line, options=None): # modeled after opensshes auth2-pubkey.c:user_key_allowed2 line = src_line.rstrip("\r\n") if line.startswith("#") or line.strip() == '': return AuthKeyLine(src_line) def parse_ssh_key(ent): # return ketype, key, [comment] toks = ent.split(None, 2) if len(toks) < 2: raise TypeError("To few fields: %s" % len(toks)) if toks[0] not in VALID_KEY_TYPES: raise TypeError("Invalid keytype %s" % toks[0]) # valid key type and 2 or 3 fields: if len(toks) == 2: # no comment in line toks.append("") return toks ent = line.strip() try: (keytype, base64, comment) = parse_ssh_key(ent) except TypeError: (keyopts, remain) = self._extract_options(ent) if options is None: options = keyopts try: (keytype, base64, comment) = parse_ssh_key(remain) except TypeError: return AuthKeyLine(src_line) return AuthKeyLine(src_line, keytype=keytype, base64=base64, comment=comment, options=options) def parse_authorized_keys(fname): lines = [] try: if os.path.isfile(fname): lines = util.load_file(fname).splitlines() except (IOError, OSError): util.logexc(LOG, "Error reading lines from %s", fname) lines = [] parser = AuthKeyLineParser() contents = [] for line in lines: contents.append(parser.parse(line)) return contents def update_authorized_keys(old_entries, keys): to_add = list(keys) for i in range(0, len(old_entries)): ent = old_entries[i] if not ent.valid(): continue # Replace those with the same base64 for k in keys: if not ent.valid(): continue if k.base64 == ent.base64: # Replace it with our better one ent = k # Don't add it later if k in to_add: to_add.remove(k) old_entries[i] = ent # Now append any entries we did not match above for key in to_add: old_entries.append(key) # Now format them back to strings... lines = [str(b) for b in old_entries] # Ensure it ends with a newline lines.append('') return '\n'.join(lines) def users_ssh_info(username): pw_ent = pwd.getpwnam(username) if not pw_ent or not pw_ent.pw_dir: raise RuntimeError("Unable to get ssh info for user %r" % (username)) return (os.path.join(pw_ent.pw_dir, '.ssh'), pw_ent) def extract_authorized_keys(username): (ssh_dir, pw_ent) = users_ssh_info(username) auth_key_fn = None with util.SeLinuxGuard(ssh_dir, recursive=True): try: # The 'AuthorizedKeysFile' may contain tokens # of the form %T which are substituted during connection set-up. 
# The following tokens are defined: %% is replaced by a literal # '%', %h is replaced by the home directory of the user being # authenticated and %u is replaced by the username of that user. ssh_cfg = parse_ssh_config_map(DEF_SSHD_CFG) auth_key_fn = ssh_cfg.get("authorizedkeysfile", '').strip() if not auth_key_fn: auth_key_fn = "%h/.ssh/authorized_keys" auth_key_fn = auth_key_fn.replace("%h", pw_ent.pw_dir) auth_key_fn = auth_key_fn.replace("%u", username) auth_key_fn = auth_key_fn.replace("%%", '%') if not auth_key_fn.startswith('/'): auth_key_fn = os.path.join(pw_ent.pw_dir, auth_key_fn) except (IOError, OSError): # Give up and use a default key filename auth_key_fn = os.path.join(ssh_dir, 'authorized_keys') util.logexc(LOG, "Failed extracting 'AuthorizedKeysFile' in ssh " "config from %r, using 'AuthorizedKeysFile' file " "%r instead", DEF_SSHD_CFG, auth_key_fn) return (auth_key_fn, parse_authorized_keys(auth_key_fn)) def setup_user_keys(keys, username, options=None): # Make sure the users .ssh dir is setup accordingly (ssh_dir, pwent) = users_ssh_info(username) if not os.path.isdir(ssh_dir): util.ensure_dir(ssh_dir, mode=0o700) util.chownbyid(ssh_dir, pwent.pw_uid, pwent.pw_gid) # Turn the 'update' keys given into actual entries parser = AuthKeyLineParser() key_entries = [] for k in keys: key_entries.append(parser.parse(str(k), options=options)) # Extract the old and make the new (auth_key_fn, auth_key_entries) = extract_authorized_keys(username) with util.SeLinuxGuard(ssh_dir, recursive=True): content = update_authorized_keys(auth_key_entries, key_entries) util.ensure_dir(os.path.dirname(auth_key_fn), mode=0o700) util.write_file(auth_key_fn, content, mode=0o600) util.chownbyid(auth_key_fn, pwent.pw_uid, pwent.pw_gid) class SshdConfigLine(object): def __init__(self, line, k=None, v=None): self.line = line self._key = k self.value = v @property def key(self): if self._key is None: return None # Keywords are case-insensitive return self._key.lower() def __str__(self): if self._key is None: return str(self.line) else: v = str(self._key) if self.value: v += " " + str(self.value) return v def parse_ssh_config(fname): # See: man sshd_config # The file contains keyword-argument pairs, one per line. # Lines starting with '#' and empty lines are interpreted as comments. # Note: key-words are case-insensitive and arguments are case-sensitive lines = [] if not os.path.isfile(fname): return lines for line in util.load_file(fname).splitlines(): line = line.strip() if not line or line.startswith("#"): lines.append(SshdConfigLine(line)) continue try: key, val = line.split(None, 1) except ValueError: key, val = line.split('=', 1) lines.append(SshdConfigLine(line, key, val)) return lines def parse_ssh_config_map(fname): lines = parse_ssh_config(fname) if not lines: return {} ret = {} for line in lines: if not line.key: continue ret[line.key] = line.value return ret cloud-init-0.7.7~bzr1212/cloudinit/stages.py0000644000000000000000000007735712704246716017032 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012, 2013 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import copy import os import sys import six from six.moves import cPickle as pickle from cloudinit.settings import (PER_INSTANCE, FREQUENCIES, CLOUD_CONFIG) from cloudinit import handlers # Default handlers (used if not overridden) from cloudinit.handlers import boot_hook as bh_part from cloudinit.handlers import cloud_config as cc_part from cloudinit.handlers import shell_script as ss_part from cloudinit.handlers import upstart_job as up_part from cloudinit import cloud from cloudinit import config from cloudinit import distros from cloudinit import helpers from cloudinit import importer from cloudinit import log as logging from cloudinit import net from cloudinit import sources from cloudinit import type_utils from cloudinit import util from cloudinit.reporting import events LOG = logging.getLogger(__name__) NULL_DATA_SOURCE = None class Init(object): def __init__(self, ds_deps=None, reporter=None): if ds_deps is not None: self.ds_deps = ds_deps else: self.ds_deps = [sources.DEP_FILESYSTEM, sources.DEP_NETWORK] # Created on first use self._cfg = None self._paths = None self._distro = None # Changed only when a fetch occurs self.datasource = NULL_DATA_SOURCE self.ds_restored = False if reporter is None: reporter = events.ReportEventStack( name="init-reporter", description="init-desc", reporting_enabled=False) self.reporter = reporter def _reset(self, reset_ds=False): # Recreated on access self._cfg = None self._paths = None self._distro = None if reset_ds: self.datasource = NULL_DATA_SOURCE self.ds_restored = False @property def distro(self): if not self._distro: # Try to find the right class to use system_config = self._extract_cfg('system') distro_name = system_config.pop('distro', 'ubuntu') distro_cls = distros.fetch(distro_name) LOG.debug("Using distro class %s", distro_cls) self._distro = distro_cls(distro_name, system_config, self.paths) # If we have an active datasource we need to adjust # said datasource and move its distro/system config # from whatever it was to a new set... 
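            # (Added note: a datasource restored from the object cache was
            # pickled together with the old distro and system config, so it
            # is re-pointed at the freshly constructed ones here.)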
if self.datasource is not NULL_DATA_SOURCE: self.datasource.distro = self._distro self.datasource.sys_cfg = system_config return self._distro @property def cfg(self): return self._extract_cfg('restricted') def _extract_cfg(self, restriction): # Ensure actually read self.read_cfg() # Nobody gets the real config ocfg = copy.deepcopy(self._cfg) if restriction == 'restricted': ocfg.pop('system_info', None) elif restriction == 'system': ocfg = util.get_cfg_by_path(ocfg, ('system_info',), {}) elif restriction == 'paths': ocfg = util.get_cfg_by_path(ocfg, ('system_info', 'paths'), {}) if not isinstance(ocfg, (dict)): ocfg = {} return ocfg @property def paths(self): if not self._paths: path_info = self._extract_cfg('paths') self._paths = helpers.Paths(path_info, self.datasource) return self._paths def _initial_subdirs(self): c_dir = self.paths.cloud_dir initial_dirs = [ c_dir, os.path.join(c_dir, 'scripts'), os.path.join(c_dir, 'scripts', 'per-instance'), os.path.join(c_dir, 'scripts', 'per-once'), os.path.join(c_dir, 'scripts', 'per-boot'), os.path.join(c_dir, 'scripts', 'vendor'), os.path.join(c_dir, 'seed'), os.path.join(c_dir, 'instances'), os.path.join(c_dir, 'handlers'), os.path.join(c_dir, 'sem'), os.path.join(c_dir, 'data'), ] return initial_dirs def purge_cache(self, rm_instance_lnk=False): rm_list = [] rm_list.append(self.paths.boot_finished) if rm_instance_lnk: rm_list.append(self.paths.instance_link) for f in rm_list: util.del_file(f) return len(rm_list) def initialize(self): self._initialize_filesystem() def _initialize_filesystem(self): util.ensure_dirs(self._initial_subdirs()) log_file = util.get_cfg_option_str(self.cfg, 'def_log_file') if log_file: util.ensure_file(log_file) perms = self.cfg.get('syslog_fix_perms') if not perms: perms = {} if not isinstance(perms, list): perms = [perms] error = None for perm in perms: u, g = util.extract_usergroup(perm) try: util.chownbyname(log_file, u, g) return except OSError as e: error = e LOG.warn("Failed changing perms on '%s'. tried: %s. %s", log_file, ','.join(perms), error) def read_cfg(self, extra_fns=None): # None check so that we don't keep on re-loading if empty if self._cfg is None: self._cfg = self._read_cfg(extra_fns) # LOG.debug("Loaded 'init' config %s", self._cfg) def _read_cfg(self, extra_fns): no_cfg_paths = helpers.Paths({}, self.datasource) merger = helpers.ConfigMerger(paths=no_cfg_paths, datasource=self.datasource, additional_fns=extra_fns, base_cfg=fetch_base_config()) return merger.cfg def _restore_from_cache(self): # We try to restore from a current link and static path # by using the instance link, if purge_cache was called # the file wont exist. return _pkl_load(self.paths.get_ipath_cur('obj_pkl')) def _write_to_cache(self): if self.datasource is NULL_DATA_SOURCE: return False return _pkl_store(self.datasource, self.paths.get_ipath_cur("obj_pkl")) def _get_datasources(self): # Any config provided??? 
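        # (Illustrative cloud.cfg snippet, an assumption of typical usage:
        #
        #   datasource_list: [ NoCloud, ConfigDrive, None ]
        #   datasource_pkg_list: [ my_extra_sources ]
        #
        # 'my_extra_sources' is a made-up package name; 'None' acts as the
        # catch-all datasource.)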
pkg_list = self.cfg.get('datasource_pkg_list') or [] # Add the defaults at the end for n in ['', type_utils.obj_name(sources)]: if n not in pkg_list: pkg_list.append(n) cfg_list = self.cfg.get('datasource_list') or [] return (cfg_list, pkg_list) def _get_data_source(self, existing): if self.datasource is not NULL_DATA_SOURCE: return self.datasource with events.ReportEventStack( name="check-cache", description="attempting to read from cache [%s]" % existing, parent=self.reporter) as myrep: ds = self._restore_from_cache() if ds and existing == "trust": myrep.description = "restored from cache: %s" % ds elif ds and existing == "check": if (hasattr(ds, 'check_instance_id') and ds.check_instance_id(self.cfg)): myrep.description = "restored from checked cache: %s" % ds else: myrep.description = "cache invalid in datasource: %s" % ds ds = None else: myrep.description = "no cache found" self.ds_restored = bool(ds) LOG.debug(myrep.description) if not ds: util.del_file(self.paths.instance_link) (cfg_list, pkg_list) = self._get_datasources() # Deep copy so that user-data handlers can not modify # (which will affect user-data handlers down the line...) (ds, dsname) = sources.find_source(self.cfg, self.distro, self.paths, copy.deepcopy(self.ds_deps), cfg_list, pkg_list, self.reporter) LOG.info("Loaded datasource %s - %s", dsname, ds) self.datasource = ds # Ensure we adjust our path members datasource # now that we have one (thus allowing ipath to be used) self._reset() return ds def _get_instance_subdirs(self): return ['handlers', 'scripts', 'sem'] def _get_ipath(self, subname=None): # Force a check to see if anything # actually comes back, if not # then a datasource has not been assigned... instance_dir = self.paths.get_ipath(subname) if not instance_dir: raise RuntimeError(("No instance directory is available." " Has a datasource been fetched??")) return instance_dir def _reflect_cur_instance(self): # Remove the old symlink and attach a new one so # that further reads/writes connect into the right location idir = self._get_ipath() util.del_file(self.paths.instance_link) util.sym_link(idir, self.paths.instance_link) # Ensures these dirs exist dir_list = [] for d in self._get_instance_subdirs(): dir_list.append(os.path.join(idir, d)) util.ensure_dirs(dir_list) # Write out information on what is being used for the current instance # and what may have been used for a previous instance... dp = self.paths.get_cpath('data') # Write what the datasource was and is.. ds = "%s: %s" % (type_utils.obj_name(self.datasource), self.datasource) previous_ds = None ds_fn = os.path.join(idir, 'datasource') try: previous_ds = util.load_file(ds_fn).strip() except Exception: pass if not previous_ds: previous_ds = ds util.write_file(ds_fn, "%s\n" % ds) util.write_file(os.path.join(dp, 'previous-datasource'), "%s\n" % (previous_ds)) # What the instance id was and is... 
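        # (Added note: 'previous-instance-id' and 'previous-datasource'
        # under <cloud_dir>/data let later boots cheaply detect whether the
        # underlying instance changed, without re-querying the datasource.)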
        iid = self.datasource.get_instance_id()
        previous_iid = None
        iid_fn = os.path.join(dp, 'instance-id')
        try:
            previous_iid = util.load_file(iid_fn).strip()
        except Exception:
            pass
        if not previous_iid:
            previous_iid = iid
        util.write_file(iid_fn, "%s\n" % iid)
        util.write_file(os.path.join(dp, 'previous-instance-id'),
                        "%s\n" % (previous_iid))
        # Ensure needed components are regenerated
        # after change of instance which may cause
        # change of configuration
        self._reset()
        return iid

    def fetch(self, existing="check"):
        return self._get_data_source(existing=existing)

    def instancify(self):
        return self._reflect_cur_instance()

    def cloudify(self):
        # Form the needed options to cloudify our members
        return cloud.Cloud(self.datasource, self.paths, self.cfg,
                           self.distro, helpers.Runners(self.paths),
                           reporter=self.reporter)

    def update(self):
        if not self._write_to_cache():
            return
        self._store_userdata()
        self._store_vendordata()

    def _store_userdata(self):
        raw_ud = self.datasource.get_userdata_raw()
        if raw_ud is None:
            raw_ud = b''
        util.write_file(self._get_ipath('userdata_raw'), raw_ud, 0o600)
        # processed userdata is a Mime message, so write it as a string.
        processed_ud = self.datasource.get_userdata()
        if processed_ud is None:
            processed_ud = ''
        util.write_file(self._get_ipath('userdata'),
                        str(processed_ud), 0o600)

    def _store_vendordata(self):
        raw_vd = self.datasource.get_vendordata_raw()
        if raw_vd is None:
            raw_vd = b''
        util.write_file(self._get_ipath('vendordata_raw'), raw_vd, 0o600)
        # processed vendor data is a Mime message, so write it as a string.
        processed_vd = self.datasource.get_vendordata()
        if processed_vd is None:
            processed_vd = ''
        util.write_file(self._get_ipath('vendordata'),
                        str(processed_vd), 0o600)

    def _default_handlers(self, opts=None):
        if opts is None:
            opts = {}
        opts.update({
            'paths': self.paths,
            'datasource': self.datasource,
        })
        # TODO(harlowja) Hmmm, should we dynamically import these??
        def_handlers = [
            cc_part.CloudConfigPartHandler(**opts),
            ss_part.ShellScriptPartHandler(**opts),
            bh_part.BootHookPartHandler(**opts),
            up_part.UpstartJobPartHandler(**opts),
        ]
        return def_handlers

    def _default_userdata_handlers(self):
        return self._default_handlers()

    def _default_vendordata_handlers(self):
        return self._default_handlers(
            opts={'script_path': 'vendor_scripts',
                  'cloud_config_path': 'vendor_cloud_config'})

    def _do_handlers(self, data_msg, c_handlers_list, frequency,
                     excluded=None):
        """
        Generalized handlers suitable for use with either vendordata
        or userdata
        """
        if excluded is None:
            excluded = []

        cdir = self.paths.get_cpath("handlers")
        idir = self._get_ipath("handlers")

        # Add the path to the plugins dir to the top of our list for
        # importing new handlers.
        #
        # Note(harlowja): instance dir should be read before cloud-dir
        for d in [cdir, idir]:
            if d and d not in sys.path:
                sys.path.insert(0, d)

        def register_handlers_in_dir(path):
            # Attempts to register any handler modules under the given path.
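            # (Illustrative part-handler module that would qualify here,
            # dropped into /var/lib/cloud/handlers as, say, myhandler.py;
            # the content-type is a made-up example:
            #
            #     def list_types():
            #         return ['text/x-my-part']
            #
            #     def handle_part(data, ctype, filename, payload):
            #         if ctype in ('__begin__', '__end__'):
            #             return
            #         # ... process the payload ...
            # )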
            if not path or not os.path.isdir(path):
                return
            potential_handlers = util.find_modules(path)
            for (fname, mod_name) in potential_handlers.items():
                try:
                    mod_locs, looked_locs = importer.find_module(
                        mod_name, [''], ['list_types', 'handle_part'])
                    if not mod_locs:
                        LOG.warn("Could not find a valid user-data handler"
                                 " named %s in file %s (searched %s)",
                                 mod_name, fname, looked_locs)
                        continue
                    mod = importer.import_module(mod_locs[0])
                    mod = handlers.fixup_handler(mod)
                    types = c_handlers.register(mod)
                    if types:
                        LOG.debug("Added custom handler for %s [%s] from %s",
                                  types, mod, fname)
                except Exception:
                    util.logexc(LOG, "Failed to register handler from %s",
                                fname)

        # This keeps track of all the active handlers
        c_handlers = helpers.ContentHandlers()

        # Add any handlers in the cloud-dir
        register_handlers_in_dir(cdir)

        # Register any other handlers that come from the default set. This
        # is done after the cloud-dir handlers so that the cdir modules can
        # take over the default user-data handler content-types.
        for mod in c_handlers_list:
            types = c_handlers.register(mod, overwrite=False)
            if types:
                LOG.debug("Added default handler for %s from %s", types, mod)

        # Form our cloud interface
        data = self.cloudify()

        def init_handlers():
            # Init the handlers first
            for (_ctype, mod) in c_handlers.items():
                if mod in c_handlers.initialized:
                    # Avoid initing the same module twice (if said module
                    # is registered to more than one content-type).
                    continue
                handlers.call_begin(mod, data, frequency)
                c_handlers.initialized.append(mod)

        def walk_handlers(excluded):
            # Walk the user data
            part_data = {
                'handlers': c_handlers,
                # Any new handlers that are encountered get written here
                'handlerdir': idir,
                'data': data,
                # The default frequency if handlers don't have one
                'frequency': frequency,
                # This will be used when new handlers are found
                # to help write their contents to files with numbered
                # names...
                'handlercount': 0,
                'excluded': excluded,
            }
            handlers.walk(data_msg, handlers.walker_callback, data=part_data)

        def finalize_handlers():
            # Give callbacks opportunity to finalize
            for (_ctype, mod) in c_handlers.items():
                if mod not in c_handlers.initialized:
                    # Said module was never inited in the first place, so
                    # let's not attempt to finalize those that never got
                    # called.
                    continue
                c_handlers.initialized.remove(mod)
                try:
                    handlers.call_end(mod, data, frequency)
                except Exception:
                    util.logexc(LOG, "Failed to finalize handler: %s", mod)

        try:
            init_handlers()
            walk_handlers(excluded)
        finally:
            finalize_handlers()

    def consume_data(self, frequency=PER_INSTANCE):
        # Consume the userdata first, because we want to let the part
        # handlers run first (for merging stuff)
        with events.ReportEventStack("consume-user-data",
                                     "reading and applying user-data",
                                     parent=self.reporter):
            self._consume_userdata(frequency)
        with events.ReportEventStack("consume-vendor-data",
                                     "reading and applying vendor-data",
                                     parent=self.reporter):
            self._consume_vendordata(frequency)

        # Perform post-consumption adjustments so that
        # modules that run during the init stage reflect
        # this consumed set.
        #
        # They will be recreated on future access...
        self._reset()
        # Note(harlowja): the 'active' datasource will have
        # references to the previous config, distro, paths
        # objects before the load of the userdata happened,
        # this is expected.

    def _consume_vendordata(self, frequency=PER_INSTANCE):
        """
        Consume the vendordata and run the part handlers on it
        """
        # User-data should have been consumed first.
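        # For illustration, the setting consulted below would come from a
        # user-supplied cloud-config such as this sketch (the handler name
        # in disabled_handlers is illustrative):
        #
        #   #cloud-config
        #   vendor_data:
        #     enabled: true
        #     disabled_handlers: ['shell_script']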
        # So we merge the other available cloud-configs (everything except
        # vendor provided), and check whether or not we should consume
        # vendor data at all. That gives the user or system a chance to
        # override.
        if not self.datasource.get_vendordata_raw():
            LOG.debug("no vendordata from datasource")
            return

        _cc_merger = helpers.ConfigMerger(paths=self._paths,
                                          datasource=self.datasource,
                                          additional_fns=[],
                                          base_cfg=self.cfg,
                                          include_vendor=False)
        vdcfg = _cc_merger.cfg.get('vendor_data', {})

        if not isinstance(vdcfg, dict):
            vdcfg = {'enabled': False}
            LOG.warn("invalid 'vendor_data' setting. resetting to: %s",
                     vdcfg)

        enabled = vdcfg.get('enabled')
        no_handlers = vdcfg.get('disabled_handlers', None)

        if not util.is_true(enabled):
            LOG.debug("vendordata consumption is disabled.")
            return

        LOG.debug("vendor data will be consumed. disabled_handlers=%s",
                  no_handlers)

        # Ensure vendordata source fetched before activation (just in case)
        vendor_data_msg = self.datasource.get_vendordata()

        # This keeps track of all the active handlers, while excluding what
        # the user doesn't want run, i.e. boot_hook, cloud_config,
        # shell_script
        c_handlers_list = self._default_vendordata_handlers()

        # Run the handlers
        self._do_handlers(vendor_data_msg, c_handlers_list, frequency,
                          excluded=no_handlers)

    def _consume_userdata(self, frequency=PER_INSTANCE):
        """
        Consume the userdata and run the part handlers
        """
        # Ensure datasource fetched before activation (just in case)
        user_data_msg = self.datasource.get_userdata(True)

        # This keeps track of all the active handlers
        c_handlers_list = self._default_handlers()

        # Run the handlers
        self._do_handlers(user_data_msg, c_handlers_list, frequency)

    def _find_networking_config(self):
        disable_file = os.path.join(
            self.paths.get_cpath('data'), 'upgraded-network')
        if os.path.exists(disable_file):
            return (None, disable_file)

        cmdline_cfg = ('cmdline', net.read_kernel_cmdline_config())
        dscfg = ('ds', None)
        if self.datasource and hasattr(self.datasource, 'network_config'):
            dscfg = ('ds', self.datasource.network_config)
        sys_cfg = ('system_cfg', self.cfg.get('network'))

        for loc, ncfg in (cmdline_cfg, dscfg, sys_cfg):
            if net.is_disabled_cfg(ncfg):
                LOG.debug("network config disabled by %s", loc)
                return (None, loc)
            if ncfg:
                return (ncfg, loc)
        return (net.generate_fallback_config(), "fallback")

    def apply_network_config(self):
        netcfg, src = self._find_networking_config()
        if netcfg is None:
            LOG.info("network config is disabled by %s", src)
            return

        LOG.info("Applying network configuration from %s: %s", src, netcfg)
        try:
            return self.distro.apply_network_config(netcfg)
        except NotImplementedError:
            LOG.warn("distro '%s' does not implement apply_network_config. "
                     "networking may not be configured properly.",
                     self.distro)
            return


class Modules(object):
    def __init__(self, init, cfg_files=None, reporter=None):
        self.init = init
        self.cfg_files = cfg_files
        # Created on first use
        self._cached_cfg = None
        if reporter is None:
            reporter = events.ReportEventStack(
                name="module-reporter", description="module-desc",
                reporting_enabled=False)
        self.reporter = reporter

    @property
    def cfg(self):
        # None check to avoid empty case causing re-reading
        if self._cached_cfg is None:
            merger = helpers.ConfigMerger(paths=self.init.paths,
                                          datasource=self.init.datasource,
                                          additional_fns=self.cfg_files,
                                          base_cfg=self.init.cfg)
            self._cached_cfg = merger.cfg
            # LOG.debug("Loading 'module' config %s", self._cached_cfg)
        # Only give out a copy so that others can't modify this...
        return copy.deepcopy(self._cached_cfg)

    def _read_modules(self, name):
        module_list = []
        if name not in self.cfg:
            return module_list
        cfg_mods = self.cfg[name]
        # Create 'module_list', an array of hashes
        # Where hash['mod'] = module name
        #       hash['freq'] = frequency
        #       hash['args'] = arguments
        for item in cfg_mods:
            if not item:
                continue
            if isinstance(item, six.string_types):
                module_list.append({
                    'mod': item.strip(),
                })
            elif isinstance(item, (list)):
                contents = {}
                # Meant to fall through...
                if len(item) >= 1:
                    contents['mod'] = item[0].strip()
                if len(item) >= 2:
                    contents['freq'] = item[1].strip()
                if len(item) >= 3:
                    contents['args'] = item[2:]
                if contents:
                    module_list.append(contents)
            elif isinstance(item, (dict)):
                contents = {}
                valid = False
                if 'name' in item:
                    contents['mod'] = item['name'].strip()
                    valid = True
                if 'frequency' in item:
                    contents['freq'] = item['frequency'].strip()
                if 'args' in item:
                    contents['args'] = item['args'] or []
                if contents and valid:
                    module_list.append(contents)
            else:
                raise TypeError(("Failed to read '%s' item in config,"
                                 " unknown type %s") %
                                (item, type_utils.obj_name(item)))
        return module_list

    def _fixup_modules(self, raw_mods):
        mostly_mods = []
        for raw_mod in raw_mods:
            raw_name = raw_mod['mod']
            freq = raw_mod.get('freq')
            run_args = raw_mod.get('args') or []
            mod_name = config.form_module_name(raw_name)
            if not mod_name:
                continue
            if freq and freq not in FREQUENCIES:
                LOG.warn(("Config specified module %s"
                          " has an unknown frequency %s"), raw_name, freq)
                # Reset it so that when it is run it will get set to a
                # known value
                freq = None
            mod_locs, looked_locs = importer.find_module(
                mod_name, ['', type_utils.obj_name(config)], ['handle'])
            if not mod_locs:
                LOG.warn("Could not find module named %s (searched %s)",
                         mod_name, looked_locs)
                continue
            mod = config.fixup_module(importer.import_module(mod_locs[0]))
            mostly_mods.append([mod, raw_name, freq, run_args])
        return mostly_mods

    def _run_modules(self, mostly_mods):
        cc = self.init.cloudify()
        # Return which ones ran
        # and which ones failed + the exception of why it failed
        failures = []
        which_ran = []
        for (mod, name, freq, args) in mostly_mods:
            try:
                # Try the module's frequency, otherwise fall back to a
                # known one
                if not freq:
                    freq = mod.frequency
                if freq not in FREQUENCIES:
                    freq = PER_INSTANCE
                LOG.debug("Running module %s (%s) with frequency %s",
                          name, mod, freq)
                # Use the config's logger and not our own
                # TODO(harlowja): possibly check the module
                # for having a LOG attr and just give it back
                # its own logger?
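                # The three forms accepted by _read_modules above correspond
                # to config entries like these (sketch; module names and
                # frequencies are illustrative):
                #
                #   cloud_config_modules:
                #    - ssh                                  # plain string
                #    - [phone-home, once]                   # list form
                #    - {name: runcmd, frequency: always}    # dict form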
func_args = [name, self.cfg, cc, config.LOG, args] # Mark it as having started running which_ran.append(name) # This name will affect the semaphore name created run_name = "config-%s" % (name) desc = "running %s with frequency %s" % (run_name, freq) myrep = events.ReportEventStack( name=run_name, description=desc, parent=self.reporter) with myrep: ran, _r = cc.run(run_name, mod.handle, func_args, freq=freq) if ran: myrep.message = "%s ran successfully" % run_name else: myrep.message = "%s previously ran" % run_name except Exception as e: util.logexc(LOG, "Running module %s (%s) failed", name, mod) failures.append((name, e)) return (which_ran, failures) def run_single(self, mod_name, args=None, freq=None): # Form the users module 'specs' mod_to_be = { 'mod': mod_name, 'args': args, 'freq': freq, } # Now resume doing the normal fixups and running raw_mods = [mod_to_be] mostly_mods = self._fixup_modules(raw_mods) return self._run_modules(mostly_mods) def run_section(self, section_name): raw_mods = self._read_modules(section_name) mostly_mods = self._fixup_modules(raw_mods) d_name = self.init.distro.name skipped = [] forced = [] overridden = self.cfg.get('unverified_modules', []) for (mod, name, _freq, _args) in mostly_mods: worked_distros = set(mod.distros) worked_distros.update( distros.Distro.expand_osfamily(mod.osfamilies)) # module does not declare 'distros' or lists this distro if not worked_distros or d_name in worked_distros: continue if name in overridden: forced.append(name) else: skipped.append(name) if skipped: LOG.info("Skipping modules %s because they are not verified " "on distro '%s'. To run anyway, add them to " "'unverified_modules' in config.", skipped, d_name) if forced: LOG.info("running unverified_modules: %s", forced) return self._run_modules(mostly_mods) def fetch_base_config(): base_cfgs = [] default_cfg = util.get_builtin_cfg() kern_contents = util.read_cc_from_cmdline() # Kernel/cmdline parameters override system config if kern_contents: base_cfgs.append(util.load_yaml(kern_contents, default={})) # Anything in your conf.d location?? # or the 'default' cloud.cfg location??? base_cfgs.append(util.read_conf_with_confd(CLOUD_CONFIG)) # And finally the default gets to play if default_cfg: base_cfgs.append(default_cfg) return util.mergemanydict(base_cfgs) def _pkl_store(obj, fname): try: pk_contents = pickle.dumps(obj) except Exception: util.logexc(LOG, "Failed pickling datasource %s", obj) return False try: util.write_file(fname, pk_contents, omode="wb", mode=0o400) except Exception: util.logexc(LOG, "Failed pickling datasource to %s", fname) return False return True def _pkl_load(fname): pickle_contents = None try: pickle_contents = util.load_file(fname, decode=False) except Exception as e: if os.path.isfile(fname): LOG.warn("failed loading pickle in %s: %s" % (fname, e)) pass # This is allowed so just return nothing successfully loaded... if not pickle_contents: return None try: return pickle.loads(pickle_contents) except Exception: util.logexc(LOG, "Failed loading pickled blob from %s", fname) return None cloud-init-0.7.7~bzr1212/cloudinit/templater.py0000644000000000000000000001320412704246716017516 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. 
# # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import collections import re try: from Cheetah.Template import Template as CTemplate CHEETAH_AVAILABLE = True except (ImportError, AttributeError): CHEETAH_AVAILABLE = False try: import jinja2 from jinja2 import Template as JTemplate JINJA_AVAILABLE = True except (ImportError, AttributeError): JINJA_AVAILABLE = False from cloudinit import log as logging from cloudinit import type_utils as tu from cloudinit import util LOG = logging.getLogger(__name__) TYPE_MATCHER = re.compile(r"##\s*template:(.*)", re.I) BASIC_MATCHER = re.compile(r'\$\{([A-Za-z0-9_.]+)\}|\$([A-Za-z0-9_.]+)') def basic_render(content, params): """This does simple replacement of bash variable like templates. It identifies patterns like ${a} or $a and can also identify patterns like ${a.b} or $a.b which will look for a key 'b' in the dictionary rooted by key 'a'. """ def replacer(match): # Only 1 of the 2 groups will actually have a valid entry. name = match.group(1) if name is None: name = match.group(2) if name is None: raise RuntimeError("Match encountered but no valid group present") path = collections.deque(name.split(".")) selected_params = params while len(path) > 1: key = path.popleft() if not isinstance(selected_params, dict): raise TypeError("Can not traverse into" " non-dictionary '%s' of type %s while" " looking for subkey '%s'" % (selected_params, tu.obj_name(selected_params), key)) selected_params = selected_params[key] key = path.popleft() if not isinstance(selected_params, dict): raise TypeError("Can not extract key '%s' from non-dictionary" " '%s' of type %s" % (key, selected_params, tu.obj_name(selected_params))) return str(selected_params[key]) return BASIC_MATCHER.sub(replacer, content) def detect_template(text): def cheetah_render(content, params): return CTemplate(content, searchList=[params]).respond() def jinja_render(content, params): # keep_trailing_newline is in jinja2 2.7+, not 2.6 add = "\n" if content.endswith("\n") else "" return JTemplate(content, undefined=jinja2.StrictUndefined, trim_blocks=True).render(**params) + add if text.find("\n") != -1: ident, rest = text.split("\n", 1) else: ident = text rest = '' type_match = TYPE_MATCHER.match(ident) if not type_match: if not CHEETAH_AVAILABLE: LOG.warn("Cheetah not available as the default renderer for" " unknown template, reverting to the basic renderer.") return ('basic', basic_render, text) else: return ('cheetah', cheetah_render, text) else: template_type = type_match.group(1).lower().strip() if template_type not in ('jinja', 'cheetah', 'basic'): raise ValueError("Unknown template rendering type '%s' requested" % template_type) if template_type == 'jinja' and not JINJA_AVAILABLE: LOG.warn("Jinja not available as the selected renderer for" " desired template, reverting to the basic renderer.") return ('basic', basic_render, rest) elif template_type == 'jinja' and JINJA_AVAILABLE: return ('jinja', jinja_render, rest) if template_type == 
'cheetah' and not CHEETAH_AVAILABLE: LOG.warn("Cheetah not available as the selected renderer for" " desired template, reverting to the basic renderer.") return ('basic', basic_render, rest) elif template_type == 'cheetah' and CHEETAH_AVAILABLE: return ('cheetah', cheetah_render, rest) # Only thing left over is the basic renderer (it is always available). return ('basic', basic_render, rest) def render_from_file(fn, params): if not params: params = {} template_type, renderer, content = detect_template(util.load_file(fn)) LOG.debug("Rendering content of '%s' using renderer %s", fn, template_type) return renderer(content, params) def render_to_file(fn, outfn, params, mode=0o644): contents = render_from_file(fn, params) util.write_file(outfn, contents, mode=mode) def render_string(content, params): if not params: params = {} template_type, renderer, content = detect_template(content) return renderer(content, params) cloud-init-0.7.7~bzr1212/cloudinit/type_utils.py0000644000000000000000000000271512704246716017727 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import types import six if six.PY3: _NAME_TYPES = ( types.ModuleType, types.FunctionType, types.LambdaType, type, ) else: _NAME_TYPES = ( types.TypeType, types.ModuleType, types.FunctionType, types.LambdaType, types.ClassType, ) def obj_name(obj): if isinstance(obj, _NAME_TYPES): return six.text_type(obj.__name__) else: if not hasattr(obj, '__class__'): return repr(obj) else: return obj_name(obj.__class__) cloud-init-0.7.7~bzr1212/cloudinit/url_helper.py0000644000000000000000000004237512704246716017675 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
import json import os import requests import six import time from email.utils import parsedate from functools import partial from requests import exceptions import oauthlib.oauth1 as oauth1 from six.moves.urllib.parse import ( urlparse, urlunparse, quote as urlquote) from cloudinit import log as logging from cloudinit import version LOG = logging.getLogger(__name__) if six.PY2: import httplib NOT_FOUND = httplib.NOT_FOUND else: import http.client NOT_FOUND = http.client.NOT_FOUND # Check if requests has ssl support (added in requests >= 0.8.8) SSL_ENABLED = False CONFIG_ENABLED = False # This was added in 0.7 (but taken out in >=1.0) _REQ_VER = None try: from distutils.version import LooseVersion import pkg_resources _REQ = pkg_resources.get_distribution('requests') _REQ_VER = LooseVersion(_REQ.version) if _REQ_VER >= LooseVersion('0.8.8'): SSL_ENABLED = True if _REQ_VER >= LooseVersion('0.7.0') and _REQ_VER < LooseVersion('1.0.0'): CONFIG_ENABLED = True except: pass def _cleanurl(url): parsed_url = list(urlparse(url, scheme='http')) if not parsed_url[1] and parsed_url[2]: # Swap these since this seems to be a common # occurrence when given urls like 'www.google.com' parsed_url[1] = parsed_url[2] parsed_url[2] = '' return urlunparse(parsed_url) def combine_url(base, *add_ons): def combine_single(url, add_on): url_parsed = list(urlparse(url)) path = url_parsed[2] if path and not path.endswith("/"): path += "/" path += urlquote(str(add_on), safe="/:") url_parsed[2] = path return urlunparse(url_parsed) url = base for add_on in add_ons: url = combine_single(url, add_on) return url # Made to have same accessors as UrlResponse so that the # read_file_or_url can return this or that object and the # 'user' of those objects will not need to know the difference. class StringResponse(object): def __init__(self, contents, code=200): self.code = code self.headers = {} self.contents = contents self.url = None def ok(self, *args, **kwargs): if self.code != 200: return False return True def __str__(self): return self.contents class FileResponse(StringResponse): def __init__(self, path, contents, code=200): StringResponse.__init__(self, contents, code=code) self.url = path class UrlResponse(object): def __init__(self, response): self._response = response @property def contents(self): return self._response.content @property def url(self): return self._response.url def ok(self, redirects_ok=False): upper = 300 if redirects_ok: upper = 400 if self.code >= 200 and self.code < upper: return True else: return False @property def headers(self): return self._response.headers @property def code(self): return self._response.status_code def __str__(self): return self._response.text class UrlError(IOError): def __init__(self, cause, code=None, headers=None, url=None): IOError.__init__(self, str(cause)) self.cause = cause self.code = code self.headers = headers if self.headers is None: self.headers = {} self.url = url def _get_ssl_args(url, ssl_details): ssl_args = {} scheme = urlparse(url).scheme if scheme == 'https' and ssl_details: if not SSL_ENABLED: LOG.warn("SSL is not supported in requests v%s, " "cert. 
verification can not occur!", _REQ_VER) else: if 'ca_certs' in ssl_details and ssl_details['ca_certs']: ssl_args['verify'] = ssl_details['ca_certs'] else: ssl_args['verify'] = True if 'cert_file' in ssl_details and 'key_file' in ssl_details: ssl_args['cert'] = [ssl_details['cert_file'], ssl_details['key_file']] elif 'cert_file' in ssl_details: ssl_args['cert'] = str(ssl_details['cert_file']) return ssl_args def readurl(url, data=None, timeout=None, retries=0, sec_between=1, headers=None, headers_cb=None, ssl_details=None, check_status=True, allow_redirects=True, exception_cb=None): url = _cleanurl(url) req_args = { 'url': url, } req_args.update(_get_ssl_args(url, ssl_details)) req_args['allow_redirects'] = allow_redirects req_args['method'] = 'GET' if timeout is not None: req_args['timeout'] = max(float(timeout), 0) if data: req_args['method'] = 'POST' # It doesn't seem like config # was added in older library versions (or newer ones either), thus we # need to manually do the retries if it wasn't... if CONFIG_ENABLED: req_config = { 'store_cookies': False, } # Don't use the retry support built-in # since it doesn't allow for 'sleep_times' # in between tries.... # if retries: # req_config['max_retries'] = max(int(retries), 0) req_args['config'] = req_config manual_tries = 1 if retries: manual_tries = max(int(retries) + 1, 1) def_headers = { 'User-Agent': 'Cloud-Init/%s' % (version.version_string()), } if headers: def_headers.update(headers) headers = def_headers if not headers_cb: def _cb(url): return headers headers_cb = _cb if data: req_args['data'] = data if sec_between is None: sec_between = -1 excps = [] # Handle retrying ourselves since the built-in support # doesn't handle sleeping between tries... for i in range(0, manual_tries): req_args['headers'] = headers_cb(url) filtered_req_args = {} for (k, v) in req_args.items(): if k == 'data': continue filtered_req_args[k] = v try: LOG.debug("[%s/%s] open '%s' with %s configuration", i, manual_tries, url, filtered_req_args) r = requests.request(**req_args) if check_status: r.raise_for_status() LOG.debug("Read from %s (%s, %sb) after %s attempts", url, r.status_code, len(r.content), (i + 1)) # Doesn't seem like we can make it use a different # subclass for responses, so add our own backward-compat # attrs return UrlResponse(r) except exceptions.RequestException as e: if (isinstance(e, (exceptions.HTTPError)) and hasattr(e, 'response') and # This appeared in v 0.10.8 hasattr(e.response, 'status_code')): excps.append(UrlError(e, code=e.response.status_code, headers=e.response.headers, url=url)) else: excps.append(UrlError(e, url=url)) if SSL_ENABLED and isinstance(e, exceptions.SSLError): # ssl exceptions are not going to get fixed by waiting a # few seconds break if exception_cb and exception_cb(req_args.copy(), excps[-1]): # if an exception callback was given it should return None # a true-ish value means to break and re-raise the exception break if i + 1 < manual_tries and sec_between > 0: LOG.debug("Please wait %s seconds while we wait to try again", sec_between) time.sleep(sec_between) if excps: raise excps[-1] return None # Should throw before this... def wait_for_url(urls, max_wait=None, timeout=None, status_cb=None, headers_cb=None, sleep_time=1, exception_cb=None): """ urls: a list of urls to try max_wait: roughly the maximum time to wait before giving up The max time is *actually* len(urls)*timeout as each url will be tried once and given the timeout provided. 
    a number <= 0 will always result in only one try
    timeout: the timeout provided to urlopen
    status_cb: call method with string message when a url is not available
    headers_cb: call method with single argument of url to get headers
                for request.
    exception_cb: call method with 2 arguments 'msg' (per status_cb) and
                  'exception', the exception that occurred.

    The idea of this routine is to wait for the EC2 metadata service to
    come up.  On both Eucalyptus and EC2 we have seen the case where the
    instance hit the MD before the MD service was up.  EC2 seems to have
    permanently fixed this, though.

    In OpenStack, the metadata service might be painfully slow, and
    unable to avoid hitting a timeout of even up to 10 seconds or more
    (LP: #894279) for a simple GET.

    Offset those needs with the need to not hang forever (and block boot)
    on a system where cloud-init is configured to look for EC2 Metadata
    service but is not going to find one.  It is possible that the
    instance data host (169.254.169.254) may be firewalled off entirely
    for a system, meaning that the connection will block forever unless a
    timeout is set.
    """
    start_time = time.time()

    def log_status_cb(msg, exc=None):
        LOG.debug(msg)

    if status_cb is None:
        status_cb = log_status_cb

    def timeup(max_wait, start_time):
        return ((max_wait <= 0 or max_wait is None) or
                (time.time() - start_time > max_wait))

    loop_n = 0
    while True:
        sleep_time = int(loop_n / 5) + 1
        for url in urls:
            now = time.time()
            if loop_n != 0:
                if timeup(max_wait, start_time):
                    break
                if timeout and (now + timeout > (start_time + max_wait)):
                    # shorten timeout to not run way over max_time
                    timeout = int((start_time + max_wait) - now)

            reason = ""
            url_exc = None
            try:
                if headers_cb is not None:
                    headers = headers_cb(url)
                else:
                    headers = {}

                response = readurl(url, headers=headers, timeout=timeout,
                                   check_status=False)
                if not response.contents:
                    reason = "empty response [%s]" % (response.code)
                    url_exc = UrlError(ValueError(reason),
                                       code=response.code,
                                       headers=response.headers, url=url)
                elif not response.ok():
                    reason = "bad status code [%s]" % (response.code)
                    url_exc = UrlError(ValueError(reason),
                                       code=response.code,
                                       headers=response.headers, url=url)
                else:
                    return url
            except UrlError as e:
                reason = "request error [%s]" % e
                url_exc = e
            except Exception as e:
                reason = "unexpected error [%s]" % e
                url_exc = e

            time_taken = int(time.time() - start_time)
            status_msg = "Calling '%s' failed [%s/%ss]: %s" % (url,
                                                               time_taken,
                                                               max_wait,
                                                               reason)
            status_cb(status_msg)
            if exception_cb:
                # This can be used to alter the headers that will be sent
                # in the future, for example this is what the MAAS
                # datasource does.
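                # A sketch of the callback contract at this point: an
                # exception_cb may react to a failure before the next
                # attempt (the 401 handling and helper below are
                # hypothetical, not part of this module):
                #
                #   def exception_cb(msg, exception):
                #       if isinstance(exception, UrlError) and \
                #               exception.code == 401:
                #           refresh_auth_state()  # hypothetical helper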
exception_cb(msg=status_msg, exception=url_exc) if timeup(max_wait, start_time): break loop_n = loop_n + 1 LOG.debug("Please wait %s seconds while we wait to try again", sleep_time) time.sleep(sleep_time) return False class OauthUrlHelper(object): def __init__(self, consumer_key=None, token_key=None, token_secret=None, consumer_secret=None, skew_data_file="/run/oauth_skew.json"): self.consumer_key = consumer_key self.consumer_secret = consumer_secret or "" self.token_key = token_key self.token_secret = token_secret self.skew_data_file = skew_data_file self._do_oauth = True self.skew_change_limit = 5 required = (self.token_key, self.token_secret, self.consumer_key) if not any(required): self._do_oauth = False elif not all(required): raise ValueError("all or none of token_key, token_secret, or " "consumer_key can be set") old = self.read_skew_file() self.skew_data = old or {} def read_skew_file(self): if self.skew_data_file and os.path.isfile(self.skew_data_file): with open(self.skew_data_file, mode="r") as fp: return json.load(fp) return None def update_skew_file(self, host, value): # this is not atomic if not self.skew_data_file: return cur = self.read_skew_file() if cur is None: cur = {} cur[host] = value with open(self.skew_data_file, mode="w") as fp: fp.write(json.dumps(cur)) def exception_cb(self, msg, exception): if not (isinstance(exception, UrlError) and (exception.code == 403 or exception.code == 401)): return if 'date' not in exception.headers: LOG.warn("Missing header 'date' in %s response", exception.code) return date = exception.headers['date'] try: remote_time = time.mktime(parsedate(date)) except Exception as e: LOG.warn("Failed to convert datetime '%s': %s", date, e) return skew = int(remote_time - time.time()) host = urlparse(exception.url).netloc old_skew = self.skew_data.get(host, 0) if abs(old_skew - skew) > self.skew_change_limit: self.update_skew_file(host, skew) LOG.warn("Setting oauth clockskew for %s to %d", host, skew) self.skew_data[host] = skew return def headers_cb(self, url): if not self._do_oauth: return {} timestamp = None host = urlparse(url).netloc if self.skew_data and host in self.skew_data: timestamp = int(time.time()) + self.skew_data[host] return oauth_headers( url=url, consumer_key=self.consumer_key, token_key=self.token_key, token_secret=self.token_secret, consumer_secret=self.consumer_secret, timestamp=timestamp) def _wrapped(self, wrapped_func, args, kwargs): kwargs['headers_cb'] = partial( self._headers_cb, kwargs.get('headers_cb')) kwargs['exception_cb'] = partial( self._exception_cb, kwargs.get('exception_cb')) return wrapped_func(*args, **kwargs) def wait_for_url(self, *args, **kwargs): return self._wrapped(wait_for_url, args, kwargs) def readurl(self, *args, **kwargs): return self._wrapped(readurl, args, kwargs) def _exception_cb(self, extra_exception_cb, msg, exception): ret = None try: if extra_exception_cb: ret = extra_exception_cb(msg, exception) finally: self.exception_cb(msg, exception) return ret def _headers_cb(self, extra_headers_cb, url): headers = {} if extra_headers_cb: headers = extra_headers_cb(url) headers.update(self.headers_cb(url)) return headers def oauth_headers(url, consumer_key, token_key, token_secret, consumer_secret, timestamp=None): if timestamp: timestamp = str(timestamp) else: timestamp = None client = oauth1.Client( consumer_key, client_secret=consumer_secret, resource_owner_key=token_key, resource_owner_secret=token_secret, signature_method=oauth1.SIGNATURE_PLAINTEXT, timestamp=timestamp) uri, signed_headers, body 
= client.sign(url) return signed_headers cloud-init-0.7.7~bzr1212/cloudinit/user_data.py0000644000000000000000000003215212704246716017473 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import os from email.mime.base import MIMEBase from email.mime.multipart import MIMEMultipart from email.mime.nonmultipart import MIMENonMultipart from email.mime.text import MIMEText import six from cloudinit import handlers from cloudinit import log as logging from cloudinit import util LOG = logging.getLogger(__name__) # Constants copied in from the handler module NOT_MULTIPART_TYPE = handlers.NOT_MULTIPART_TYPE PART_FN_TPL = handlers.PART_FN_TPL OCTET_TYPE = handlers.OCTET_TYPE # Saves typing errors CONTENT_TYPE = 'Content-Type' # Various special content types that cause special actions TYPE_NEEDED = ["text/plain", "text/x-not-multipart"] INCLUDE_TYPES = ['text/x-include-url', 'text/x-include-once-url'] ARCHIVE_TYPES = ["text/cloud-config-archive"] UNDEF_TYPE = "text/plain" ARCHIVE_UNDEF_TYPE = "text/cloud-config" ARCHIVE_UNDEF_BINARY_TYPE = "application/octet-stream" # This seems to hit most of the gzip possible content types. DECOMP_TYPES = [ 'application/gzip', 'application/gzip-compressed', 'application/gzipped', 'application/x-compress', 'application/x-compressed', 'application/x-gunzip', 'application/x-gzip', 'application/x-gzip-compressed', ] # Msg header used to track attachments ATTACHMENT_FIELD = 'Number-Attachments' # Only the following content types can have there launch index examined # in there payload, evey other content type can still provide a header EXAMINE_FOR_LAUNCH_INDEX = ["text/cloud-config"] def _replace_header(msg, key, value): del msg[key] msg[key] = value def _set_filename(msg, filename): del msg['Content-Disposition'] msg.add_header('Content-Disposition', 'attachment', filename=str(filename)) class UserDataProcessor(object): def __init__(self, paths): self.paths = paths self.ssl_details = util.fetch_ssl_details(paths) def process(self, blob): accumulating_msg = MIMEMultipart() if isinstance(blob, list): for b in blob: self._process_msg(convert_string(b), accumulating_msg) else: self._process_msg(convert_string(blob), accumulating_msg) return accumulating_msg def _process_msg(self, base_msg, append_msg): def find_ctype(payload): return handlers.type_from_starts_with(payload) for part in base_msg.walk(): if is_skippable(part): continue ctype = None ctype_orig = part.get_content_type() payload = util.fully_decoded_payload(part) was_compressed = False # When the message states it is of a gzipped content type ensure # that we attempt to decode said payload so that the decompressed # data can be examined (instead of the compressed data). 
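            # For reference, a payload landing in this branch was typically
            # produced by something like the following sketch (content
            # illustrative) and attached with one of the DECOMP_TYPES
            # content-types, e.g. 'application/x-gzip':
            #
            #   import gzip
            #   blob = gzip.compress(b"#cloud-config\nhostname: example\n")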
if ctype_orig in DECOMP_TYPES: try: payload = util.decomp_gzip(payload, quiet=False) # At this point we don't know what the content-type is # since we just decompressed it. ctype_orig = None was_compressed = True except util.DecompressionError as e: LOG.warn("Failed decompressing payload from %s of length" " %s due to: %s", ctype_orig, len(payload), e) continue # Attempt to figure out the payloads content-type if not ctype_orig: ctype_orig = UNDEF_TYPE if ctype_orig in TYPE_NEEDED: ctype = find_ctype(payload) if ctype is None: ctype = ctype_orig # In the case where the data was compressed, we want to make sure # that we create a new message that contains the found content # type with the uncompressed content since later traversals of the # messages will expect a part not compressed. if was_compressed: maintype, subtype = ctype.split("/", 1) n_part = MIMENonMultipart(maintype, subtype) n_part.set_payload(payload) # Copy various headers from the old part to the new one, # but don't include all the headers since some are not useful # after decoding and decompression. if part.get_filename(): _set_filename(n_part, part.get_filename()) for h in ('Launch-Index',): if h in part: _replace_header(n_part, h, str(part[h])) part = n_part if ctype != ctype_orig: _replace_header(part, CONTENT_TYPE, ctype) if ctype in INCLUDE_TYPES: self._do_include(payload, append_msg) continue if ctype in ARCHIVE_TYPES: self._explode_archive(payload, append_msg) continue # TODO(harlowja): Should this be happening, shouldn't # the part header be modified and not the base? _replace_header(base_msg, CONTENT_TYPE, ctype) self._attach_part(append_msg, part) def _attach_launch_index(self, msg): header_idx = msg.get('Launch-Index', None) payload_idx = None if msg.get_content_type() in EXAMINE_FOR_LAUNCH_INDEX: try: # See if it has a launch-index field # that might affect the final header payload = util.load_yaml(msg.get_payload(decode=True)) if payload: payload_idx = payload.get('launch-index') except: pass # Header overrides contents, for now (?) or the other way around? if header_idx is not None: payload_idx = header_idx # Nothing found in payload, use header (if anything there) if payload_idx is None: payload_idx = header_idx if payload_idx is not None: try: msg.add_header('Launch-Index', str(int(payload_idx))) except (ValueError, TypeError): pass def _get_include_once_filename(self, entry): entry_fn = util.hash_blob(entry, 'md5', 64) return os.path.join(self.paths.get_ipath_cur('data'), 'urlcache', entry_fn) def _process_before_attach(self, msg, attached_id): if not msg.get_filename(): _set_filename(msg, PART_FN_TPL % (attached_id)) self._attach_launch_index(msg) def _do_include(self, content, append_msg): # Include a list of urls, one per line # also support '#include ' # or #include-once '' include_once_on = False for line in content.splitlines(): lc_line = line.lower() if lc_line.startswith("#include-once"): line = line[len("#include-once"):].lstrip() # Every following include will now # not be refetched.... but will be # re-read from a local urlcache (if it worked) include_once_on = True elif lc_line.startswith("#include"): line = line[len("#include"):].lstrip() # Disable the include once if it was on # if it wasn't, then this has no effect. 
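            # An include document handled by this loop looks like the
            # following sketch (URLs illustrative); each non-comment line is
            # fetched and processed recursively:
            #
            #   #include
            #   http://example.com/common-config.txt
            #   #include-once
            #   http://example.com/fetch-only-once.txt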
include_once_on = False if line.startswith("#"): continue include_url = line.strip() if not include_url: continue include_once_fn = None content = None if include_once_on: include_once_fn = self._get_include_once_filename(include_url) if include_once_on and os.path.isfile(include_once_fn): content = util.load_file(include_once_fn) else: resp = util.read_file_or_url(include_url, ssl_details=self.ssl_details) if include_once_on and resp.ok(): util.write_file(include_once_fn, resp.contents, mode=0o600) if resp.ok(): content = resp.contents else: LOG.warn(("Fetching from %s resulted in" " a invalid http code of %s"), include_url, resp.code) if content is not None: new_msg = convert_string(content) self._process_msg(new_msg, append_msg) def _explode_archive(self, archive, append_msg): entries = util.load_yaml(archive, default=[], allowed=(list, set)) for ent in entries: # ent can be one of: # dict { 'filename' : 'value', 'content' : # 'value', 'type' : 'value' } # filename and type not be present # or # scalar(payload) if isinstance(ent, six.string_types): ent = {'content': ent} if not isinstance(ent, (dict)): # TODO(harlowja) raise? continue content = ent.get('content', '') mtype = ent.get('type') if not mtype: default = ARCHIVE_UNDEF_TYPE if isinstance(content, six.binary_type): default = ARCHIVE_UNDEF_BINARY_TYPE mtype = handlers.type_from_starts_with(content, default) maintype, subtype = mtype.split('/', 1) if maintype == "text": if isinstance(content, six.binary_type): content = content.decode() msg = MIMEText(content, _subtype=subtype) else: msg = MIMEBase(maintype, subtype) msg.set_payload(content) if 'filename' in ent: _set_filename(msg, ent['filename']) if 'launch-index' in ent: msg.add_header('Launch-Index', str(ent['launch-index'])) for header in list(ent.keys()): if header.lower() in ('content', 'filename', 'type', 'launch-index', 'content-disposition', ATTACHMENT_FIELD.lower(), CONTENT_TYPE.lower()): continue msg.add_header(header, ent[header]) self._attach_part(append_msg, msg) def _multi_part_count(self, outer_msg, new_count=None): """ Return the number of attachments to this MIMEMultipart by looking at its 'Number-Attachments' header. """ if ATTACHMENT_FIELD not in outer_msg: outer_msg[ATTACHMENT_FIELD] = '0' if new_count is not None: _replace_header(outer_msg, ATTACHMENT_FIELD, str(new_count)) fetched_count = 0 try: fetched_count = int(outer_msg.get(ATTACHMENT_FIELD)) except (ValueError, TypeError): _replace_header(outer_msg, ATTACHMENT_FIELD, str(fetched_count)) return fetched_count def _attach_part(self, outer_msg, part): """ Attach a message to an outer message. outermsg must be a MIMEMultipart. Modifies a header in the outer message to keep track of number of attachments. 
""" part_count = self._multi_part_count(outer_msg) self._process_before_attach(part, part_count + 1) outer_msg.attach(part) self._multi_part_count(outer_msg, part_count + 1) def is_skippable(part): # multipart/* are just containers part_maintype = part.get_content_maintype() or '' if part_maintype.lower() == 'multipart': return True return False # Coverts a raw string into a mime message def convert_string(raw_data, headers=None): if not raw_data: raw_data = '' if not headers: headers = {} data = util.decode_binary(util.decomp_gzip(raw_data)) if "mime-version:" in data[0:4096].lower(): msg = util.message_from_string(data) for (key, val) in headers.items(): _replace_header(msg, key, val) else: mtype = headers.get(CONTENT_TYPE, NOT_MULTIPART_TYPE) maintype, subtype = mtype.split("/", 1) msg = MIMEBase(maintype, subtype, *headers) msg.set_payload(data) return msg cloud-init-0.7.7~bzr1212/cloudinit/util.py0000644000000000000000000020416312704246716016504 0ustar 00000000000000# vi: ts=4 expandtab # # Copyright (C) 2012 Canonical Ltd. # Copyright (C) 2012, 2013 Hewlett-Packard Development Company, L.P. # Copyright (C) 2012 Yahoo! Inc. # # Author: Scott Moser # Author: Juerg Haefliger # Author: Joshua Harlow # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License version 3, as # published by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see . import contextlib import copy as obj_copy import ctypes import email import errno import glob import grp import gzip import hashlib import json import os import os.path import platform import pwd import random import re import shutil import socket import stat import string import subprocess import sys import tempfile import time from base64 import b64decode, b64encode from six.moves.urllib import parse as urlparse import six import yaml from cloudinit import importer from cloudinit import log as logging from cloudinit import mergers from cloudinit import safeyaml from cloudinit import type_utils from cloudinit import url_helper from cloudinit import version from cloudinit.settings import (CFG_BUILTIN) _DNS_REDIRECT_IP = None LOG = logging.getLogger(__name__) # Helps cleanup filenames to ensure they aren't FS incompatible FN_REPLACEMENTS = { os.sep: '_', } FN_ALLOWED = ('_-.()' + string.digits + string.ascii_letters) TRUE_STRINGS = ('true', '1', 'on', 'yes') FALSE_STRINGS = ('off', '0', 'no', 'false') # Helper utils to see if running in a container CONTAINER_TESTS = (['systemd-detect-virt', '--quiet', '--container'], ['running-in-container'], ['lxc-is-container']) PROC_CMDLINE = None def decode_binary(blob, encoding='utf-8'): # Converts a binary type into a text type using given encoding. if isinstance(blob, six.text_type): return blob return blob.decode(encoding) def encode_text(text, encoding='utf-8'): # Converts a text string into a binary type using given encoding. if isinstance(text, six.binary_type): return text return text.encode(encoding) def b64d(source): # Base64 decode some data, accepting bytes or unicode/str, and returning # str/unicode if the result is utf-8 compatible, otherwise returning bytes. 
decoded = b64decode(source) try: return decoded.decode('utf-8') except UnicodeDecodeError: return decoded def b64e(source): # Base64 encode some data, accepting bytes or unicode/str, and returning # str/unicode if the result is utf-8 compatible, otherwise returning bytes. if not isinstance(source, bytes): source = source.encode('utf-8') return b64encode(source).decode('utf-8') def fully_decoded_payload(part): # In Python 3, decoding the payload will ironically hand us a bytes object. # 'decode' means to decode according to Content-Transfer-Encoding, not # according to any charset in the Content-Type. So, if we end up with # bytes, first try to decode to str via CT charset, and failing that, try # utf-8 using surrogate escapes. cte_payload = part.get_payload(decode=True) if (six.PY3 and part.get_content_maintype() == 'text' and isinstance(cte_payload, bytes)): charset = part.get_charset() if charset and charset.input_codec: encoding = charset.input_codec else: encoding = 'utf-8' return cte_payload.decode(encoding, errors='surrogateescape') return cte_payload # Path for DMI Data DMI_SYS_PATH = "/sys/class/dmi/id" # dmidecode and /sys/class/dmi/id/* use different names for the same value, # this allows us to refer to them by one canonical name DMIDECODE_TO_DMI_SYS_MAPPING = { 'baseboard-asset-tag': 'board_asset_tag', 'baseboard-manufacturer': 'board_vendor', 'baseboard-product-name': 'board_name', 'baseboard-serial-number': 'board_serial', 'baseboard-version': 'board_version', 'bios-release-date': 'bios_date', 'bios-vendor': 'bios_vendor', 'bios-version': 'bios_version', 'chassis-asset-tag': 'chassis_asset_tag', 'chassis-manufacturer': 'chassis_vendor', 'chassis-serial-number': 'chassis_serial', 'chassis-version': 'chassis_version', 'system-manufacturer': 'sys_vendor', 'system-product-name': 'product_name', 'system-serial-number': 'product_serial', 'system-uuid': 'product_uuid', 'system-version': 'product_version', } class ProcessExecutionError(IOError): MESSAGE_TMPL = ('%(description)s\n' 'Command: %(cmd)s\n' 'Exit code: %(exit_code)s\n' 'Reason: %(reason)s\n' 'Stdout: %(stdout)r\n' 'Stderr: %(stderr)r') def __init__(self, stdout=None, stderr=None, exit_code=None, cmd=None, description=None, reason=None): if not cmd: self.cmd = '-' else: self.cmd = cmd if not description: self.description = 'Unexpected error while running command.' else: self.description = description if not isinstance(exit_code, six.integer_types): self.exit_code = '-' else: self.exit_code = exit_code if not stderr: self.stderr = '' else: self.stderr = stderr if not stdout: self.stdout = '' else: self.stdout = stdout if reason: self.reason = reason else: self.reason = '-' message = self.MESSAGE_TMPL % { 'description': self.description, 'cmd': self.cmd, 'exit_code': self.exit_code, 'stdout': self.stdout, 'stderr': self.stderr, 'reason': self.reason, } IOError.__init__(self, message) # For backward compatibility with Python 2. 
if not hasattr(self, 'message'): self.message = message class SeLinuxGuard(object): def __init__(self, path, recursive=False): # Late import since it might not always # be possible to use this try: self.selinux = importer.import_module('selinux') except ImportError: self.selinux = None self.path = path self.recursive = recursive def __enter__(self): if self.selinux and self.selinux.is_selinux_enabled(): return True else: return False def __exit__(self, excp_type, excp_value, excp_traceback): if not self.selinux or not self.selinux.is_selinux_enabled(): return if not os.path.lexists(self.path): return path = os.path.realpath(self.path) # path should be a string, not unicode if six.PY2: path = str(path) try: stats = os.lstat(path) self.selinux.matchpathcon(path, stats[stat.ST_MODE]) except OSError: return LOG.debug("Restoring selinux mode for %s (recursive=%s)", path, self.recursive) self.selinux.restorecon(path, recursive=self.recursive) class MountFailedError(Exception): pass class DecompressionError(Exception): pass def ExtendedTemporaryFile(**kwargs): fh = tempfile.NamedTemporaryFile(**kwargs) # Replace its unlink with a quiet version # that does not raise errors when the # file to unlink has been unlinked elsewhere.. LOG.debug("Created temporary file %s", fh.name) fh.unlink = del_file # Add a new method that will unlink # right 'now' but still lets the exit # method attempt to remove it (which will # not throw due to our del file being quiet # about files that are not there) def unlink_now(): fh.unlink(fh.name) setattr(fh, 'unlink_now', unlink_now) return fh def fork_cb(child_cb, *args, **kwargs): fid = os.fork() if fid == 0: try: child_cb(*args, **kwargs) os._exit(0) except: logexc(LOG, "Failed forking and calling callback %s", type_utils.obj_name(child_cb)) os._exit(1) else: LOG.debug("Forked child %s who will run callback %s", fid, type_utils.obj_name(child_cb)) def is_true(val, addons=None): if isinstance(val, (bool)): return val is True check_set = TRUE_STRINGS if addons: check_set = list(check_set) + addons if six.text_type(val).lower().strip() in check_set: return True return False def is_false(val, addons=None): if isinstance(val, (bool)): return val is False check_set = FALSE_STRINGS if addons: check_set = list(check_set) + addons if six.text_type(val).lower().strip() in check_set: return True return False def translate_bool(val, addons=None): if not val: # This handles empty lists and false and # other things that python believes are false return False # If its already a boolean skip if isinstance(val, (bool)): return val return is_true(val, addons) def rand_str(strlen=32, select_from=None): if not select_from: select_from = string.ascii_letters + string.digits return "".join([random.choice(select_from) for _x in range(0, strlen)]) def read_conf(fname): try: return load_yaml(load_file(fname), default={}) except IOError as e: if e.errno == errno.ENOENT: return {} else: raise # Merges X lists, and then keeps the # unique ones, but orders by sort order # instead of by the original order def uniq_merge_sorted(*lists): return sorted(uniq_merge(*lists)) # Merges X lists and then iterates over those # and only keeps the unique items (order preserving) # and returns that merged and uniqued list as the # final result. # # Note: if any entry is a string it will be # split on commas and empty entries will be # evicted and merged in accordingly. 
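# Behavior sketch (illustrative):
#   uniq_merge(['a', 'b'], 'b,c,,d')  ->  ['a', 'b', 'c', 'd']
#   uniq_merge_sorted('b,a', ['c'])   ->  ['a', 'b', 'c']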
def uniq_merge(*lists): combined_list = [] for a_list in lists: if isinstance(a_list, six.string_types): a_list = a_list.strip().split(",") # Kickout the empty ones a_list = [a for a in a_list if len(a)] combined_list.extend(a_list) return uniq_list(combined_list) def clean_filename(fn): for (k, v) in FN_REPLACEMENTS.items(): fn = fn.replace(k, v) removals = [] for k in fn: if k not in FN_ALLOWED: removals.append(k) for k in removals: fn = fn.replace(k, '') fn = fn.strip() return fn def decomp_gzip(data, quiet=True, decode=True): try: buf = six.BytesIO(encode_text(data)) with contextlib.closing(gzip.GzipFile(None, "rb", 1, buf)) as gh: if decode: return decode_binary(gh.read()) else: return gh.read() except Exception as e: if quiet: return data else: raise DecompressionError(six.text_type(e)) def extract_usergroup(ug_pair): if not ug_pair: return (None, None) ug_parted = ug_pair.split(':', 1) u = ug_parted[0].strip() if len(ug_parted) == 2: g = ug_parted[1].strip() else: g = None if not u or u == "-1" or u.lower() == "none": u = None if not g or g == "-1" or g.lower() == "none": g = None return (u, g) def find_modules(root_dir): entries = dict() for fname in glob.glob(os.path.join(root_dir, "*.py")): if not os.path.isfile(fname): continue modname = os.path.basename(fname)[0:-3] modname = modname.strip() if modname and modname.find(".") == -1: entries[fname] = modname return entries def multi_log(text, console=True, stderr=True, log=None, log_level=logging.DEBUG): if stderr: sys.stderr.write(text) if console: conpath = "/dev/console" if os.path.exists(conpath): with open(conpath, 'w') as wfh: wfh.write(text) wfh.flush() else: # A container may lack /dev/console (arguably a container bug). If # it does not exist, then write output to stdout. this will result # in duplicate stderr and stdout messages if stderr was True. # # even though upstart or systemd might have set up output to go to # /dev/console, the user may have configured elsewhere via # cloud-config 'output'. If there is /dev/console, messages will # still get there. sys.stdout.write(text) if log: if text[-1] == "\n": log.log(log_level, text[:-1]) else: log.log(log_level, text) def load_json(text, root_types=(dict,)): decoded = json.loads(decode_binary(text)) if not isinstance(decoded, tuple(root_types)): expected_types = ", ".join([str(t) for t in root_types]) raise TypeError("(%s) root types expected, got %s instead" % (expected_types, type(decoded))) return decoded def is_ipv4(instr): """determine if input string is a ipv4 address. return boolean.""" toks = instr.split('.') if len(toks) != 4: return False try: toks = [x for x in toks if int(x) < 256 and int(x) >= 0] except: return False return len(toks) == 4 def get_cfg_option_bool(yobj, key, default=False): if key not in yobj: return default return translate_bool(yobj[key]) def get_cfg_option_str(yobj, key, default=None): if key not in yobj: return default val = yobj[key] if not isinstance(val, six.string_types): val = str(val) return val def get_cfg_option_int(yobj, key, default=0): return int(get_cfg_option_str(yobj, key, default=default)) def system_info(): return { 'platform': platform.platform(), 'release': platform.release(), 'python': platform.python_version(), 'uname': platform.uname(), 'dist': platform.linux_distribution(), } def get_cfg_option_list(yobj, key, default=None): """ Gets the C{key} config option from C{yobj} as a list of strings. If the key is present as a single string it will be returned as a list with one string arg. 
@param yobj: The configuration object. @param key: The configuration key to get. @param default: The default to return if key is not found. @return: The configuration option as a list of strings or default if key is not found. """ if key not in yobj: return default if yobj[key] is None: return [] val = yobj[key] if isinstance(val, (list)): cval = [v for v in val] return cval if not isinstance(val, six.string_types): val = str(val) return [val] # get a cfg entry by its path array # for f['a']['b']: get_cfg_by_path(mycfg,('a','b')) def get_cfg_by_path(yobj, keyp, default=None): cur = yobj for tok in keyp: if tok not in cur: return default cur = cur[tok] return cur def fixup_output(cfg, mode): (outfmt, errfmt) = get_output_cfg(cfg, mode) redirect_output(outfmt, errfmt) return (outfmt, errfmt) # redirect_output(outfmt, errfmt, orig_out, orig_err) # replace orig_out and orig_err with filehandles specified in outfmt or errfmt # fmt can be: # > FILEPATH # >> FILEPATH # | program [ arg1 [ arg2 [ ... ] ] ] # # with a '|', arguments are passed to shell, so one level of # shell escape is required. # # if _CLOUD_INIT_SAVE_STDOUT is set in environment to a non empty and true # value then output input will not be closed (useful for debugging). # def redirect_output(outfmt, errfmt, o_out=None, o_err=None): if is_true(os.environ.get("_CLOUD_INIT_SAVE_STDOUT")): LOG.debug("Not redirecting output due to _CLOUD_INIT_SAVE_STDOUT") return if not o_out: o_out = sys.stdout if not o_err: o_err = sys.stderr if outfmt: LOG.debug("Redirecting %s to %s", o_out, outfmt) (mode, arg) = outfmt.split(" ", 1) if mode == ">" or mode == ">>": owith = "ab" if mode == ">": owith = "wb" new_fp = open(arg, owith) elif mode == "|": proc = subprocess.Popen(arg, shell=True, stdin=subprocess.PIPE) new_fp = proc.stdin else: raise TypeError("Invalid type for output format: %s" % outfmt) if o_out: os.dup2(new_fp.fileno(), o_out.fileno()) if errfmt == outfmt: LOG.debug("Redirecting %s to %s", o_err, outfmt) os.dup2(new_fp.fileno(), o_err.fileno()) return if errfmt: LOG.debug("Redirecting %s to %s", o_err, errfmt) (mode, arg) = errfmt.split(" ", 1) if mode == ">" or mode == ">>": owith = "ab" if mode == ">": owith = "wb" new_fp = open(arg, owith) elif mode == "|": proc = subprocess.Popen(arg, shell=True, stdin=subprocess.PIPE) new_fp = proc.stdin else: raise TypeError("Invalid type for error format: %s" % errfmt) if o_err: os.dup2(new_fp.fileno(), o_err.fileno()) def make_url(scheme, host, port=None, path='', params='', query='', fragment=''): pieces = [] pieces.append(scheme or '') netloc = '' if host: netloc = str(host) if port is not None: netloc += ":" + "%s" % (port) pieces.append(netloc or '') pieces.append(path or '') pieces.append(params or '') pieces.append(query or '') pieces.append(fragment or '') return urlparse.urlunparse(pieces) def mergemanydict(srcs, reverse=False): if reverse: srcs = reversed(srcs) merged_cfg = {} for cfg in srcs: if cfg: # Figure out which mergers to apply... 
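            # Configs can select mergers themselves; a cloud-config sketch
            # of the 'merge_how' knob (assuming this era's documented merger
            # class syntax):
            #
            #   #cloud-config
            #   merge_how: 'list(append)+dict(recurse_array)+str()'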
def mergemanydict(srcs, reverse=False):
    if reverse:
        srcs = reversed(srcs)
    merged_cfg = {}
    for cfg in srcs:
        if cfg:
            # Figure out which mergers to apply...
            mergers_to_apply = mergers.dict_extract_mergers(cfg)
            if not mergers_to_apply:
                mergers_to_apply = mergers.default_mergers()
            merger = mergers.construct(mergers_to_apply)
            merged_cfg = merger.merge(merged_cfg, cfg)
    return merged_cfg


@contextlib.contextmanager
def chdir(ndir):
    curr = os.getcwd()
    try:
        os.chdir(ndir)
        yield ndir
    finally:
        os.chdir(curr)


@contextlib.contextmanager
def umask(n_msk):
    old = os.umask(n_msk)
    try:
        yield old
    finally:
        os.umask(old)


@contextlib.contextmanager
def tempdir(**kwargs):
    # An equivalent was only added in python 3.2
    # (tempfile.TemporaryDirectory), so provide it here since it's useful...
    # See: http://bugs.python.org/file12970/tempdir.patch
    tdir = tempfile.mkdtemp(**kwargs)
    try:
        yield tdir
    finally:
        del_dir(tdir)


def center(text, fill, max_len):
    return '{0:{fill}{align}{size}}'.format(text, fill=fill,
                                            align="^", size=max_len)


def del_dir(path):
    LOG.debug("Recursively deleting %s", path)
    shutil.rmtree(path)


def runparts(dirp, skip_no_exist=True, exe_prefix=None):
    if skip_no_exist and not os.path.isdir(dirp):
        return

    failed = []
    attempted = []

    if exe_prefix is None:
        prefix = []
    elif isinstance(exe_prefix, str):
        prefix = [str(exe_prefix)]
    elif isinstance(exe_prefix, list):
        prefix = exe_prefix
    else:
        raise TypeError("exe_prefix must be None, str, or list")

    for exe_name in sorted(os.listdir(dirp)):
        exe_path = os.path.join(dirp, exe_name)
        if os.path.isfile(exe_path) and os.access(exe_path, os.X_OK):
            attempted.append(exe_path)
            try:
                subp(prefix + [exe_path], capture=False)
            except ProcessExecutionError as e:
                logexc(LOG, "Failed running %s [%s]", exe_path, e.exit_code)
                failed.append(e)

    if failed and attempted:
        raise RuntimeError('Runparts: %s failures in %s attempted commands'
                           % (len(failed), len(attempted)))


# read_optional_seed
# returns boolean indicating success or failure (presence of files)
# if files are present, populates 'fill' dictionary with 'user-data' and
# 'meta-data' entries
def read_optional_seed(fill, base="", ext="", timeout=5):
    try:
        (md, ud) = read_seeded(base, ext, timeout)
        fill['user-data'] = ud
        fill['meta-data'] = md
        return True
    except url_helper.UrlError as e:
        if e.code == url_helper.NOT_FOUND:
            return False
        raise


def fetch_ssl_details(paths=None):
    ssl_details = {}
    # Lookup in these locations for ssl key/cert files
    ssl_cert_paths = [
        '/var/lib/cloud/data/ssl',
        '/var/lib/cloud/instance/data/ssl',
    ]
    if paths:
        ssl_cert_paths.extend([
            os.path.join(paths.get_ipath_cur('data'), 'ssl'),
            os.path.join(paths.get_cpath('data'), 'ssl'),
        ])
    ssl_cert_paths = uniq_merge(ssl_cert_paths)
    ssl_cert_paths = [d for d in ssl_cert_paths if d and os.path.isdir(d)]
    cert_file = None
    for d in ssl_cert_paths:
        if os.path.isfile(os.path.join(d, 'cert.pem')):
            cert_file = os.path.join(d, 'cert.pem')
            break
    key_file = None
    for d in ssl_cert_paths:
        if os.path.isfile(os.path.join(d, 'key.pem')):
            key_file = os.path.join(d, 'key.pem')
            break
    if cert_file and key_file:
        ssl_details['cert_file'] = cert_file
        ssl_details['key_file'] = key_file
    elif cert_file:
        ssl_details['cert_file'] = cert_file
    return ssl_details
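# Illustrative use of the helpers above (directory is hypothetical):
#
#   runparts("/var/lib/cloud/instance/scripts")
#       runs every executable file in the directory in sorted order and
#       raises RuntimeError if any of them exited non-zero
#
#   with tempdir(prefix="cloud-init-") as tdir:
#       ...  # tdir exists inside the block and is removed on exit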
def read_file_or_url(url, timeout=5, retries=10,
                     headers=None, data=None, sec_between=1, ssl_details=None,
                     headers_cb=None, exception_cb=None):
    url = url.lstrip()
    if url.startswith("/"):
        url = "file://%s" % url
    if url.lower().startswith("file://"):
        if data:
            LOG.warn("Unable to post data to file resource %s", url)
        file_path = url[len("file://"):]
        try:
            contents = load_file(file_path, decode=False)
        except IOError as e:
            code = e.errno
            if e.errno == errno.ENOENT:
                code = url_helper.NOT_FOUND
            raise url_helper.UrlError(cause=e, code=code, headers=None,
                                      url=url)
        return url_helper.FileResponse(file_path, contents=contents)
    else:
        return url_helper.readurl(url,
                                  timeout=timeout,
                                  retries=retries,
                                  headers=headers,
                                  headers_cb=headers_cb,
                                  data=data,
                                  sec_between=sec_between,
                                  ssl_details=ssl_details,
                                  exception_cb=exception_cb)


def load_yaml(blob, default=None, allowed=(dict,)):
    loaded = default
    blob = decode_binary(blob)
    try:
        LOG.debug("Attempting to load yaml from string "
                  "of length %s with allowed root types %s",
                  len(blob), allowed)
        converted = safeyaml.load(blob)
        if not isinstance(converted, allowed):
            # Yes this will just be caught, but that's ok for now...
            raise TypeError(("Yaml load allows %s root types,"
                             " but got %s instead") %
                            (allowed, type_utils.obj_name(converted)))
        loaded = converted
    except (yaml.YAMLError, TypeError, ValueError):
        if len(blob) == 0:
            LOG.debug("load_yaml given empty string, returning default")
        else:
            logexc(LOG, "Failed loading yaml blob")
    return loaded


def read_seeded(base="", ext="", timeout=5, retries=10, file_retries=0):
    if base.startswith("/"):
        base = "file://%s" % base

    # default retries for file is 0. for network is 10
    if base.startswith("file://"):
        retries = file_retries

    if base.find("%s") >= 0:
        ud_url = base % ("user-data" + ext)
        md_url = base % ("meta-data" + ext)
    else:
        ud_url = "%s%s%s" % (base, "user-data", ext)
        md_url = "%s%s%s" % (base, "meta-data", ext)

    # Note: passing file_retries as a fourth positional argument here would
    # land in read_file_or_url's 'headers' parameter; retries has already
    # been adjusted for file:// urls above, so it is not passed again.
    md_resp = read_file_or_url(md_url, timeout, retries)
    md = None
    if md_resp.ok():
        md = load_yaml(decode_binary(md_resp.contents), default={})

    ud_resp = read_file_or_url(ud_url, timeout, retries)
    ud = None
    if ud_resp.ok():
        ud = ud_resp.contents

    return (md, ud)


def read_conf_d(confd):
    # Get reverse-sorted list (so that files later in sort order
    # take precedence)
    confs = sorted(os.listdir(confd), reverse=True)

    # Remove anything not ending in '.cfg'
    confs = [f for f in confs if f.endswith(".cfg")]

    # Remove anything not a file
    confs = [f for f in confs
             if os.path.isfile(os.path.join(confd, f))]

    # Load them all so that they can be merged
    cfgs = []
    for fn in confs:
        cfgs.append(read_conf(os.path.join(confd, fn)))

    return mergemanydict(cfgs)


def read_conf_with_confd(cfgfile):
    cfg = read_conf(cfgfile)

    confd = False
    if "conf_d" in cfg:
        confd = cfg['conf_d']
        if confd:
            if not isinstance(confd, six.string_types):
                raise TypeError(("Config file %s contains 'conf_d' "
                                 "with non-string type %s") %
                                (cfgfile, type_utils.obj_name(confd)))
            else:
                confd = str(confd).strip()
    elif os.path.isdir("%s.d" % cfgfile):
        confd = "%s.d" % cfgfile

    if not confd or not os.path.isdir(confd):
        return cfg

    # Conf.d settings override input configuration
    confd_cfg = read_conf_d(confd)
    return mergemanydict([confd_cfg, cfg])
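# Illustrative layering performed by read_conf_with_confd (file names are
# hypothetical): given /etc/cloud/cloud.cfg plus a /etc/cloud/cloud.cfg.d/
# containing 10-foo.cfg and 90-bar.cfg, the conf.d snippets are merged with
# 90-bar.cfg taking precedence over 10-foo.cfg, and the merged conf.d
# result then overrides settings from cloud.cfg itself.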
def read_cc_from_cmdline(cmdline=None):
    # this should support reading cloud-config information from
    # the kernel command line.  It is intended to support content of the
    # format:
    #  cc: <yaml content> [end_cc]
    # this would include:
    # cc: ssh_import_id: [smoser, kirkland]\\n
    # cc: ssh_import_id: [smoser, bob]\\nruncmd: [ [ ls, -l ], echo hi ] end_cc
    # cc:ssh_import_id: [smoser] end_cc cc:runcmd: [ [ ls, -l ] ] end_cc
    if cmdline is None:
        cmdline = get_cmdline()

    tag_begin = "cc:"
    tag_end = "end_cc"
    begin_l = len(tag_begin)
    end_l = len(tag_end)
    clen = len(cmdline)
    tokens = []
    begin = cmdline.find(tag_begin)
    while begin >= 0:
        end = cmdline.find(tag_end, begin + begin_l)
        if end < 0:
            end = clen
        tokens.append(cmdline[begin + begin_l:end].lstrip().replace("\\n",
                                                                    "\n"))

        begin = cmdline.find(tag_begin, end + end_l)

    return '\n'.join(tokens)


def dos2unix(contents):
    # find first end of line
    pos = contents.find('\n')
    if pos <= 0 or contents[pos - 1] != '\r':
        return contents
    return contents.replace('\r\n', '\n')


def get_hostname_fqdn(cfg, cloud):
    # return the hostname and fqdn from 'cfg'.  If not found in cfg,
    # then fall back to data from cloud
    if "fqdn" in cfg:
        # user specified a fqdn.  Default hostname then is based off that
        fqdn = cfg['fqdn']
        hostname = get_cfg_option_str(cfg, "hostname", fqdn.split('.')[0])
    else:
        if "hostname" in cfg and cfg['hostname'].find('.') > 0:
            # user specified a hostname with '.' in it;
            # be nice to them.  set fqdn and hostname from that
            fqdn = cfg['hostname']
            hostname = cfg['hostname'][:fqdn.find('.')]
        else:
            # no fqdn set; get fqdn from cloud.
            # get hostname from cfg if available, otherwise from cloud
            fqdn = cloud.get_hostname(fqdn=True)
            if "hostname" in cfg:
                hostname = cfg['hostname']
            else:
                hostname = cloud.get_hostname()
    return (hostname, fqdn)


def get_fqdn_from_hosts(hostname, filename="/etc/hosts"):
    """
    For each host a single line should be present with
    the following information:

        IP_address canonical_hostname [aliases...]

    Fields of the entry are separated by any number of blanks and/or tab
    characters.  Text from a "#" character until the end of the line is a
    comment, and is ignored.  Host names may contain only alphanumeric
    characters, minus signs ("-"), and periods (".").  They must begin with
    an alphabetic character and end with an alphanumeric character.
    Optional aliases provide for name changes, alternate spellings, shorter
    hostnames, or generic hostnames (for example, localhost).
    """
    fqdn = None
    try:
        for line in load_file(filename).splitlines():
            hashpos = line.find("#")
            if hashpos >= 0:
                line = line[0:hashpos]
            line = line.strip()
            if not line:
                continue

            # If there are fewer than 3 entries
            # (IP_address, canonical_hostname, alias)
            # then ignore this line
            toks = line.split()
            if len(toks) < 3:
                continue

            if hostname in toks[2:]:
                fqdn = toks[1]
                break
    except IOError:
        pass
    return fqdn


def get_cmdline_url(names=('cloud-config-url', 'url'),
                    starts=b"#cloud-config", cmdline=None):
    if cmdline is None:
        cmdline = get_cmdline()

    data = keyval_str_to_dict(cmdline)
    url = None
    key = None
    for key in names:
        if key in data:
            url = data[key]
            break

    if not url:
        return (None, None, None)

    resp = read_file_or_url(url)
    # allow callers to pass starts as text when comparing to bytes contents
    starts = encode_text(starts)
    if resp.ok() and resp.contents.startswith(starts):
        return (key, url, resp.contents)

    return (key, url, None)
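# Illustrative /etc/hosts lookup via get_fqdn_from_hosts above (the entry
# shown is hypothetical):
#
#   127.0.1.1 myhost.example.com myhost
#
#   get_fqdn_from_hosts("myhost") -> "myhost.example.com"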
def is_resolvable(name):
    """determine if a url is resolvable, return a boolean
    This also attempts to be resilient against dns redirection.

    Note that normal nsswitch resolution is used here.  So in order to avoid
    any utilization of 'search' entries in /etc/resolv.conf we have to
    append '.'.

    The top level 'invalid' domain is invalid per RFC, and example.com
    should also not exist.  The random entry will be resolved inside the
    search list.
    """
    global _DNS_REDIRECT_IP
    if _DNS_REDIRECT_IP is None:
        badips = set()
        badnames = ("does-not-exist.example.com.", "example.invalid.",
                    rand_str())
        badresults = {}
        for iname in badnames:
            try:
                result = socket.getaddrinfo(iname, None, 0, 0,
                                            socket.SOCK_STREAM,
                                            socket.AI_CANONNAME)
                badresults[iname] = []
                for (_fam, _stype, _proto, cname, sockaddr) in result:
                    badresults[iname].append("%s: %s" %
                                             (cname, sockaddr[0]))
                    badips.add(sockaddr[0])
            except (socket.gaierror, socket.error):
                pass
        _DNS_REDIRECT_IP = badips
        if badresults:
            LOG.debug("detected dns redirection: %s", badresults)

    try:
        result = socket.getaddrinfo(name, None)
        # check first result's sockaddr field
        addr = result[0][4][0]
        if addr in _DNS_REDIRECT_IP:
            return False
        return True
    except (socket.gaierror, socket.error):
        return False


def get_hostname():
    hostname = socket.gethostname()
    return hostname


def gethostbyaddr(ip):
    try:
        return socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return None


def is_resolvable_url(url):
    """determine if this url is resolvable (existing or ip)."""
    return is_resolvable(urlparse.urlparse(url).hostname)


def search_for_mirror(candidates):
    """
    Search through a list of mirror urls for one that works.
    This needs to return quickly.
    """
    for cand in candidates:
        try:
            if is_resolvable_url(cand):
                return cand
        except Exception:
            pass
    return None


def close_stdin():
    """
    reopen stdin as /dev/null so even subprocesses or other os level things
    get /dev/null as input.

    if _CLOUD_INIT_SAVE_STDIN is set in environment to a non-empty and true
    value then input will not be closed (useful for debugging).
    """
    if is_true(os.environ.get("_CLOUD_INIT_SAVE_STDIN")):
        return
    with open(os.devnull) as fp:
        os.dup2(fp.fileno(), sys.stdin.fileno())


def find_devs_with(criteria=None, oformat='device',
                   tag=None, no_cache=False, path=None):
    """
    find devices matching given criteria (via blkid)
    criteria can be *one* of:
      TYPE=
      LABEL=