nova-lxd-17.0.0/0000775000175100017510000000000013246266437013370 5ustar zuulzuul00000000000000nova-lxd-17.0.0/etc/0000775000175100017510000000000013246266437014143 5ustar zuulzuul00000000000000nova-lxd-17.0.0/etc/nova/0000775000175100017510000000000013246266437015106 5ustar zuulzuul00000000000000nova-lxd-17.0.0/etc/nova/rootwrap.d/0000775000175100017510000000000013246266437017205 5ustar zuulzuul00000000000000nova-lxd-17.0.0/etc/nova/rootwrap.d/lxd.filters0000666000175100017510000000046013246266025021361 0ustar zuulzuul00000000000000# nova-rootwrap filters for compute nodes running nova-lxd # This file should be owned by (and only-writable by) the root user [Filters] zfs: CommandFilter, zfs, root zpool: CommandFilter, zpool, root btrfs: CommandFilter, btrfs, root chown: CommandFilter, chown, root chmod: CommandFilter, chmod, root nova-lxd-17.0.0/CONTRIBUTING.rst0000666000175100017510000000517313246266025016032 0ustar zuulzuul00000000000000Crash course in lxd setup ========================= nova-lxd absolutely requires lxd, though its installation and configuration is out of scope here. If you're running Ubuntu, here is the easy path to a running lxd. .. code-block: bash add-apt-repository ppa:ubuntu-lxc/lxd-git-master && sudo apt-get update apt-get -y install lxd usermod -G lxd ${your_username|stack} service lxd start If you're currently logged in as the user you just added to lxd, you'll need to log out and log back in again. Using nova-lxd with devstack ============================ nova-lxd includes a plugin for use in devstack. If you'd like to run devstack with nova-lxd, you'll want to add the following to `local.conf`: .. code-block: bash enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd In this case, nova-lxd will run HEAD from master. You may want to point this at your own fork. A final argument to `enable_plugin` can be used to specify a git revision. Configuration and installation of devstack is beyond the scope of this document. Here's an example `local.conf` file that will run the very minimum you`ll need for devstack. .. code-block: bash [[local|localrc]] ADMIN_PASSWORD=password DATABASE_PASSWORD=$ADMIN_PASSWORD RABBIT_PASSWORD=$ADMIN_PASSWORD SERVICE_PASSWORD=$ADMIN_PASSWORD SERVICE_TOKEN=$ADMIN_PASSWORD disable_service cinder c-sch c-api c-vol disable_service n-net n-novnc disable_service horizon disable_service ironic ir-api ir-cond enable_service q-svc q-agt q-dhcp q-13 q-meta # Optional, to enable tempest configuration as part of devstack enable_service tempest enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd # More often than not, stack.sh explodes trying to configure IPv6 support, # so let's just disable it for now. IP_VERSION=4 Once devstack is running, you'll want to add the lxd image to glance. You can do this (as an admin) with: .. code-block: bash wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-root.tar.xz glance image-create --name lxd --container-format bare --disk-format raw \ --visibility=public < trusty-server-cloudimg-amd64-root.tar.xz To run the tempest tests, you can use: .. code-block: bash /opt/stack/tempest/run_tempest.sh -N tempest.api.compute Errata ====== Patches should be submitted to Openstack Gerrit via `git-review`. Bugs should be filed on Launchpad: https://bugs.launchpad.net/nova-lxd If you would like to contribute to the development of OpenStack, you must follow the steps in this page: https://docs.openstack.org/infra/manual/developers.html nova-lxd-17.0.0/contrib/0000775000175100017510000000000013246266437015030 5ustar zuulzuul00000000000000nova-lxd-17.0.0/contrib/tempest/0000775000175100017510000000000013246266437016511 5ustar zuulzuul00000000000000nova-lxd-17.0.0/contrib/tempest/README.1st0000666000175100017510000000012013246266025020063 0ustar zuulzuul00000000000000Run run_tempest_lxd.sh to run tempest.api.compute tests to run against nova-lxd nova-lxd-17.0.0/contrib/tempest/run_tempest_lxd.sh0000666000175100017510000000221713246266025022256 0ustar zuulzuul00000000000000#!/bin/bash # Construct a regex t ouse when limiting scope of tempest # to avoid features unsupported by nova-lxd # Note that several tests are disabled by the use of tempest # feature toggels in devstack for an LXD config # so this regex is not entiriely representative of # what's excluded # Wen adding entries to the ignored_tests, add a comment explaining # why since this list should not grow # Temporarily skip the image tests since they give false positivies # for nova-lxd ignored_tests="|^tempest.api.compute.images" # Regressions ignored_tests="$ignored_tests|.*AttachInterfacesTestJSON.test_create_list_show_delete_interfaces" # backups are not supported ignored_tests="$ignored_tests|.*ServerActionsTestJSON.test_create_backup" # failed verfication tests ignored_tests="$ignored_tests|.*ServersWithSpecificFlavorTestJSON.test_verify_created_server_ephemeral_disk" ignored_tests="$ignored_tests|.*AttachVolumeShelveTestJSON.test_attach_detach_volume" ignored_tests="$ignored_tests|.*AttachVolumeTestJSON.test_attach_detach_volume" regex="(?!.*\\[.*\\bslow\\b.*\\]$ignored_tests)(^tempest\\.api.\\compute)"; ostestr --serial --regex $regex run nova-lxd-17.0.0/contrib/ci/0000775000175100017510000000000013246266437015423 5ustar zuulzuul00000000000000nova-lxd-17.0.0/contrib/ci/post_test_hook.sh0000777000175100017510000000157013246266025021024 0ustar zuulzuul00000000000000#!/bin/bash -xe # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This script is executed inside post_test function in devstack gate. source $BASE/new/devstack/functions INSTALLDIR=${INSTALLDIR:-/opt/stack} source $INSTALLDIR/devstack/functions-common LOGDIR=/opt/stack/logs # Collect logs from the containers sudo mkdir -p $LOGDIR/containers/ sudo cp -rp /var/log/lxd/* $LOGDIR/containers nova-lxd-17.0.0/contrib/ci/pre_test_hook.sh0000777000175100017510000000167213246266025020630 0ustar zuulzuul00000000000000#!/bin/bash -xe # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This script is executed inside pre_test_hook function in devstack gate. # First argument ($1) expects boolean as value where: # 'False' means share driver will not handle share servers # 'True' means it will handle share servers. # Import devstack function 'trueorfalse' source $BASE/new/devstack/functions DEVSTACK_LOCAL_CONFIG+=$'\n'"LXD_BACKEND_DRIVER=zfs" export DEVSTACK_LOCAL_CONFIG nova-lxd-17.0.0/contrib/glance_metadefs/0000775000175100017510000000000013246266437020131 5ustar zuulzuul00000000000000nova-lxd-17.0.0/contrib/glance_metadefs/compute-lxd-flavor.json0000666000175100017510000000204113246266025024544 0ustar zuulzuul00000000000000{ "namespace": "OS::Nova::LXDFlavor", "display_name": "LXD properties", "description": "You can pass several options to the LXD container hypervisor that will affect the container's capabilities.", "visibility": "public", "protected": false, "resource_type_associations": [ { "name": "OS::Nova::Flavor" } ], "properties": { "lxd:nested_allowed": { "title": "Allow nested containers", "description": "Allow or disallow creation of nested containers. If True, you can install and run LXD inside the VM itself and provision another level of containers.", "type": "string", "default": false }, "lxd:privileged_allowed": { "title": "Create privileged container", "description": "Containers created as Privileged have elevated powers on the compute host. You should not set this option on containers that you don't fully trust.", "type": "string", "default": false } } } nova-lxd-17.0.0/babel.cfg0000666000175100017510000000002013246266025015101 0ustar zuulzuul00000000000000[python: **.py] nova-lxd-17.0.0/requirements.txt0000666000175100017510000000122013246266025016642 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. pbr!=2.1.0,>=2.0.0 # Apache-2.0 os-brick>=2.2.0 # Apache-2.0 os-vif!=1.8.0,>=1.7.0 # Apache-2.0 oslo.config>=5.1.0 # Apache-2.0 oslo.concurrency>=3.25.0 # Apache-2.0 oslo.utils>=3.33.0 # Apache-2.0 oslo.i18n>=3.15.3 # Apache-2.0 oslo.log>=3.36.0 # Apache-2.0 pylxd>=2.2.4 # Apache-2.0 # XXX: rockstar (17 Feb 2016) - oslo_config imports # debtcollector, which imports this, but doesn't # require it in dependencies. #wrapt>=1.7.0 # BSD License nova-lxd-17.0.0/setup.py0000666000175100017510000000200613246266025015073 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. # THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT import setuptools # In python < 2.7.4, a lazy loading of package `pbr` will break # setuptools if some other modules registered functions in `atexit`. # solution from: http://bugs.python.org/issue15881#msg170215 try: import multiprocessing # noqa except ImportError: pass setuptools.setup( setup_requires=['pbr>=2.0.0'], pbr=True) nova-lxd-17.0.0/.zuul.yaml0000666000175100017510000000173313246266025015330 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # This job will execute 'tox -e func_lxd' from the OSA # repo specified in 'osa_test_repo'. - job: name: openstack-ansible-nova-lxd parent: openstack-ansible-cross-repo-functional voting: false required-projects: - name: openstack/openstack-ansible-os_nova vars: tox_env: func_lxd osa_test_repo: openstack/openstack-ansible-os_nova - project: check: jobs: - openstack-ansible-nova-lxd nova-lxd-17.0.0/HACKING.rst0000666000175100017510000000023613246266025015162 0ustar zuulzuul00000000000000nova-lxd Style Commandments =============================================== Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/ nova-lxd-17.0.0/AUTHORS0000664000175100017510000000300213246266434014430 0ustar zuulzuul00000000000000Alex Kavanagh Alexander Kharkov Amrith Kumar Andrea Frittoli Andy McCrae Chandan Kumar Chris MacNaughton Chuck Short Chuck Short Daniel Stelter-Gliese Eli Qiao Gyorgy Szombathelyi Hangdong Zhang James E. Blair James Page Jesse Pretorius Jimmy McCrory Ken'ichi Ohmichi Masaki Matsushita Michael Gugino Michał Sawicz Neil Jerram Nguyen Hung Phuong Paul Hummer Paul Hummer Paul Hummer Peter Slovak Pushkar Umaranikar Ryan Harper Stéphane Graber Tony Breeds Vu Cong Tuan Yurtaykin, Andrey Zuul gecong1973 ghanshyam libing ricolin zulcss nova-lxd-17.0.0/.mailmap0000666000175100017510000000013013246266025014776 0ustar zuulzuul00000000000000# Format is: # # nova-lxd-17.0.0/MANIFEST.in0000666000175100017510000000013513246266025015120 0ustar zuulzuul00000000000000include AUTHORS include ChangeLog exclude .gitignore exclude .gitreview global-exclude *.pycnova-lxd-17.0.0/run_tests.sh0000777000175100017510000001741013246266025015753 0ustar zuulzuul00000000000000#!/bin/bash set -eu function usage { echo "Usage: $0 [OPTION]..." echo "Run Nova's test suite(s)" echo "" echo " -V, --virtual-env Always use virtualenv. Install automatically if not present" echo " -N, --no-virtual-env Don't use virtualenv. Run tests in local environment" echo " -s, --no-site-packages Isolate the virtualenv from the global Python environment" echo " -f, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added." echo " -u, --update Update the virtual environment with any newer package versions" echo " -p, --pep8 Just run PEP8 and HACKING compliance check" echo " -8, --pep8-only-changed Just run PEP8 and HACKING compliance check on files changed since HEAD~1" echo " -P, --no-pep8 Don't run static code checks" echo " -c, --coverage Generate coverage report" echo " -d, --debug Run tests with testtools instead of testr. This allows you to use the debugger." echo " -h, --help Print this usage message" echo " --hide-elapsed Don't print the elapsed time for each test along with slow test list" echo " --virtual-env-path Location of the virtualenv directory" echo " Default: \$(pwd)" echo " --virtual-env-name Name of the virtualenv directory" echo " Default: .venv" echo " --tools-path Location of the tools directory" echo " Default: \$(pwd)" echo " --concurrency How many processes to use when running the tests. A value of 0 autodetects concurrency from your CPU count" echo " Default: 0" echo "" echo "Note: with no options specified, the script will try to run the tests in a virtual environment," echo " If no virtualenv is found, the script will ask if you would like to create one. If you " echo " prefer to run tests NOT in a virtual environment, simply pass the -N option." exit } function process_options { i=1 while [ $i -le $# ]; do case "${!i}" in -h|--help) usage;; -V|--virtual-env) always_venv=1; never_venv=0;; -N|--no-virtual-env) always_venv=0; never_venv=1;; -s|--no-site-packages) no_site_packages=1;; -f|--force) force=1;; -u|--update) update=1;; -p|--pep8) just_pep8=1;; -8|--pep8-only-changed) just_pep8_changed=1;; -P|--no-pep8) no_pep8=1;; -c|--coverage) coverage=1;; -d|--debug) debug=1;; --virtual-env-path) (( i++ )) venv_path=${!i} ;; --virtual-env-name) (( i++ )) venv_dir=${!i} ;; --tools-path) (( i++ )) tools_path=${!i} ;; --concurrency) (( i++ )) concurrency=${!i} ;; -*) testropts="$testropts ${!i}";; *) testrargs="$testrargs ${!i}" esac (( i++ )) done } tool_path=${tools_path:-$(pwd)} venv_path=${venv_path:-$(pwd)} venv_dir=${venv_name:-.venv} with_venv=tools/with_venv.sh always_venv=0 never_venv=0 force=0 no_site_packages=0 installvenvopts= testrargs= testropts= wrapper="" just_pep8=0 just_pep8_changed=0 no_pep8=0 coverage=0 debug=0 update=0 concurrency=0 LANG=en_US.UTF-8 LANGUAGE=en_US:en LC_ALL=C process_options $@ # Make our paths available to other scripts we call export venv_path export venv_dir export venv_name export tools_dir export venv=${venv_path}/${venv_dir} if [ $no_site_packages -eq 1 ]; then installvenvopts="--no-site-packages" fi function run_tests { # Cleanup *pyc ${wrapper} find . -type f -name "*.pyc" -delete if [ $debug -eq 1 ]; then if [ "$testropts" = "" ] && [ "$testrargs" = "" ]; then # Default to running all tests if specific test is not # provided. testrargs="discover ./nova_lxd/tests" fi ${wrapper} python -m testtools.run $testropts $testrargs # Short circuit because all of the testr and coverage stuff # below does not make sense when running testtools.run for # debugging purposes. return $? fi if [ $coverage -eq 1 ]; then TESTRTESTS="$TESTRTESTS --coverage" else TESTRTESTS="$TESTRTESTS" fi # Just run the test suites in current environment set +e testrargs=`echo "$testrargs" | sed -e's/^\s*\(.*\)\s*$/\1/'` TESTRTESTS="$TESTRTESTS --testr-args='--subunit --concurrency $concurrency $testropts $testrargs'" if [ setup.cfg -nt nova.egg-info/entry_points.txt ] then ${wrapper} python setup.py egg_info fi echo "Running \`${wrapper} $TESTRTESTS\`" if ${wrapper} which subunit-2to1 2>&1 > /dev/null then # subunit-2to1 is present, testr subunit stream should be in version 2 # format. Convert to version one before colorizing. bash -c "${wrapper} $TESTRTESTS | ${wrapper} subunit-2to1 | ${wrapper} tools/colorizer.py" else bash -c "${wrapper} $TESTRTESTS | ${wrapper} tools/colorizer.py" fi RESULT=$? set -e copy_subunit_log if [ $coverage -eq 1 ]; then echo "Generating coverage report in covhtml/" # Don't compute coverage for common code, which is tested elsewhere ${wrapper} coverage combine ${wrapper} coverage html --include='nova/*' --omit='nova/openstack/common/*' -d covhtml -i fi return $RESULT } function copy_subunit_log { LOGNAME=`cat .testrepository/next-stream` LOGNAME=$(($LOGNAME - 1)) LOGNAME=".testrepository/${LOGNAME}" cp $LOGNAME subunit.log } function warn_on_flake8_without_venv { if [ $never_venv -eq 1 ]; then echo "**WARNING**:" echo "Running flake8 without virtual env may miss OpenStack HACKING detection" fi } function run_pep8 { echo "Running flake8 ..." warn_on_flake8_without_venv bash -c "${wrapper} flake8" } TESTRTESTS="python setup.py testr" if [ $never_venv -eq 0 ] then # Remove the virtual environment if --force used if [ $force -eq 1 ]; then echo "Cleaning virtualenv..." rm -rf ${venv} fi if [ $update -eq 1 ]; then echo "Updating virtualenv..." python tools/install_venv.py $installvenvopts fi if [ -e ${venv} ]; then wrapper="${with_venv}" else if [ $always_venv -eq 1 ]; then # Automatically install the virtualenv python tools/install_venv.py $installvenvopts wrapper="${with_venv}" else echo -e "No virtual environment found...create one? (Y/n) \c" read use_ve if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then # Install the virtualenv and run the test suite in it python tools/install_venv.py $installvenvopts wrapper=${with_venv} fi fi fi fi # Delete old coverage data from previous runs if [ $coverage -eq 1 ]; then ${wrapper} coverage erase fi if [ $just_pep8 -eq 1 ]; then run_pep8 exit fi if [ $just_pep8_changed -eq 1 ]; then # NOTE(gilliard) We want use flake8 to check the entirety of every file that has # a change in it. Unfortunately the --filenames argument to flake8 only accepts # file *names* and there are no files named (eg) "nova/compute/manager.py". The # --diff argument behaves surprisingly as well, because although you feed it a # diff, it actually checks the file on disk anyway. files=$(git diff --name-only HEAD~1 | tr '\n' ' ') echo "Running flake8 on ${files}" warn_on_flake8_without_venv bash -c "diff -u --from-file /dev/null ${files} | ${wrapper} flake8 --diff" exit fi run_tests # NOTE(sirp): we only want to run pep8 when we're running the full-test suite, # not when we're running tests individually. To handle this, we need to # distinguish between options (testropts), which begin with a '-', and # arguments (testrargs). if [ -z "$testrargs" ]; then if [ $no_pep8 -eq 0 ]; then run_pep8 fi fi nova-lxd-17.0.0/ChangeLog0000664000175100017510000006467313246266434015157 0ustar zuulzuul00000000000000CHANGES ======= 17.0.0 ------ * Fixed 'privileged' instance creation using config-drive * Add a test for destroying an instance when in rescue mode * Change the .testr.conf to .stestr.conf * Change .gitreview for stable/queens 17.0.0.0rc1 ----------- * Add placement api and client to devstack configuration * Replace the usage of some aliases in tempest * Switch to tempest.common.utils.requires\_ext * Add stestr to .gitignore file * Memory hog during image import from glance fixed * Zuul: Remove project name * Add capabilities flag "supports\_multiattach" flag * Updated from global requirements * Updated from global requirements * Updated from global requirements * Fixed import error of nova-lxd-tempest-plugin * Updated from global requirements * Updated from global requirements * Unblock nova-lxd gate * Add non-voting nova-lxd functional test * Redux use of InstanceInfo object 16.0.0.0rc1 ----------- * Implemented resume\_state\_on\_host\_boot driver callback * Raise understandable by nova exception when LXD instance not found on host * Added support for the LXD unified tarball format * Updated from global requirements * Replace deprecated test.attr with decorators.attr * Updated from global requirements * Update the documentation link for doc migration * Fix traceback when running nova-console * vif: redux interface wiring approach * devstack: disable volume boot pattern tests * Updated from global requirements * Tidy tox targets, fix coverage reporting * Blacklist test\_get\_server\_diagnostics tests * Enable tempest interface related tests * Updated from global requirements * Updated from global requirements * Refactor container VIF handling for linuxbridge * Re-check image alias prior to sync from glance * Misc fixes for devstack gate tests * Using assertIsNone(xxx) instead of assertEqual(None, xxx) * Add storage pool support * Pass readonly flag as a string, not a boolean * Update output to match our specs 16.0.0.0b1 ---------- * Report ZFS pool capacity and usage statistics * Updated from global requirements * Ensure config-drive is read-only * Fix config-drive support inline with cloud-init * Allow mounting more device types * Remove log translations * Local copy of scenario test base class * Fix typo in documentation * Pass client object to flavor.to\_profile calls * Switch nova and upper-constraints back to master branches * Switch to use stable remote\_client * Updated from global requirements * Updated from global requirements * Switch to use stable data\_utils * [Fix gate]Update test requirement * Updated from global requirements * Update constraints to use stable/ocata * Update gitreview for stable/ocata branch 15.0.0 ------ * Manual backport of attach/detach interface patch * attach\_interface and detach\_interface now take a context parameter * Updated from global requirements 15.0.0.0rc1 ----------- * Updated from global requirements * Remove the last remnants of LXDSession use * Switch to decorators.idempotent\_id * Updated from global requirements * Refactor the flavor <-> profile work * Refactor storage handling * Update the glance<->lxd image sync * Updated from global requirements * Align with move of hv\_type -> fields.HVType & vm\_mode -> fields.VMMode * Align with move of arch -> fields.Architecture * Fix "unary operator expected" error * Consolidate profile creation tooling * Remove erroneous XXX comments * Move nova-lxd to use os-vif * Fix image alias creation * Remove profile-related methods from nova.virt.lxd.session * Do not write or read from the database in the LXDDriver tests * Remove the simple nova.virt.lxd.session image methods * Remove the remainder of the small container methods from session * Remove dead code in nova.virt.lxd.session * Updated from global requirements * Updated from global requirements * Use constratints when installing requirements * Remove some unneeded/unused dependencies * Updated from global requirements * Expand power state querying and resolution * Remove \`nova.virt.lxd.migrate\` * Remove the need for the \`utils\` module * Fix devstack plugin to be smarter about the work it does * Updated from global requirements * Wait for contianer to start before querying state * Fix unit tests to work with pylxd 2.1.2 * Update blacklist * Simplify vif plugging * Updated from global requirements * Updated from global requirements * Fix attach/detach network interface * Reset blacklist * Add post\_test\_hook.sh * Add pre\_test\_hook.sh * Allow devstack plugin to manage zfs * Remove validation * Add tempest-dsvm-lxd-rc * Remove pylxd from devstack installation 14.0.0 ------ * Reset vifs after reboot * Add LXC cirros image to devstack configuration 14.0.0.0rc1 ----------- * Update snapshot, fixing the hanging issues * Updated from global requirements * Updated from global requirements * Update test\_create\_server tests * Remove discover from test-requirements * Align directory structure with upstream * Add scenario tests for block storage * Fix the issue where the host cpu socket count is incorrect * Add unit tests to verify block storage * Set volume device name for tempest * Updated from global requirements * Fix usability issue when attaching block device * Cleanup after container start fails * Re-add pylxd to requirements.txt * Better directory organisation * Add the exclusive machine scheduling docs * Re-add support for configdrive * Narrow down the flake8 call * Dont raise exception if instance is not found * Updated from global requirements 14.0.0.0b3 ---------- * Update blacklist * add code-block to CONTRIBUTING.rst * Mock out fileutils entirely in the driver tests * Add nova-lxd tempest plugin 14.0.0.0b2 ---------- * Check for container exists before deletion * Clean up LXD container and profile if failure * Enable user namespace for ext4 * Run tempest validation * Add script to run tempest.api.compute * Updated from global requirements * Disable encrypted volume tests * Fix race when unrescuing instance * Fix container deletion in 'paused' case * Set container format when doing a snapshot * Verify contianer power on * Verify container power off * Check volume exists when detaching * Update tempest config * Recreate instance bridges after reboot * Set container\_format when uploading image * Simplify get\_info method * Fix destroy while container is shut off * Configure glance disk-formats * Make devstack plugin a bit more user friendly * Fix container rescue/unrescue * Install the rootwrap filters * Remove check for running instances during destroy * Replace iscsi\_use\_multipath with volume\_use\_multipath * Updated from global requirements * Fix freeze/unfreeze * Updated from global requirements * Updated from global requirements * Simplify empheral profile creation * Remove unused LXD config option * Update and test the last of the LXDDriver non-migration methods * Remove profile after migration * Update usage.rst doc to reflect new module path * Fix various typos found * Update power\_off/power\_on/get\_available\_resource * Update live-migration * Rework rescue/unrescue methods * Update filters * Add os-brick * Add support for lvm ephemeral storage * Do a full driver audit of methods that are unimplemented * Rework LXDDriver pause, unpause, suspend, resume * Update nova-lxd for charm * Add support for btrfs ephemeral storage * Remove version 14.0.0.0b1 ---------- * Ensure that oslo\_utils.fileutils is mocked * [RFC] Add support for VIF\_TYPE\_TAP * Fix typos * Add support for zfs backend * Add support for persistent block devices * Move migrate\_disk\_and\_power\_off to LXDDriver * Remove uneeded file * Update attach\_interface and detach\_interface in LXDDriver * Remove dead code * Updated from global requirements * Update a number of driver methods, and document the missing ones * lxd installation with devstack on ubuntu 14.04 * Remove configdrive * Update LXDDriver.destroy and LXDDriver.cleanup * Rework LXDDriver.spawn to use the new pylxd API * Modernize more of LXDDriver * Remove usage of xz * Begin the modernization of LXDDriver * Move constants module contents to where it's used * Move LXDContainerConfig methods in LXDDriver * Move image handling into LXDDriver * Add support for ephemeral storage * Fix uploading images * Rework filepathing to not need a whole class * Move the vif methods into migrate * Move container power state manipulation to LXDDriver * Move more LXDContainerOperations to LXDDriver * Move \_uid\_map to LXDDriver * Move LXDContainerOperations.get\_info to LXDDriver.get\_info * De-rabbithole reboot method * Remove dead code * De-rabbit hold get\_console\_output method * De-rabbit hole instance destroy * De-rabbithole container spawning * Use the new pylxd api for list\_instances * Make container\_firewall die a death * Remove container\_snapshot * Remove virtapi parameter from places it's unused * Remove LXDHost * Updated from global requirements * Remove LXDHost.host\_ping * Fix networking with LinuxBridge * Updated from global requirements * Consolidate device configuration * Fix source and destination host * Fix traceback when migrating containers * Add support for live migration * Updated from global requirements * Fix package installs * Remove all traces of github usage * Add mocking to ensure test isolation from fs * Correct repository location for gitreview * Fix rendering of README.md * Fix pep8 * Fix pep8 * Remove container\_defined method usage * Remove dead crufty unused code * Update devstack local.conf.sample * Show actual error message when LXD API fails * fixed unit conversions in network quotas * Transition to nova.conf * Fix pep8 * More compute driver syncs * extra\_specs are int + incorrect quota value warning + more pythonic * tests for disk and nic quotas * skip defining 'limits.max' if a more concrete quota has been defined * respect disk and network quotas defined in OS::Compute::Quota MD namespace * Fix typo * Sync with compute driver * No python 3.4 on xenial * importable Glance metadata definition file * Fix tox.ini * Update travis * Disable config drive by default * Fix pep8 * Fid unit tests to build with py34 * Fix unit tests with py27 * Clean up setup.cfg * Add sample local.conf.sample * Remove linux-image-converter * Update devstack configuration * Fix typo in devstack * Fix nova-lxd driver loading * Doc: improve README.md * Update readme for better detail for using with devstack * modified tests to use lxd: namespace in extra\_specs * More travis fixes * Fix travis * Update jenkins * Move test-requirements to tox.ini * use lxd: namespace for flavor extra\_specs * Adjust permissions for configdrive * Re-add check for root-tar image format * Change location of console.log * Tag 13.0.0 * Query the LXD API for memory usage * Fix issue #20 * Fix migration with configdrive * Fix spelling typos * Remove old LXD console configuration * Fix launching instance with configdrive * fix pep8 * Fix logic error * Fix migration with configdrive * Revert "Wire in live migration" * Revert "Fix pep8" * Fix pep8 * Wire in live migration * Fix unit tests * Simplify block detection * Adjust function name * Check for LXD block devices * Fix pep8 * Remove extra debug messages * Bump version * Use certificate when copying hosts * Fix the imports for pylxd.exceptions to pylxd.deprecated.exceptions * Add a broken dependency issue * Bump version to 13.0.0b2 * Fix pep8 * Fix disk limits * Update travis * Update requirements * Fix typo * Fix pep8 * Add more unit tests * Add unit tests for profiles * Fix failing pep8 and unit tests * Fix various typos found * Refactor migration support * Fix typos * Add finish\_revert\_migration * Make functions non-private * Make sure profiles exist * Remove unimplemented methods * Cleanup log messages * Always connect to the unix socket * Implement resize and contianer resource limits * Rename container migration * Fix PEP * Change session to package rather than a module * Consolidate utils into session * Add session mixin consolidation * Consolidate profile session code * Unify the operations handling * Consolidate container session stuff * Consolidate image session handling * No * Whitespace.. * Add docs for setting everything up * Update unit tests and fix pep8 * Update unit tests * Fix bridge creation with LinuxBridge * Fix concurrency issues when running snapshots * Fix attach interface * Fix typos * Make snapshots less racey and update for newer LXD * Fix config drive * Re-enable console-log * Simplify and consolidate rescue/unrescue * Fix image uploading * Fix unplug bridge with nova-network * Fix coverage * Revert "Fix image\_meta properties for architecture" * Fix image\_meta properties for architecture * revert .testr.conf change * Fix coverage usage * Fix pep8 issues * Fix pep8 issues * Drop JSON encoding for supported\_instances * Remove files * Fix unit tests and pep8 * More doc strings * Fix pep8 issues * Remove import alias * Fix pep8 errors * Add more doc strings * Fix typos * Update note about config\_migration * Make it more pythonic * Add more doc strings * Fix spelling typos * Be more pythonic * Refactor image creation * Fix typos, unit tests and pep8 * Set timeout to -1 * Fix typos uncovered by tempest * Update unit tests and fix pep8 errors * Update unit tests * Remove return from methods * Fix unit tests * Fix pep8 errors * Refactor container output * Refactor container info * Refactor cleanup * Refactor unrescue * Remove dead code * Refactor container local copy * Refactor rescue * Refactor resume * Refactor suspend * Refactor pause, power\_on, power\_off * Refactor power off * Refactor container destroy * Refactor container reboot * Fix up typos * Re-add config drive support * Refactor spawn method * Start the network early * Rename container\_ops and test\_container\_ops * Fix unit tests and pep8 errors * Remove the usage of the default profile * Refactor LXD container creation * Rename container\_config and test\_container\_config * Bump version * Bump to v0.19 * Fix grammar * Fix pep8 * Fix typos * Improve error message for ephemeral devices * Fix almost empty line * Fix typo * Fix container\_migrate tests in py3 * Update travis * Check for container operation in iamge\_upload * More pep8 fixes * Fix pep8 and py27 errors * Add H404 to tox.ini * Ignore H405 * Re-add container\_migration tests * Update doc strings * Update docstring in tests * Check for container operation in iamge\_upload * Fix typo * Cache travis pip installation * Bump travis to test on 3.4 * WTF. Case got changed on the bool * Fix tests to run on python3 * Don't write pyc files when running tox * Fix a bug in the envlist for tox * Fix relative import to be a full path import * Re-add LXC hv\_types * Use LXD as hypervisor type * Use socket.gethostname() * Remove container\_utils and ontainer\_client * Refactor utils used by LXD functions * Refactor snapstop LXD functions * Refactor migration LXD functions * Refactor container LXD functions * Remove container\_client and container\_utils * Fix strings * Use hostname fqdn * Fix arch on non-x86\_64 arches * Fix pep8 and py27 errors * More typo fixes * Fix typos * Improve debug output * Split out fetch image * Add doc strings * Remove more dead code * Add image\_upload to session * Refactor container\_wait * Consolidate create\_alias * Add image\_ref to the fake instance * Use alias\_defined * Add docstring for tests * Refactor image\_defined * Fix fall out from renames * Rename test\_container\_image.py and container\_image.py * Remove dead code * Fix pep8 * Wait for image to upload * Transition from nova-compute-lxd to nova-lxd * Update readme * Update requirements * Fix alias creation * Split out LXDContainerDirectories * Update devstack config * Update default profile * Update test\_vif\_api * Update test\_driver\_api * Update test\_container\_utils * Update test\_container\_ops * Update test\_container\_migration * Update test\_container\_image * Update test\_container\_config * Update entry point * Update driver * Update container\_snapshot * Update container\_ops * Update container\_migrate * nclxd -> nova\_lxd transition * Update container\_client * Remove contrib directory * Update todo list * Update coverage * Update package defaults * nclxd becomes nova\_lxd * Pass the full operation url * Fix typo * Switch devstack installation to devstack-plugin * PEP8 * implemented plug\_vifs() and unplug\_vifs() driver methods, fixed missing import of nova.exception * Fix pep8 * Remove cpu limits * Fix pep8 errors * Fix container migration and resize * Update spec * Use snapshot id when creating images * Revert "Refactor network creation" * Revert "Fix pep8" * Fix pep8 * Refactor network creation * Bump to 0.18 * Bump to 0.18 * Fix container rescue/unrescue with lvm * Update get\_container\_dir * Transition to use instance.name * Handle container initialization failure * Re-add timeout * Fix unit tests * Add container migration unit tests * Fix vif creation * Fix unit tests * Spelling fixes * Fixup for rename of README.rst to README.md * Fix pep8 * Fix setup.cfg * Add unit tests * Fix contianer destroy tests * Stop the container stop before destroying * Specify the correct host * Remove wait\_for\_state * Fix exceptions and LOG info * Ensure vifs are unplugged during cleanup * Updates to README * More pep8 fixes * Fix image\_properties handling * fix pep8 * Update image property * Fix pep8 * Add negative test * Fix failing unit test * Generate LXD manifest from image\_meta * Always use unix socket * fix pep8 * Remove python3.4 checks * LXD test image fixes * Remove LXDTestContainerUtils * Fix LXDTestNetworkDriver * Fix LXDContainerOps * Update travis config * Fix LXDTestDriver * Fix LXDTestDriverNoops tests * Fix flake8 errors * Add travis * Update readme * Ensure image upload activities are guarded against concurrency * Drop update for glance image metadata * Use unix socket if hosts matches up * Remove unused variables * Fix test\_container\_config * Create a fake instance * Split out mock functions * Version bump * Use os-testr * Adjust image properties * Update setup.cfg * Container check * small fixes * Fix typo * Remove host check * Update tests for container\_config * Add cold migration support * Destroy check * Refactor container creation and operations * Capture back traces * Spawn a container into a seperate thread * Refactor container rescue/unrescue * fix snapshot name * fix another typo * wire up container snapshot * dont sent task\_state directly * operation-show -> operation-info * Fix container\_stop * Rescue * fix typo * Query snapshot status * Update rescue continer - part 1 * Update continer - part 1 * Pep8 fixes * Fix typos * fix bad merge * Update attach interface * Fix typos * remove debug info * Simplify vif plugging * Improve debugging messages * Dont upload image if exists * Just do a single init * Update snapshot support * Fix various typos * Ensure that we are on the right host * Fix debug info * Re-add suspend/resume * Fix unknown power\_state * Fix container pause/unpause * Reset container migration * Dont stop container before deleting it * Update requirements * Check status\_code for power\_state * Reset migration * Optimize image creation and startup * Optimize image creation usage * Re-add bridge support * Refactor network creation * Release 0.16 * Add new features: * Refactor init\_host * Fix up instance.uuid migration * Revert "Rename instance.name to instance.uuid." * Rename instance.name to instance.uuid * Fix pep8 and flake8 tests * Update test-requirements * Update requirements * devstack: Fix the README some more * Fix devstack integration * Cover more cases for cpu stats * Add failing snapshot test * Add failing wait tests * Drop container\_utils tests * Cover container\_update fail * Cover container\_init fail * Simplify test\_rescue\_defined * Cover the 404 case of container\_defined * Add bridge type vif tests * Add OVS unplug tests * Add LXDTestOVSDriver * Add uuid to MockInstance * Add test\_node\_is\_available * Add test\_get\_available\_nodes * Add test\_get\_host\_uptime * Don't use iteritems * Fix firewall calls * Add test\_firewall\_calls * Don't handle PyLXDException in container\_ops * Fix test\_detach\_interface * Add test\_attach\_interface * Fix PEP8 issues * Fix raise * Add get\_available\_resource tests * Refactor simple cases * Add power\_off tests * Move rescue tests to driver suite * Add pause test * Add simple / noop method tests * Add test\_snapshot * Don't catch all exceptions * Add detach interface tests * Fix NotImplementedError raises * Add more NotImplemented names * Add test\_get\_host\_ip\_addr * Add more NotImplemented tests * Move console test to driver suite * Add reboot tests * Add cleanup test * This line was never reached * Fix typo * Add destroy tests * Move test\_spawn\_ to driver suite * Add not implemented suite * Add test\_estimate\_instance\_overhead * Add test\_instance\_exists * Move test\_get\_info to driver suite * Move list\_instances test to driver suite * Move init tests to driver suite * Add LXDDriver config test * Use fake virtapi * Drop network\_info from tests * Adapt tests to new paths * Check for host before creating profile * Add container\_info * Add container\_config * Removed unused variable * Update root\_dir * wire up get\_host\_cpu\_stats * Dropped reset\_network * Deprecate inject\_file and inject\_network\_info * Drop the redundant 'lxd' prefix in config options * Fix py3 compatibility * Move container\_ops to oslo\_utils, too * Fix PEP8 issues * More fixes * More fileutils fixes * Switch to fileutils * Update requirements * Add attach/detach interface * Rename container\_info to container\_state * Fix container\_defined * Add test\_init\_host\_new\_profile * Fix PEP8 * Add test\_get\_console\_output * Add test\_get\_info * Add test\_container\_unrescue * Add rescue tests * Fix oslo\_utils import * Fix log message * Simplify container\_running * Fix fallback events type * Add test\_start\_instance * Add test\_create\_instance * Add failing create\_instance tests * Add test\_spawn\_new * Fix invocation of InstanceExists and raises * Add first container\_ops tests * Create default profile if it doesnt exist * Fix bad merge * tag 0.13 * Update requirements * Add security groups support * Add security groups support * Update requirements * Refactor image tests to mock CONF, not utils * Refactor config tests to mock CONF, not utils * Add test\_fetch\_image\_new * Add test\_fetch\_image\_new\_alias\_failed * Add test\_fetch\_image\_new\_upload\_failed * pylxd.API.image\_defined never raises 404 * Fix LXDContainerImage.fetch\_image * Add test\_fetch\_image\_new\_defined * Add tests for existing image * Move MockInstance to nclxd.tests * Add test\_configure\_container\_configdrive tests * Add test\_configure\_container\_rescuedisk * Add test\_configure\_network\_devices * Add test\_configure\_container\_config * Add first container\_config tests * Fix .coveragerc * Update requirements * Re-add missing import * Add requirement of pylxd since its in pypi * Fixes for python3 * Use absolute imports * Reorder imports * Fix typo in fix\_container\_configdrive name * Fix PEP8 errors * Fix container utils tests * Fix run\_tests.sh path for --debug * tag 0.12 * Fix container cleanup * image synching * Add snapshot support * More updates * remove cruft * cleanup * rescue/unrescue part 2 * rescue part 1 * rough in rescue/unrescue * Fix container config * Configure container devices * Rename function * Fix add-config for container configuration * Remove profile * create default profile * More cleanups * remove cruft * Add configdrive support * refactor container config * Split out profile and container configuration * Refactor container-configuration * Prep for rescue/unrescue * Add suspend/resume/pause/unpause * Add power on and power off * Add more unit tests * Add container reboot * final code refactor * Revert "nclxd ng" * Revert "Add get-available-nodes" * Revert "Remove cleanup\_host" * Revert "Add .idea directory" * Revert "Wire up list\_instances and list\_instances\_uuid." * Revert "wire up hosts" * wire up hosts * Wire up list\_instances and list\_instances\_uuid * Add .idea directory * Remove cleanup\_host * Add get-available-nodes * nclxd ng * Correct test cases * Remove unused file * Add firewall support * Fix up typos * Liberty fixes * Fix typo * devstack fixes * Update devstack support * Add Scott's image conversion script * Open for liberty * fix git repository * Add todo list * Various bug fixes * Updates for LXD 0.7 * Removed debug info * Update filters * Create the container at one shot * More fixes from lxd testing * Fixes from lxd testing * More fixes from lxd testing * Miscalenous vivid fixes for 0.5 * fix destroy * Vivid overlay * Add lxd load testing fixes * remove firewall * Add fixups from load testing * Work backwards for known units * Correct the dict * Correct hard disk calculation * converter remove verbose flag in tar creation * converter fix tar parameter quoting * converter remove extra tar parameter * Add support for security groups * Re-do lxd-image-converter * Update to version 0.5 * Add utiltity to convert tarballs * pep8 clean ups * Removed unused import * Adjust changes for nclxd * 0.3 updates * Update * Sync with 0.4 * Re-do image creation * Use pylxd instead * Switch to using pylxd * 0.2 changes * Debug * use instance img\_href * Create alias * Get fingerprint of the image * typo fix * Fetch image from glance * Update clients * Start of surgery * Re-do how we do container list * Loose logging * Loose logging * Use oslo.log * Speed up fixes * Fix console and other stuff * Clean up * Add version * Remove debug * Add debug * Fix lxc-usernet * Fix typo * Fix template * fix typo * Fix formatting * fix typo * default user * Config fix ups * Revert "Add default user" * More fixes * Fix typo * Add default user * Configure lxd a bit harder * Fix lxd * LXD changes * Fix LXC configuration file of console and logfile * Fix typo * Fix unix sockets * Use unix socket * Add filters * Add missing file * Add missing btrfs * More typos * KeyError * Specify the correct directory * Makre sure we are using ints * Fix containre stats * Add statistics for container * Remove ppa and fix up networking a bit more * Fix console log * Stop the container before destorying it * Force Stop * More whoopsies * Whoops * Fix typos * More config changes * Fix container console * Add missing images * More updates * Fix typo * Fix network again * Fix typo * Fix white space * More networking changes * More fixes * Updates * More fixes * Startover with tests * Fix requirements * Update deps * Update requirements * More test fixes * Add missing tools directory * Add more unit tests * Add run\_tests.sh * Improve tests * Fix pep8 * Add missing tox.ini * M00re fixes * more fixes * More fixes * Fix container reboot * Copyright headers and cleanup * Add missing files * More fixes * Start adding tests * Update filters * Misc fixes * Add missing files * Add missing files * Frist commit nova-lxd-17.0.0/doc/0000775000175100017510000000000013246266437014135 5ustar zuulzuul00000000000000nova-lxd-17.0.0/doc/source/0000775000175100017510000000000013246266437015435 5ustar zuulzuul00000000000000nova-lxd-17.0.0/doc/source/contributing.rst0000666000175100017510000000011213246266025020663 0ustar zuulzuul00000000000000============ Contributing ============ .. include:: ../../CONTRIBUTING.rstnova-lxd-17.0.0/doc/source/usage.rst0000666000175100017510000000012213246266025017261 0ustar zuulzuul00000000000000======== Usage ======== To use nova-lxd in a project:: import nova.virt.lxd nova-lxd-17.0.0/doc/source/conf.py0000777000175100017510000000462113246266025016735 0ustar zuulzuul00000000000000# -*- coding: utf-8 -*- # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import sys sys.path.insert(0, os.path.abspath('../..')) # -- General configuration ---------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ 'sphinx.ext.autodoc', #'sphinx.ext.intersphinx', 'oslosphinx' ] # autodoc generation is a bit aggressive and a nuisance when doing heavy # text edit cycles. # execute "export SPHINX_DEBUG=1" in your terminal to disable # The suffix of source filenames. source_suffix = '.rst' # The master toctree document. master_doc = 'index' # General information about the project. project = u'nova-lxd' copyright = u'2015, Canonical Ltd' # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). add_module_names = True # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # -- Options for HTML output -------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. # html_theme_path = ["."] # html_theme = '_theme' # html_static_path = ['static'] # Output file base name for HTML help builder. htmlhelp_basename = '%sdoc' % project # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass # [howto/manual]). latex_documents = [ ('index', '%s.tex' % project, u'%s Documentation' % project, u'OpenStack Foundation', 'manual'), ] # Example configuration for intersphinx: refer to the Python standard library. #intersphinx_mapping = {'http://docs.python.org/': None} nova-lxd-17.0.0/doc/source/vif_wiring.rst0000666000175100017510000000477413246266025020341 0ustar zuulzuul00000000000000Nova-LXD VIF Design Notes ========================= VIF plugging workflow --------------------- Nova-LXD makes use of the os-vif interface plugging library to wire LXD instances into underlying Neutron networking; however there are some subtle differences between the Nova-Libvirt driver and the Nova-LXD driver in terms of how the last mile wiring is done to the instances. In the Nova-Libvirt driver, Libvirt is used to start the instance in a paused state, which creates the required tap device and any required wiring to bridges created in previous os-vif plugging events. The concept of 'start-and-pause' does not exist in LXD, so the driver creates a veth pair instead, allowing the last mile wiring to be created in advance of the actual LXD container being created. This allows Neutron to complete the underlying VIF plugging at which point it will notify Nova and the Nova-LXD driver will create the LXD container and wire the pre-created veth pair into its profile. tap/tin veth pairs ------------------ The veth pair created to wire the LXD instance into the underlying Neutron networking uses the tap and tin prefixes; the tap named device is present on the host OS, allowing iptables based firewall rules to be applied as they are for other virt drivers, and the tin named device is passed to LXD as part of the container profile. LXD will rename this device internally within the container to an ethNN style name. The LXD profile devices for network interfaces are created as 'physical' rather than 'bridged' network devices as the driver handles creation of the veth pair, rather than LXD (as would happen with a bridged device). LXD profile interface naming ---------------------------- The name of the interfaces in each containers LXD profile maps to the devname provided by Neutron as part of VIF plugging - this will typically be of the format tapXXXXXXX. This allows for easier identification of the interface during detachment events later in instance lifecycle. Prior versions of the nova-lxd driver did not take this approach; interface naming was not consistent depending on when the interface was attached. The legacy code used to detach interfaces based on MAC address is used as a fallback in the event that the new style device name is not found, supporting upgraders from previous versions of the driver. Supported Interface Types ------------------------- The Nova-LXD driver has been validated with: - OpenvSwitch (ovs) hybrid bridge ports. - OpenvSwitch (ovs) standard ports. - Linuxbridge (bridge) ports nova-lxd-17.0.0/doc/source/index.rst0000666000175100017510000000077313246266025017300 0ustar zuulzuul00000000000000.. nova-lxd documentation master file, created by sphinx-quickstart on Tue Jul 9 22:26:36 2013. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Welcome to nova-lxd's documentation! ======================================================== Contents: .. toctree:: :maxdepth: 2 usage contributing exclusive_machine vif_wiring Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` nova-lxd-17.0.0/doc/source/exclusive_machine.rst0000666000175100017510000001530213246266025021656 0ustar zuulzuul00000000000000Nova-LXD Exclusive Machine ========================== As LXD is a system container format, it is possible to provision "bare metal" machines with nova-lxd without exposing the kernel and firmware to the tenant. This is done by means of host aggregates and flavor assignment. The instance will fill the entirety of the host, and no other instances will be assigned to it. This document describes the method used to achieve this exclusive machine scheduling. It is meant to serve as an example; the names of flavors and aggregates may be named as desired. Prerequisites ------------- Exclusive machine scheduling requires two scheduler filters to be enabled in `scheduler_default_filters` in `nova.conf`, namely `AggregateInstanceExtraSpecsFilter` and `AggregateNumInstancesFilter`. If juju was used to install and manage the openstack environment, the following command will enable these filters:: juju set nova-cloud-controller scheduler-default-filters="AggregateInstanceExtraSpecsFilter,AggregateNumInstancesFilter,RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter" Host Aggregate -------------- Each host designed to be exclusively available to a single instance must be added to a special host aggregate. In this example, the following is a nova host listing:: user@openstack$ nova host-list +------------+-----------+----------+ | host_name | service | zone | +------------+-----------+----------+ | machine-9 | cert | internal | | machine-9 | scheduler | internal | | machine-9 | conductor | internal | | machine-12 | compute | nova | | machine-11 | compute | nova | | machine-10 | compute | nova | +------------+-----------+----------+ Create the host aggregate itself. In this example, the aggregate is called "exclusive-machines":: user@openstack$ nova aggregate-create exclusive-machines +----+--------------------+-------------------+-------+----------+ | 1 | exclusive-machines | - | | | +----+--------------------+-------------------+-------+----------+ Two metadata properties are then set on the host aggregate itself:: user@openstack$ nova aggregate-set-metadata 1 aggregate_instance_extra_specs:exclusive=true Metadata has been successfully updated for aggregate 1. +----+--------------------+-------------------+-------+-------------------------------------------------+ | Id | Name | Availability Zone | Hosts | Metadata | +----+--------------------+-------------------+-------+-------------------------------------------------+ | 1 | exclusive-machines | - | | 'aggregate_instance_extra_specs:exclusive=true' | +----+--------------------+-------------------+-------+-------------------------------------------------+ user@openstack$ nova aggregate-set-metadata 1 max_instances_per_host=1 Metadata has been successfully updated for aggregate 1. +----+--------------------+-------------------+-------+-----------------------------------------------------------------------------+ | Id | Name | Availability Zone | Hosts | Metadata | +----+--------------------+-------------------+-------+-----------------------------------------------------------------------------+ | 1 | exclusive-machines | - | | 'aggregate_instance_extra_specs:exclusive=true', 'max_instances_per_host=1' | +----+--------------------+-------------------+-------+----------------------------------------------------------------------------- The first aggregate metadata property is the link between the flavor (still to be created) and the compute hosts (still to be added to the aggregate). The second metadata property ensures that nova doesn't ever try to add another instance to this one in (e.g. if nova is configured to overcommit resources). Now the hosts must be added to the aggregate. Once they are added to the host aggregate, they will not be available for other flavors. This will be important in resource sizing efforts. To add the hosts:: user@openstack$ nova aggregate-add-host exclusive-machines machine-10 Host juju-serverstack-machine-10 has been successfully added for aggregate 1 +----+--------------------+-------------------+--------------+-----------------------------------------------------------------------------+ | Id | Name | Availability Zone | Hosts | Metadata | +----+--------------------+-------------------+--------------+-----------------------------------------------------------------------------+ | 1 | exclusive-machines | - | 'machine-10' | 'aggregate_instance_extra_specs:exclusive=true', 'max_instances_per_host=1' | +----+--------------------+-------------------+--------------+-----------------------------------------------------------------------------+ Exclusive machine flavors ------------------------- When planning for exclusive machine flavors, there is still a small amount of various resources that will be needed for nova compute and lxd itself. In general, it's a safe bet that this can be quantified in 100MB of RAM, though specific hosts may need to be configured more closely to their use cases. In this example, `machine-10` has 4096MB of total memory, 2 CPUS, and 500GB of disk space. The flavor that is created will have a quantity of 3996MB of RAM, 2 CPUS, and 500GB of disk.:: user@openstack$ nova flavor-create --is-public true e1.medium 100 3996 500 2 +-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | +-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ | 100 | e1.medium | 3996 | 500 | 0 | | 2 | 1.0 | True | +-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+ The `e1.medium` flavor must now have some metadata set to link it with the `exclusive-machines` host aggregate.:: user@openstack$ nova flavor-key 100 set exclusive=true Booting an exclusive instance ----------------------------- Once the host aggregate and flavor have been created, exclusive machines can be provisioned by using the flavor `e1.medium`:: user@openstack$ nova boot --flavor 100 --image $IMAGE exclusive The `exclusive` instance, once provisioned, will fill the entire host machine. nova-lxd-17.0.0/nova/0000775000175100017510000000000013246266437014333 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova/virt/0000775000175100017510000000000013246266437015317 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova/virt/lxd/0000775000175100017510000000000013246266437016106 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova/virt/lxd/storage.py0000666000175100017510000001151713246266025020124 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from oslo_utils import fileutils from nova import exception from nova import utils from nova.virt import driver from nova.virt.lxd import common def attach_ephemeral(client, block_device_info, lxd_config, instance): """Attach ephemeral storage to an instance.""" ephemeral_storage = driver.block_device_info_get_ephemerals( block_device_info) if ephemeral_storage: storage_driver = lxd_config['environment']['storage'] container = client.containers.get(instance.name) container_id_map = container.config[ 'volatile.last_state.idmap'].split(',') storage_id = container_id_map[2].split(':')[1] instance_attrs = common.InstanceAttributes(instance) for ephemeral in ephemeral_storage: storage_dir = os.path.join( instance_attrs.storage_path, ephemeral['virtual_name']) if storage_driver == 'zfs': zfs_pool = lxd_config['config']['storage.zfs_pool_name'] utils.execute( 'zfs', 'create', '-o', 'mountpoint=%s' % storage_dir, '-o', 'quota=%sG' % instance.ephemeral_gb, '%s/%s-ephemeral' % (zfs_pool, instance.name), run_as_root=True) elif storage_driver == 'btrfs': # We re-use the same btrfs subvolumes that LXD uses, # so the ephemeral storage path is updated in the profile # before the container starts. storage_dir = os.path.join( instance_attrs.container_path, ephemeral['virtual_name']) profile = client.profiles.get(instance.name) storage_name = ephemeral['virtual_name'] profile.devices[storage_name]['source'] = storage_dir profile.save() utils.execute( 'btrfs', 'subvolume', 'create', storage_dir, run_as_root=True) utils.execute( 'btrfs', 'qgroup', 'limit', '%sg' % instance.ephemeral_gb, storage_dir, run_as_root=True) elif storage_driver == 'lvm': fileutils.ensure_tree(storage_dir) lvm_pool = lxd_config['config']['storage.lvm_vg_name'] lvm_volume = '%s-%s' % (instance.name, ephemeral['virtual_name']) lvm_path = '/dev/%s/%s' % (lvm_pool, lvm_volume) cmd = ( 'lvcreate', '-L', '%sG' % instance.ephemeral_gb, '-n', lvm_volume, lvm_pool) utils.execute(*cmd, run_as_root=True, attempts=3) utils.execute('mkfs', '-t', 'ext4', lvm_path, run_as_root=True) cmd = ('mount', '-t', 'ext4', lvm_path, storage_dir) utils.execute(*cmd, run_as_root=True) else: reason = _('Unsupport LXD storage detected. Supported' ' storage drivers are zfs and btrfs.') raise exception.NovaException(reason) utils.execute( 'chown', storage_id, storage_dir, run_as_root=True) def detach_ephemeral(block_device_info, lxd_config, instance): """Detach ephemeral device from the instance.""" ephemeral_storage = driver.block_device_info_get_ephemerals( block_device_info) if ephemeral_storage: storage_driver = lxd_config['environment']['storage'] for ephemeral in ephemeral_storage: if storage_driver == 'zfs': zfs_pool = lxd_config['config']['storage.zfs_pool_name'] utils.execute( 'zfs', 'destroy', '%s/%s-ephemeral' % (zfs_pool, instance.name), run_as_root=True) if storage_driver == 'lvm': lvm_pool = lxd_config['config']['storage.lvm_vg_name'] lvm_path = '/dev/%s/%s-%s' % ( lvm_pool, instance.name, ephemeral['virtual_name']) utils.execute('umount', lvm_path, run_as_root=True) utils.execute('lvremove', '-f', lvm_path, run_as_root=True) nova-lxd-17.0.0/nova/virt/lxd/vif.py0000666000175100017510000002031713246266025017242 0ustar zuulzuul00000000000000# Copyright (c) 2015 Canonical Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_concurrency import processutils from oslo_log import log as logging from nova import conf from nova import exception from nova import utils from nova.network import linux_net from nova.network import model as network_model from nova.network import os_vif_util import os_vif CONF = conf.CONF LOG = logging.getLogger(__name__) def get_vif_devname(vif): """Get device name for a given vif.""" if 'devname' in vif: return vif['devname'] return ("nic" + vif['id'])[:network_model.NIC_NAME_LEN] def get_vif_internal_devname(vif): """Get the internal device name for a given vif.""" return get_vif_devname(vif).replace('tap', 'tin') def _create_veth_pair(dev1_name, dev2_name, mtu=None): """Create a pair of veth devices with the specified names, deleting any previous devices with those names. """ for dev in [dev1_name, dev2_name]: linux_net.delete_net_dev(dev) utils.execute('ip', 'link', 'add', dev1_name, 'type', 'veth', 'peer', 'name', dev2_name, run_as_root=True) for dev in [dev1_name, dev2_name]: utils.execute('ip', 'link', 'set', dev, 'up', run_as_root=True) linux_net._set_device_mtu(dev, mtu) def _add_bridge_port(bridge, dev): utils.execute('brctl', 'addif', bridge, dev, run_as_root=True) def _is_no_op_firewall(): return CONF.firewall_driver == "nova.virt.firewall.NoopFirewallDriver" def _is_ovs_vif_port(vif): return vif['type'] == 'ovs' and not vif.is_hybrid_plug_enabled() def _get_bridge_config(vif): return { 'bridge': vif['network']['bridge'], 'mac_address': vif['address']} def _get_ovs_config(vif): if not _is_no_op_firewall() or vif.is_hybrid_plug_enabled(): return { 'bridge': ('qbr{}'.format(vif['id']))[:network_model.NIC_NAME_LEN], 'mac_address': vif['address']} else: return { 'bridge': vif['network']['bridge'], 'mac_address': vif['address']} def _get_tap_config(vif): return {'mac_address': vif['address']} CONFIG_GENERATORS = { 'bridge': _get_bridge_config, 'ovs': _get_ovs_config, 'tap': _get_tap_config, } def get_config(vif): """Get LXD specific config for a vif.""" vif_type = vif['type'] try: return CONFIG_GENERATORS[vif_type](vif) except KeyError: raise exception.NovaException( 'Unsupported vif type: {}'.format(vif_type)) # VIF_TYPE_OVS = 'ovs' # VIF_TYPE_BRIDGE = 'bridge' def _post_plug_wiring_veth_and_bridge(instance, vif): config = get_config(vif) network = vif.get('network') mtu = network.get_meta('mtu') if network else None v1_name = get_vif_devname(vif) v2_name = get_vif_internal_devname(vif) if not linux_net.device_exists(v1_name): _create_veth_pair(v1_name, v2_name, mtu) if _is_ovs_vif_port(vif): # NOTE(jamespage): wire tap device directly to ovs bridge linux_net.create_ovs_vif_port(vif['network']['bridge'], v1_name, vif['id'], vif['address'], instance.uuid, mtu) else: # NOTE(jamespage): wire tap device linux bridge _add_bridge_port(config['bridge'], v1_name) else: linux_net._set_device_mtu(v1_name, mtu) POST_PLUG_WIRING = { 'bridge': _post_plug_wiring_veth_and_bridge, 'ovs': _post_plug_wiring_veth_and_bridge, } def _post_plug_wiring(instance, vif): """Perform nova-lxd specific post os-vif plug processing :param vif: a nova.network.model.VIF instance Perform any post os-vif plug wiring requires to network the instance LXD container with the underlying Neutron network infrastructure """ LOG.debug("Performing post plug wiring for VIF %s", vif) vif_type = vif['type'] try: POST_PLUG_WIRING[vif_type](instance, vif) except KeyError: LOG.debug("No post plug wiring step " "for vif type: {}".format(vif_type)) # VIF_TYPE_OVS = 'ovs' # VIF_TYPE_BRIDGE = 'bridge' def _post_unplug_wiring_delete_veth(instance, vif): v1_name = get_vif_devname(vif) try: if _is_ovs_vif_port(vif): linux_net.delete_ovs_vif_port(vif['network']['bridge'], v1_name, True) else: linux_net.delete_net_dev(v1_name) except processutils.ProcessExecutionError: LOG.exception("Failed to delete veth for vif", vif=vif) POST_UNPLUG_WIRING = { 'bridge': _post_unplug_wiring_delete_veth, 'ovs': _post_unplug_wiring_delete_veth, } def _post_unplug_wiring(instance, vif): """Perform nova-lxd specific post os-vif unplug processing :param vif: a nova.network.model.VIF instance Perform any post os-vif unplug wiring requires to remove network interfaces assocaited with a lxd container. """ LOG.debug("Performing post unplug wiring for VIF %s", vif) vif_type = vif['type'] try: POST_UNPLUG_WIRING[vif_type](instance, vif) except KeyError: LOG.debug("No post unplug wiring step " "for vif type: {}".format(vif_type)) class LXDGenericVifDriver(object): """Generic VIF driver for LXD networking.""" def __init__(self): os_vif.initialize() def plug(self, instance, vif): vif_type = vif['type'] instance_info = os_vif_util.nova_to_osvif_instance(instance) # Try os-vif codepath first vif_obj = os_vif_util.nova_to_osvif_vif(vif) if vif_obj is not None: os_vif.plug(vif_obj, instance_info) else: # Legacy non-os-vif codepath func = getattr(self, 'plug_%s' % vif_type, None) if not func: raise exception.InternalError( "Unexpected vif_type=%s" % vif_type ) func(instance, vif) _post_plug_wiring(instance, vif) def unplug(self, instance, vif): vif_type = vif['type'] instance_info = os_vif_util.nova_to_osvif_instance(instance) # Try os-vif codepath first vif_obj = os_vif_util.nova_to_osvif_vif(vif) if vif_obj is not None: os_vif.unplug(vif_obj, instance_info) else: # Legacy non-os-vif codepath func = getattr(self, 'unplug_%s' % vif_type, None) if not func: raise exception.InternalError( "Unexpected vif_type=%s" % vif_type ) func(instance, vif) _post_unplug_wiring(instance, vif) def plug_tap(self, instance, vif): """Plug a VIF_TYPE_TAP virtual interface.""" v1_name = get_vif_devname(vif) v2_name = get_vif_internal_devname(vif) network = vif.get('network') mtu = network.get_meta('mtu') if network else None # NOTE(jamespage): For nova-lxd this is really a veth pair # so that a) security rules get applied on the host # and b) that the container can still be wired. if not linux_net.device_exists(v1_name): _create_veth_pair(v1_name, v2_name, mtu) else: linux_net._set_device_mtu(v1_name, mtu) def unplug_tap(self, instance, vif): """Unplug a VIF_TYPE_TAP virtual interface.""" dev = get_vif_devname(vif) try: linux_net.delete_net_dev(dev) except processutils.ProcessExecutionError: LOG.exception("Failed while unplugging vif", instance=instance) nova-lxd-17.0.0/nova/virt/lxd/session.py0000666000175100017510000001676013246266025020150 0ustar zuulzuul00000000000000# Copyright 2015 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See # the License for the specific language governing permissions and # limitations under the License. import nova.conf from nova import context as nova_context from nova import exception from nova import i18n from nova import rpc from oslo_log import log as logging from oslo_utils import excutils from pylxd.deprecated import api from pylxd.deprecated import exceptions as lxd_exceptions _ = i18n._ CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class LXDAPISession(object): """The session to invoke the LXD API session.""" def get_session(self, host=None): """Returns a connection to the LXD hypervisor This method should be used to create a connection to the LXD hypervisor via the pylxd API call. :param host: host is the LXD daemon to connect to :return: pylxd object """ try: if host: return api.API(host=host) else: return api.API() except Exception as ex: # notify the compute host that the connection failed # via an rpc call LOG.exception('Connection to LXD failed') payload = dict(ip=CONF.host, method='_connect', reason=ex) rpc.get_notifier('compute').error(nova_context.get_admin_context, 'compute.nova_lxd.error', payload) raise exception.HypervisorUnavailable(host=CONF.host) # # Container related API methods # def container_init(self, config, instance, host=None): """Create a LXD container :param config: LXD container config as a dict :param instance: nova instance object :param host: perform initialization on perfered host """ LOG.debug('container_init called for instance', instance=instance) try: LOG.info('Creating container %(instance)s with' ' %(image)s', {'instance': instance.name, 'image': instance.image_ref}) client = self.get_session(host=host) (state, data) = client.container_init(config) operation = data.get('operation') self.operation_wait(operation, instance, host=host) status, data = self.operation_info(operation, instance, host=host) data = data.get('metadata') if not data['status_code'] == 200: msg = data.get('err') or data['metadata'] raise exception.NovaException(msg) LOG.info('Successfully created container %(instance)s with' ' %(image)s', {'instance': instance.name, 'image': instance.image_ref}) except lxd_exceptions.APIError as ex: msg = _('Failed to communicate with LXD API %(instance)s:' ' %(reason)s') % {'instance': instance.name, 'reason': ex} raise exception.NovaException(msg) except Exception as ex: with excutils.save_and_reraise_exception(): LOG.error( 'Failed to create container %(instance)s: %(reason)s', {'instance': instance.name, 'reason': ex}, instance=instance) # # Operation methods # def operation_wait(self, operation_id, instance, host=None): """Waits for an operation to return 200 (Success) :param operation_id: The operation to wait for. :param instance: nova instace object """ LOG.debug('wait_for_container for instance', instance=instance) try: client = self.get_session(host=host) if not client.wait_container_operation(operation_id, 200, -1): msg = _('Container creation timed out') raise exception.NovaException(msg) except lxd_exceptions.APIError as ex: msg = _('Failed to communicate with LXD API %(instance)s:' '%(reason)s') % {'instance': instance.image_ref, 'reason': ex} LOG.error(msg) raise exception.NovaException(msg) except Exception as e: with excutils.save_and_reraise_exception(): LOG.error('Error from LXD during operation wait' '%(instance)s: %(reason)s', {'instance': instance.image_ref, 'reason': e}, instance=instance) def operation_info(self, operation_id, instance, host=None): LOG.debug('operation_info called for instance', instance=instance) try: client = self.get_session(host=host) return client.operation_info(operation_id) except lxd_exceptions.APIError as ex: msg = _('Failed to communicate with LXD API %(instance)s:' ' %(reason)s') % {'instance': instance.image_ref, 'reason': ex} LOG.error(msg) raise exception.NovaException(msg) except Exception as e: with excutils.save_and_reraise_exception(): LOG.error('Error from LXD during operation_info ' '%(instance)s: %(reason)s', {'instance': instance.image_ref, 'reason': e}, instance=instance) # # Migrate methods # def container_migrate(self, instance_name, host, instance): """Initialize a container migration for LXD :param instance_name: container name :param host: host to move container from :param instance: nova instance object :return: dictionary of the container keys """ LOG.debug('container_migrate called for instance', instance=instance) try: LOG.info('Migrating instance %(instance)s with ' '%(image)s', {'instance': instance_name, 'image': instance.image_ref}) client = self.get_session() (state, data) = client.container_migrate(instance_name) LOG.info('Successfully initialized migration for instance ' '%(instance)s with %(image)s', {'instance': instance.name, 'image': instance.image_ref}) return (state, data) except lxd_exceptions.APIError as ex: msg = _('Failed to communicate with LXD API %(instance)s:' ' %(reason)s') % {'instance': instance.name, 'reason': ex} raise exception.NovaException(msg) except Exception as ex: with excutils.save_and_reraise_exception(): LOG.error( 'Failed to migrate container %(instance)s: %(' 'reason)s', {'instance': instance.name, 'reason': ex}, instance=instance) nova-lxd-17.0.0/nova/virt/lxd/flavor.py0000666000175100017510000001656713246266025017763 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from nova import exception from nova import i18n from nova.virt import driver from oslo_config import cfg from oslo_utils import units from nova.virt.lxd import common from nova.virt.lxd import vif _ = i18n._ CONF = cfg.CONF def _base_config(instance, _): instance_attributes = common.InstanceAttributes(instance) return { 'environment.product_name': 'OpenStack Nova', 'raw.lxc': 'lxc.console.logfile={}\n'.format( instance_attributes.console_path), } def _nesting(instance, _): if instance.flavor.extra_specs.get('lxd:nested_allowed'): return {'security.nesting': 'True'} def _security(instance, _): if instance.flavor.extra_specs.get('lxd:privileged_allowed'): return {'security.privileged': 'True'} def _memory(instance, _): mem = instance.memory_mb if mem >= 0: return {'limits.memory': '{}MB'.format(mem)} def _cpu(instance, _): vcpus = instance.flavor.vcpus if vcpus >= 0: return {'limits.cpu': str(vcpus)} def _isolated(instance, client): lxd_isolated = instance.flavor.extra_specs.get('lxd:isolated') if lxd_isolated: extensions = client.host_info.get('api_extensions', []) if 'id_map' in extensions: return {'security.idmap.isolated': 'True'} else: msg = _('Host does not support isolated instances') raise exception.NovaException(msg) _CONFIG_FILTER_MAP = [ _base_config, _nesting, _security, _memory, _cpu, _isolated, ] def _root(instance, client, *_): """Configure the root disk.""" device = {'type': 'disk', 'path': '/'} environment = client.host_info['environment'] if environment['storage'] in ['btrfs', 'zfs'] or CONF.lxd.pool: device['size'] = '{}GB'.format(instance.root_gb) specs = instance.flavor.extra_specs # Bytes and iops are not separate config options in a container # profile - we let Bytes take priority over iops if both are set. # Align all limits to MiB/s, which should be a sensible middle road. if specs.get('quota:disk_read_iops_sec'): device['limits.read'] = '{}iops'.format( specs['quota:disk_read_iops_sec']) if specs.get('quota:disk_write_iops_sec'): device['limits.write'] = '{}iops'.format( specs['quota:disk_write_iops_sec']) if specs.get('quota:disk_read_bytes_sec'): device['limits.read'] = '{}MB'.format( int(specs['quota:disk_read_bytes_sec']) // units.Mi) if specs.get('quota:disk_write_bytes_sec'): device['limits.write'] = '{}MB'.format( int(specs['quota:disk_write_bytes_sec']) // units.Mi) minor_quota_defined = 'limits.write' in device or 'limits.read' in device if specs.get('quota:disk_total_iops_sec') and not minor_quota_defined: device['limits.max'] = '{}iops'.format( specs['quota:disk_total_iops_sec']) if specs.get('quota:disk_total_bytes_sec') and not minor_quota_defined: device['limits.max'] = '{}MB'.format( int(specs['quota:disk_total_bytes_sec']) // units.Mi) if CONF.lxd.pool: extensions = client.host_info.get('api_extensions', []) if 'storage' in extensions: device['pool'] = CONF.lxd.pool else: msg = _('Host does not have storage pool support') raise exception.NovaException(msg) return {'root': device} def _ephemeral_storage(instance, client, __, block_info): instance_attributes = common.InstanceAttributes(instance) ephemeral_storage = driver.block_device_info_get_ephemerals(block_info) if ephemeral_storage: devices = {} for ephemeral in ephemeral_storage: ephemeral_src = os.path.join( instance_attributes.storage_path, ephemeral['virtual_name']) device = { 'path': '/mnt', 'source': ephemeral_src, 'type': 'disk', } if CONF.lxd.pool: extensions = client.host_info.get('api_extensions', []) if 'storage' in extensions: device['pool'] = CONF.lxd.pool else: msg = _('Host does not have storage pool support') raise exception.NovaException(msg) devices[ephemeral['virtual_name']] = device return devices def _network(instance, _, network_info, __): if not network_info: return devices = {} for vifaddr in network_info: cfg = vif.get_config(vifaddr) devname = vif.get_vif_devname(vifaddr) key = devname devices[key] = { 'nictype': 'physical', 'hwaddr': str(cfg['mac_address']), 'parent': vif.get_vif_internal_devname(vifaddr), 'type': 'nic' } specs = instance.flavor.extra_specs # Since LXD does not implement average NIC IO and number of burst # bytes, we take the max(vif_*_average, vif_*_peak) to set the peak # network IO and simply ignore the burst bytes. # Align values to MBit/s (8 * powers of 1000 in this case), having # in mind that the values are recieved in Kilobytes/s. vif_inbound_limit = max( int(specs.get('quota:vif_inbound_average', 0)), int(specs.get('quota:vif_inbound_peak', 0)), ) if vif_inbound_limit: devices[key]['limits.ingress'] = '{}Mbit'.format( vif_inbound_limit * units.k * 8 // units.M) vif_outbound_limit = max( int(specs.get('quota:vif_outbound_average', 0)), int(specs.get('quota:vif_outbound_peak', 0)), ) if vif_outbound_limit: devices[key]['limits.egress'] = '{}Mbit'.format( vif_outbound_limit * units.k * 8 // units.M) return devices _DEVICE_FILTER_MAP = [ _root, _ephemeral_storage, _network, ] def to_profile(client, instance, network_info, block_info, update=False): """Convert a nova flavor to a lxd profile. Every instance container created via nova-lxd has a profile by the same name. The profile is sync'd with the configuration of the container. When the instance container is deleted, so is the profile. """ name = instance.name config = {} for f in _CONFIG_FILTER_MAP: new = f(instance, client) if new: config.update(new) devices = {} for f in _DEVICE_FILTER_MAP: new = f(instance, client, network_info, block_info) if new: devices.update(new) if update is True: profile = client.profiles.get(name) profile.devices = devices profile.config = config profile.save() return profile else: return client.profiles.create(name, config, devices) nova-lxd-17.0.0/nova/virt/lxd/__init__.py0000666000175100017510000000007713246266025020216 0ustar zuulzuul00000000000000from nova.virt.lxd import driver LXDDriver = driver.LXDDriver nova-lxd-17.0.0/nova/virt/lxd/common.py0000666000175100017510000000245513246266025017751 0ustar zuulzuul00000000000000# Copyright 2015 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import os from nova import conf _InstanceAttributes = collections.namedtuple('InstanceAttributes', [ 'instance_dir', 'console_path', 'storage_path', 'container_path']) def InstanceAttributes(instance): """An instance adapter for nova-lxd specific attributes.""" instance_dir = os.path.join(conf.CONF.instances_path, instance.name) console_path = os.path.join('/var/log/lxd/', instance.name, 'console.log') storage_path = os.path.join(instance_dir, 'storage') container_path = os.path.join( conf.CONF.lxd.root_dir, 'containers', instance.name) return _InstanceAttributes( instance_dir, console_path, storage_path, container_path) nova-lxd-17.0.0/nova/virt/lxd/driver.py0000666000175100017510000014124413246266025017754 0ustar zuulzuul00000000000000# Copyright 2015 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import absolute_import import errno import io import json import os import platform import pwd import shutil import socket import tarfile import tempfile import hashlib import eventlet import nova.conf import nova.context from contextlib import closing from nova import exception from nova import i18n from nova import image from nova import network from nova.network import model as network_model from nova import objects from nova.virt import driver from os_brick.initiator import connector from oslo_concurrency import processutils from oslo_config import cfg from oslo_log import log as logging from oslo_utils import fileutils import pylxd from pylxd import exceptions as lxd_exceptions from nova.virt.lxd import vif as lxd_vif from nova.virt.lxd import common from nova.virt.lxd import flavor from nova.virt.lxd import storage from nova.api.metadata import base as instance_metadata from nova.objects import fields as obj_fields from nova.objects import migrate_data from nova.virt import configdrive from nova.compute import power_state from nova.compute import vm_states from nova.virt import hardware from oslo_utils import units from oslo_serialization import jsonutils from nova import utils import psutil from oslo_concurrency import lockutils from nova.compute import task_states from oslo_utils import excutils from oslo_utils import strutils from nova.virt import firewall _ = i18n._ lxd_opts = [ cfg.StrOpt('root_dir', default='/var/lib/lxd/', help='Default LXD directory'), cfg.StrOpt('pool', default=None, help='LXD Storage pool to use with LXD >= 2.9'), cfg.IntOpt('timeout', default=-1, help='Default LXD timeout'), cfg.BoolOpt('allow_live_migration', default=False, help='Determine wheter to allow live migration'), ] CONF = cfg.CONF CONF.register_opts(lxd_opts, 'lxd') LOG = logging.getLogger(__name__) IMAGE_API = image.API() MAX_CONSOLE_BYTES = 100 * units.Ki NOVA_CONF = nova.conf.CONF ACCEPTABLE_IMAGE_FORMATS = {'raw', 'root-tar', 'squashfs'} BASE_DIR = os.path.join( CONF.instances_path, CONF.image_cache_subdirectory_name) def _last_bytes(file_like_object, num): """Return num bytes from the end of the file, and remaning byte count. :param file_like_object: The file to read :param num: The number of bytes to return :returns: (data, remaining) """ try: file_like_object.seek(-num, os.SEEK_END) except IOError as e: # seek() fails with EINVAL when trying to go before the start of # the file. It means that num is larger than the file size, so # just go to the start. if e.errno == errno.EINVAL: file_like_object.seek(0, os.SEEK_SET) else: raise remaining = file_like_object.tell() return (file_like_object.read(), remaining) def _neutron_failed_callback(event_name, instance): LOG.error('Neutron Reported failure on event ' '%(event)s for instance %(uuid)s', {'event': event_name, 'uuid': instance.name}, instance=instance) if CONF.vif_plugging_is_fatal: raise exception.VirtualInterfaceCreateException() def _get_cpu_info(): """Get cpu information. This method executes lscpu and then parses the output, returning a dictionary of information. """ cpuinfo = {} out, err = utils.execute('lscpu') if err: msg = _('Unable to parse lscpu output.') raise exception.NovaException(msg) cpu = [line.strip('\n') for line in out.splitlines()] for line in cpu: if line.strip(): name, value = line.split(':', 1) name = name.strip().lower() cpuinfo[name] = value.strip() f = open('/proc/cpuinfo', 'r') features = [line.strip('\n') for line in f.readlines()] for line in features: if line.strip(): if line.startswith('flags'): name, value = line.split(':', 1) name = name.strip().lower() cpuinfo[name] = value.strip() return cpuinfo def _get_ram_usage(): """Get memory info.""" with open('/proc/meminfo') as fp: m = fp.read().split() idx1 = m.index('MemTotal:') idx2 = m.index('MemFree:') idx3 = m.index('Buffers:') idx4 = m.index('Cached:') total = int(m[idx1 + 1]) avail = int(m[idx2 + 1]) + int(m[idx3 + 1]) + int(m[idx4 + 1]) return { 'total': total * 1024, 'used': (total - avail) * 1024 } def _get_fs_info(path): """Get free/used/total disk space.""" hddinfo = os.statvfs(path) total = hddinfo.f_blocks * hddinfo.f_bsize available = hddinfo.f_bavail * hddinfo.f_bsize used = total - available return {'total': total, 'available': available, 'used': used} def _get_zpool_info(pool): """Get free/used/total disk space in a zfs pool.""" def _get_zpool_attribute(attribute): value, err = utils.execute('zpool', 'list', '-o', attribute, '-H', pool, run_as_root=True) if err: msg = _('Unable to parse zpool output.') raise exception.NovaException(msg) value = strutils.string_to_bytes('{}B'.format(value.strip()), return_int=True) return value total = _get_zpool_attribute('size') used = _get_zpool_attribute('alloc') available = _get_zpool_attribute('free') return {'total': total, 'available': available, 'used': used} def _get_power_state(lxd_state): """Take a lxd state code and translate it to nova power state.""" state_map = [ (power_state.RUNNING, {100, 101, 103, 200}), (power_state.SHUTDOWN, {102, 104, 107}), (power_state.NOSTATE, {105, 106, 401}), (power_state.CRASHED, {108, 400}), (power_state.SUSPENDED, {109, 110, 111}), ] for nova_state, lxd_states in state_map: if lxd_state in lxd_states: return nova_state raise ValueError('Unknown LXD power state: {}'.format(lxd_state)) def _sync_glance_image_to_lxd(client, context, image_ref): """Sync an image from glance to LXD image store. The image from glance can't go directly into the LXD image store, as LXD needs some extra metadata connected to it. The image is stored in the LXD image store with an alias to the image_ref. This way, it will only copy over once. """ lock_path = os.path.join(CONF.instances_path, 'locks') with lockutils.lock( lock_path, external=True, lock_file_prefix='lxd-image-{}'.format(image_ref)): # NOTE(jamespage): Re-query by image_ref to ensure # that another process did not # sneak infront of this one and create # the same image already. try: client.images.get_by_alias(image_ref) return except lxd_exceptions.LXDAPIException as e: if e.response.status_code != 404: raise try: image_file = tempfile.mkstemp()[1] manifest_file = tempfile.mkstemp()[1] image = IMAGE_API.get(context, image_ref) if image.get('disk_format') not in ACCEPTABLE_IMAGE_FORMATS: raise exception.ImageUnacceptable( image_id=image_ref, reason=_('Bad image format')) IMAGE_API.download(context, image_ref, dest_path=image_file) # It is possible that LXD already have the same image # but NOT aliased as result of previous publish/export operation # (snapshot from openstack). # In that case attempt to add it again # (implicitly via instance launch from affected image) will produce # LXD error - "Image with same fingerprint already exists". # Error does not have unique identifier to handle it we calculate # fingerprint of image as LXD do it and check if LXD already have # image with such fingerprint. # If any we will add alias to this image and will not re-import it def add_alias(): def lxdimage_fingerprint(): def sha256_file(): sha256 = hashlib.sha256() with closing(open(image_file, 'rb')) as f: for block in iter(lambda: f.read(65536), b''): sha256.update(block) return sha256.hexdigest() return sha256_file() fingerprint = lxdimage_fingerprint() if client.images.exists(fingerprint): LOG.info( 'Image with fingerprint %(fingerprint)s already exists' 'but not accessible by alias %(alias)s, add alias', {'fingerprint': fingerprint, 'alias': image_ref}) lxdimage = client.images.get(fingerprint) lxdimage.add_alias(image_ref, '') return True return False if add_alias(): return # up2date LXD publish/export operations produce images which # already contains /rootfs and metdata.yaml in exported file. # We should not pass metdata explicitly in that case as imported # image will be unusable bacause LXD will think that it containts # rootfs and will not extract embedded /rootfs properly. # Try to detect if image content already has metadata and not pass # explicit metadata in that case def imagefile_has_metadata(image_file): try: with closing(tarfile.TarFile.open( name=image_file, mode='r:*')) as tf: try: tf.getmember('metadata.yaml') return True except KeyError: pass except tarfile.ReadError: pass return False if imagefile_has_metadata(image_file): LOG.info('Image %(alias)s already has metadata, ' 'skipping metadata injection...', {'alias': image_ref}) with open(image_file, 'rb') as image: image = client.images.create(image, wait=True) else: metadata = { 'architecture': image.get( 'hw_architecture', obj_fields.Architecture.from_host()), 'creation_date': int(os.stat(image_file).st_ctime)} metadata_yaml = json.dumps( metadata, sort_keys=True, indent=4, separators=(',', ': '), ensure_ascii=False).encode('utf-8') + b"\n" tarball = tarfile.open(manifest_file, "w:gz") tarinfo = tarfile.TarInfo(name='metadata.yaml') tarinfo.size = len(metadata_yaml) tarball.addfile(tarinfo, io.BytesIO(metadata_yaml)) tarball.close() with open(manifest_file, 'rb') as manifest: with open(image_file, 'rb') as image: image = client.images.create( image, metadata=manifest, wait=True) image.add_alias(image_ref, '') finally: os.unlink(image_file) os.unlink(manifest_file) def brick_get_connector_properties(multipath=False, enforce_multipath=False): """Wrapper to automatically set root_helper in brick calls. :param multipath: A boolean indicating whether the connector can support multipath. :param enforce_multipath: If True, it raises exception when multipath=True is specified but multipathd is not running. If False, it falls back to multipath=False when multipathd is not running. """ root_helper = utils.get_root_helper() return connector.get_connector_properties(root_helper, CONF.my_ip, multipath, enforce_multipath) def brick_get_connector(protocol, driver=None, use_multipath=False, device_scan_attempts=3, *args, **kwargs): """Wrapper to get a brick connector object. This automatically populates the required protocol as well as the root_helper needed to execute commands. """ root_helper = utils.get_root_helper() if protocol.upper() == "RBD": kwargs['do_local_attach'] = True return connector.InitiatorConnector.factory( protocol, root_helper, driver=driver, use_multipath=use_multipath, device_scan_attempts=device_scan_attempts, *args, **kwargs) class LXDLiveMigrateData(migrate_data.LiveMigrateData): """LiveMigrateData for LXD.""" VERSION = '1.0' fields = {} class LXDDriver(driver.ComputeDriver): """A LXD driver for nova. LXD is a system container hypervisor. LXDDriver provides LXD functionality to nova. For more information about LXD, see http://www.ubuntu.com/cloud/lxd """ capabilities = { "has_imagecache": False, "supports_recreate": False, "supports_migrate_to_same_host": False, "supports_attach_interface": True, "supports_multiattach": False, } def __init__(self, virtapi): super(LXDDriver, self).__init__(virtapi) self.client = None # Initialized by init_host self.host = NOVA_CONF.host self.network_api = network.API() self.vif_driver = lxd_vif.LXDGenericVifDriver() self.firewall_driver = firewall.load_driver( default='nova.virt.firewall.NoopFirewallDriver') def init_host(self, host): """Initialize the driver on the host. The pylxd Client is initialized. This initialization may raise an exception if the LXD instance cannot be found. The `host` argument is ignored here, as the LXD instance is assumed to be on the same system as the compute worker running this code. This is by (current) design. See `nova.virt.driver.ComputeDriver.init_host` for more information. """ try: self.client = pylxd.Client() except lxd_exceptions.ClientConnectionFailed as e: msg = _('Unable to connect to LXD daemon: %s') % e raise exception.HostNotFound(msg) self._after_reboot() def cleanup_host(self, host): """Clean up the host. `nova.virt.ComputeDriver` defines this method. It is overridden here to be explicit that there is nothing to be done, as `init_host` does not create any resources that would need to be cleaned up. See `nova.virt.driver.ComputeDriver.cleanup_host` for more information. """ def get_info(self, instance): """Return an InstanceInfo object for the instance.""" try: container = self.client.containers.get(instance.name) except lxd_exceptions.NotFound: raise exception.InstanceNotFound(instance_id=instance.uuid) state = container.state() return hardware.InstanceInfo( state=_get_power_state(state.status_code)) def list_instances(self): """Return a list of all instance names.""" return [c.name for c in self.client.containers.all()] def spawn(self, context, instance, image_meta, injected_files, admin_password, allocations, network_info=None, block_device_info=None): """Create a new lxd container as a nova instance. Creating a new container requires a number of steps. First, the image is fetched from glance, if needed. Next, the network is connected. A profile is created in LXD, and then the container is created and started. See `nova.virt.driver.ComputeDriver.spawn` for more information. """ try: self.client.containers.get(instance.name) raise exception.InstanceExists(name=instance.name) except lxd_exceptions.LXDAPIException as e: if e.response.status_code != 404: raise # Re-raise the exception if it wasn't NotFound instance_dir = common.InstanceAttributes(instance).instance_dir if not os.path.exists(instance_dir): fileutils.ensure_tree(instance_dir) # Check to see if LXD already has a copy of the image. If not, # fetch it. try: self.client.images.get_by_alias(instance.image_ref) except lxd_exceptions.LXDAPIException as e: if e.response.status_code != 404: raise _sync_glance_image_to_lxd( self.client, context, instance.image_ref) # Plug in the network if network_info: timeout = CONF.vif_plugging_timeout if (utils.is_neutron() and timeout): events = [('network-vif-plugged', vif['id']) for vif in network_info if not vif.get( 'active', True)] else: events = [] try: with self.virtapi.wait_for_instance_event( instance, events, deadline=timeout, error_callback=_neutron_failed_callback): self.plug_vifs(instance, network_info) except eventlet.timeout.Timeout: LOG.warn('Timeout waiting for vif plugging callback for ' 'instance %(uuid)s', {'uuid': instance['name']}) if CONF.vif_plugging_is_fatal: self.destroy( context, instance, network_info, block_device_info) raise exception.InstanceDeployFailure( 'Timeout waiting for vif plugging', instance_id=instance['name']) # Create the profile try: profile = flavor.to_profile( self.client, instance, network_info, block_device_info) except lxd_exceptions.LXDAPIException as e: with excutils.save_and_reraise_exception(): self.cleanup( context, instance, network_info, block_device_info) # Create the container container_config = { 'name': instance.name, 'profiles': [profile.name], 'source': { 'type': 'image', 'alias': instance.image_ref, }, } try: container = self.client.containers.create( container_config, wait=True) except lxd_exceptions.LXDAPIException as e: with excutils.save_and_reraise_exception(): self.cleanup( context, instance, network_info, block_device_info) lxd_config = self.client.host_info storage.attach_ephemeral( self.client, block_device_info, lxd_config, instance) if configdrive.required_by(instance): configdrive_path = self._add_configdrive( context, instance, injected_files, admin_password, network_info) profile = self.client.profiles.get(instance.name) config_drive = { 'configdrive': { 'path': '/config-drive', 'source': configdrive_path, 'type': 'disk', 'readonly': 'True', } } profile.devices.update(config_drive) profile.save() try: self.firewall_driver.setup_basic_filtering( instance, network_info) self.firewall_driver.instance_filter( instance, network_info) container.start(wait=True) self.firewall_driver.apply_instance_filter( instance, network_info) except lxd_exceptions.LXDAPIException as e: with excutils.save_and_reraise_exception(): self.cleanup( context, instance, network_info, block_device_info) def destroy(self, context, instance, network_info, block_device_info=None, destroy_disks=True, migrate_data=None): """Destroy a running instance. Since the profile and the instance are created on `spawn`, it is safe to delete them together. See `nova.virt.driver.ComputeDriver.destroy` for more information. """ try: container = self.client.containers.get(instance.name) if container.status != 'Stopped': container.stop(wait=True) container.delete(wait=True) if (instance.vm_state == vm_states.RESCUED): rescued_container = self.client.containers.get( '%s-rescue' % instance.name) if rescued_container.status != 'Stopped': rescued_container.stop(wait=True) rescued_container.delete(wait=True) except lxd_exceptions.LXDAPIException as e: if e.response.status_code == 404: LOG.warning('Failed to delete instance. ' 'Container does not exist for %(instance)s.', {'instance': instance.name}) else: raise finally: self.cleanup( context, instance, network_info, block_device_info) def cleanup(self, context, instance, network_info, block_device_info=None, destroy_disks=True, migrate_data=None, destroy_vifs=True): """Clean up the filesystem around the container. See `nova.virt.driver.ComputeDriver.cleanup` for more information. """ if destroy_vifs: self.unplug_vifs(instance, network_info) self.firewall_driver.unfilter_instance(instance, network_info) lxd_config = self.client.host_info storage.detach_ephemeral(block_device_info, lxd_config, instance) name = pwd.getpwuid(os.getuid()).pw_name container_dir = common.InstanceAttributes(instance).instance_dir if os.path.exists(container_dir): utils.execute( 'chown', '-R', '{}:{}'.format(name, name), container_dir, run_as_root=True) shutil.rmtree(container_dir) try: self.client.profiles.get(instance.name).delete() except lxd_exceptions.LXDAPIException as e: if e.response.status_code == 404: LOG.warning('Failed to delete instance. ' 'Profile does not exist for %(instance)s.', {'instance': instance.name}) else: raise def reboot(self, context, instance, network_info, reboot_type, block_device_info=None, bad_volumes_callback=None): """Reboot the container. Nova *should* not execute this on a stopped container, but the documentation specifically says that if it is called, the container should always return to a 'Running' state. See `nova.virt.driver.ComputeDriver.cleanup` for more information. """ container = self.client.containers.get(instance.name) container.restart(force=True, wait=True) def get_console_output(self, context, instance): """Get the output of the container console. See `nova.virt.driver.ComputeDriver.get_console_output` for more information. """ instance_attrs = common.InstanceAttributes(instance) console_path = instance_attrs.console_path if not os.path.exists(console_path): return '' uid = pwd.getpwuid(os.getuid()).pw_uid utils.execute( 'chown', '%s:%s' % (uid, uid), console_path, run_as_root=True) utils.execute( 'chmod', '755', instance_attrs.container_path, run_as_root=True) with open(console_path, 'rb') as f: log_data, _ = _last_bytes(f, MAX_CONSOLE_BYTES) return log_data def get_host_ip_addr(self): return CONF.my_ip def attach_volume(self, context, connection_info, instance, mountpoint, disk_bus=None, device_type=None, encryption=None): """Attach block device to a nova instance. Attaching a block device to a container requires a couple of steps. First os_brick connects the cinder volume to the host. Next, the block device is added to the containers profile. Next, the apparmor profile for the container is updated to allow mounting 'ext4' block devices. Finally, the profile is saved. The block device must be formatted as ext4 in order to mount the block device inside the container. See `nova.virt.driver.ComputeDriver.attach_volume' for more information/ """ profile = self.client.profiles.get(instance.name) protocol = connection_info['driver_volume_type'] storage_driver = brick_get_connector(protocol) device_info = storage_driver.connect_volume( connection_info['data']) disk = os.stat(os.path.realpath(device_info['path'])) vol_id = connection_info['data']['volume_id'] disk_device = { vol_id: { 'path': mountpoint, 'major': '%s' % os.major(disk.st_rdev), 'minor': '%s' % os.minor(disk.st_rdev), 'type': 'unix-block' } } profile.devices.update(disk_device) # XXX zulcss (10 Jul 2016) - fused is currently not supported. profile.config.update({'raw.apparmor': 'mount fstype=ext4,'}) profile.save() def detach_volume(self, connection_info, instance, mountpoint, encryption=None): """Detach block device from a nova instance. First the volume id is deleted from the profile, and the profile is saved. The os-brick disconnects the volume from the host. See `nova.virt.driver.Computedriver.detach_volume` for more information. """ profile = self.client.profiles.get(instance.name) vol_id = connection_info['data']['volume_id'] if vol_id in profile.devices: del profile.devices[vol_id] profile.save() protocol = connection_info['driver_volume_type'] storage_driver = brick_get_connector(protocol) storage_driver.disconnect_volume(connection_info['data'], None) def attach_interface(self, context, instance, image_meta, vif): self.vif_driver.plug(instance, vif) self.firewall_driver.setup_basic_filtering(instance, vif) profile = self.client.profiles.get(instance.name) net_device = lxd_vif.get_vif_devname(vif) config_update = { net_device: { 'nictype': 'physical', 'hwaddr': vif['address'], 'parent': lxd_vif.get_vif_internal_devname(vif), 'type': 'nic', } } profile.devices.update(config_update) profile.save(wait=True) def detach_interface(self, context, instance, vif): profile = self.client.profiles.get(instance.name) devname = lxd_vif.get_vif_devname(vif) # NOTE(jamespage): Attempt to remove device using # new style tap naming if devname in profile.devices: del profile.devices[devname] profile.save(wait=True) else: # NOTE(jamespage): For upgrades, scan devices # and attempt to identify # using mac address as the # device will *not* have a # consistent name for key, val in profile.devices.items(): if val.get('hwaddr') == vif['address']: del profile.devices[key] profile.save(wait=True) break self.vif_driver.unplug(instance, vif) def migrate_disk_and_power_off( self, context, instance, dest, _flavor, network_info, block_device_info=None, timeout=0, retry_interval=0): if CONF.my_ip == dest: # Make sure that the profile for the container is up-to-date to # the actual state of the container. flavor.to_profile( self.client, instance, network_info, block_device_info, update=True) container = self.client.containers.get(instance.name) container.stop(wait=True) return '' def snapshot(self, context, instance, image_id, update_task_state): lock_path = str(os.path.join(CONF.instances_path, 'locks')) with lockutils.lock( lock_path, external=True, lock_file_prefix=('lxd-snapshot-%s' % instance.name)): update_task_state(task_state=task_states.IMAGE_PENDING_UPLOAD) container = self.client.containers.get(instance.name) if container.status != 'Stopped': container.stop(wait=True) image = container.publish(wait=True) container.start(wait=True) update_task_state( task_state=task_states.IMAGE_UPLOADING, expected_state=task_states.IMAGE_PENDING_UPLOAD) snapshot = IMAGE_API.get(context, image_id) data = image.export() image_meta = {'name': snapshot['name'], 'disk_format': 'raw', 'container_format': 'bare'} IMAGE_API.update(context, image_id, image_meta, data) def pause(self, instance): """Pause container. See `nova.virt.driver.ComputeDriver.pause` for more information. """ container = self.client.containers.get(instance.name) container.freeze(wait=True) def unpause(self, instance): """Unpause container. See `nova.virt.driver.ComputeDriver.unpause` for more information. """ container = self.client.containers.get(instance.name) container.unfreeze(wait=True) def suspend(self, context, instance): """Suspend container. See `nova.virt.driver.ComputeDriver.suspend` for more information. """ self.pause(instance) def resume(self, context, instance, network_info, block_device_info=None): """Resume container. See `nova.virt.driver.ComputeDriver.resume` for more information. """ self.unpause(instance) def resume_state_on_host_boot(self, context, instance, network_info, block_device_info=None): """resume guest state when a host is booted.""" try: state = self.get_info(instance).state ignored_states = (power_state.RUNNING, power_state.SUSPENDED, power_state.NOSTATE, power_state.PAUSED) if state in ignored_states: return self.power_on(context, instance, network_info, block_device_info) except (exception.InternalError, exception.InstanceNotFound): pass def rescue(self, context, instance, network_info, image_meta, rescue_password): """Rescue a LXD container. From the perspective of nova, rescuing a instance requires a number of steps. First, the failed container is stopped, and then this method is called. So the original container is already stopped, and thus, next, '-rescue', is appended to the failed container's name, this is done so the container can be unrescued. The container's profile is updated with the rootfs of the failed container. Finally, a new container is created and started. See 'nova.virt.driver.ComputeDriver.rescue` for more information. """ rescue = '%s-rescue' % instance.name container = self.client.containers.get(instance.name) container_rootfs = os.path.join( nova.conf.CONF.lxd.root_dir, 'containers', instance.name, 'rootfs') container.rename(rescue, wait=True) profile = self.client.profiles.get(instance.name) rescue_dir = { 'rescue': { 'source': container_rootfs, 'path': '/mnt', 'type': 'disk', } } profile.devices.update(rescue_dir) profile.save() container_config = { 'name': instance.name, 'profiles': [profile.name], 'source': { 'type': 'image', 'alias': instance.image_ref, } } container = self.client.containers.create( container_config, wait=True) container.start(wait=True) def unrescue(self, instance, network_info): """Unrescue an instance. Unrescue a container that has previously been rescued. First the rescue containerisremoved. Next the rootfs of the defective container is removed from the profile. Finally the container is renamed and started. See 'nova.virt.drvier.ComputeDriver.unrescue` for more information. """ rescue = '%s-rescue' % instance.name container = self.client.containers.get(instance.name) if container.status != 'Stopped': container.stop(wait=True) container.delete(wait=True) profile = self.client.profiles.get(instance.name) del profile.devices['rescue'] profile.save() container = self.client.containers.get(rescue) container.rename(instance.name, wait=True) container.start(wait=True) def power_off(self, instance, timeout=0, retry_interval=0): """Power off an instance See 'nova.virt.drvier.ComputeDriver.power_off` for more information. """ container = self.client.containers.get(instance.name) if container.status != 'Stopped': container.stop(wait=True) def power_on(self, context, instance, network_info, block_device_info=None): """Power on an instance See 'nova.virt.drvier.ComputeDriver.power_on` for more information. """ container = self.client.containers.get(instance.name) if container.status != 'Running': container.start(wait=True) def get_available_resource(self, nodename): """Aggregate all available system resources. See 'nova.virt.drvier.ComputeDriver.get_available_resource` for more information. """ cpuinfo = _get_cpu_info() cpu_info = { 'arch': platform.uname()[5], 'features': cpuinfo.get('flags', 'unknown'), 'model': cpuinfo.get('model name', 'unknown'), 'topology': { 'sockets': cpuinfo['socket(s)'], 'cores': cpuinfo['core(s) per socket'], 'threads': cpuinfo['thread(s) per core'], }, 'vendor': cpuinfo.get('vendor id', 'unknown'), } cpu_topology = cpu_info['topology'] vcpus = (int(cpu_topology['cores']) * int(cpu_topology['sockets']) * int(cpu_topology['threads'])) local_memory_info = _get_ram_usage() lxd_config = self.client.host_info # NOTE(jamespage): ZFS storage report is very LXD 2.0.x # centric and will need to be updated # to support LXD storage pools storage_driver = lxd_config['environment']['storage'] if storage_driver == 'zfs': local_disk_info = _get_zpool_info( lxd_config['config']['storage.zfs_pool_name'] ) else: local_disk_info = _get_fs_info(CONF.lxd.root_dir) data = { 'vcpus': vcpus, 'memory_mb': local_memory_info['total'] // units.Mi, 'memory_mb_used': local_memory_info['used'] // units.Mi, 'local_gb': local_disk_info['total'] // units.Gi, 'local_gb_used': local_disk_info['used'] // units.Gi, 'vcpus_used': 0, 'hypervisor_type': 'lxd', 'hypervisor_version': '011', 'cpu_info': jsonutils.dumps(cpu_info), 'hypervisor_hostname': socket.gethostname(), 'supported_instances': [ (obj_fields.Architecture.I686, obj_fields.HVType.LXD, obj_fields.VMMode.EXE), (obj_fields.Architecture.X86_64, obj_fields.HVType.LXD, obj_fields.VMMode.EXE), (obj_fields.Architecture.I686, obj_fields.HVType.LXC, obj_fields.VMMode.EXE), (obj_fields.Architecture.X86_64, obj_fields.HVType.LXC, obj_fields.VMMode.EXE), ], 'numa_topology': None, } return data def refresh_instance_security_rules(self, instance): return self.firewall_driver.refresh_instance_security_rules( instance) def ensure_filtering_rules_for_instance(self, instance, network_info): return self.firewall_driver.ensure_filtering_rules_for_instance( instance, network_info) def filter_defer_apply_on(self): return self.firewall_driver.filter_defer_apply_on() def filter_defer_apply_off(self): return self.firewall_driver.filter_defer_apply_off() def unfilter_instance(self, instance, network_info): return self.firewall_driver.unfilter_instance( instance, network_info) def get_host_uptime(self): out, err = utils.execute('env', 'LANG=C', 'uptime') return out def plug_vifs(self, instance, network_info): for vif in network_info: self.vif_driver.plug(instance, vif) def unplug_vifs(self, instance, network_info): for vif in network_info: self.vif_driver.unplug(instance, vif) def get_host_cpu_stats(self): return { 'kernel': int(psutil.cpu_times()[2]), 'idle': int(psutil.cpu_times()[3]), 'user': int(psutil.cpu_times()[0]), 'iowait': int(psutil.cpu_times()[4]), 'frequency': _get_cpu_info().get('cpu mhz', 0) } def get_volume_connector(self, instance): return {'ip': CONF.my_block_storage_ip, 'initiator': 'fake', 'host': 'fakehost'} def get_available_nodes(self, refresh=False): hostname = socket.gethostname() return [hostname] # XXX: rockstar (5 July 2016) - The methods and code below this line # have not been through the cleanup process. We know the cleanup process # is complete when there is no more code below this comment, and the # comment can be removed. # # ComputeDriver implementation methods # def finish_migration(self, context, migration, instance, disk_info, network_info, image_meta, resize_instance, block_device_info=None, power_on=True): # Ensure that the instance directory exists instance_dir = common.InstanceAttributes(instance).instance_dir if not os.path.exists(instance_dir): fileutils.ensure_tree(instance_dir) # Step 1 - Setup the profile on the dest host flavor.to_profile(self.client, instance, network_info, block_device_info) # Step 2 - Open a websocket on the srct and and # generate the container config self._migrate(migration['source_compute'], instance) # Step 3 - Start the network and container self.plug_vifs(instance, network_info) self.client.container.get(instance.name).start(wait=True) def confirm_migration(self, migration, instance, network_info): self.unplug_vifs(instance, network_info) self.client.profiles.get(instance.name).delete() self.client.containers.get(instance.name).delete(wait=True) def finish_revert_migration(self, context, instance, network_info, block_device_info=None, power_on=True): self.client.containers.get(instance.name).start(wait=True) def pre_live_migration(self, context, instance, block_device_info, network_info, disk_info, migrate_data=None): for vif in network_info: self.vif_driver.plug(instance, vif) self.firewall_driver.setup_basic_filtering( instance, network_info) self.firewall_driver.prepare_instance_filter( instance, network_info) self.firewall_driver.apply_instance_filter( instance, network_info) flavor.to_profile(self.client, instance, network_info, block_device_info) def live_migration(self, context, instance, dest, post_method, recover_method, block_migration=False, migrate_data=None): self._migrate(dest, instance) post_method(context, instance, dest, block_migration) def post_live_migration(self, context, instance, block_device_info, migrate_data=None): self.client.containers.get(instance.name).delete(wait=True) def post_live_migration_at_source(self, context, instance, network_info): self.client.profiles.get(instance.name).delete() self.cleanup(context, instance, network_info) def check_can_live_migrate_destination( self, context, instance, src_compute_info, dst_compute_info, block_migration=False, disk_over_commit=False): try: self.client.containers.get(instance.name) raise exception.InstanceExists(name=instance.name) except lxd_exceptions.LXDAPIException as e: if e.response.status_code != 404: raise return LXDLiveMigrateData() def cleanup_live_migration_destination_check( self, context, dest_check_data): return def check_can_live_migrate_source(self, context, instance, dest_check_data, block_device_info=None): if not CONF.lxd.allow_live_migration: msg = _('Live migration is not enabled.') LOG.error(msg, instance=instance) raise exception.MigrationPreCheckError(reason=msg) return dest_check_data # # LXDDriver "private" implementation methods # # XXX: rockstar (21 Nov 2016) - The methods and code below this line # have not been through the cleanup process. We know the cleanup process # is complete when there is no more code below this comment, and the # comment can be removed. def _add_configdrive(self, context, instance, injected_files, admin_password, network_info): """Create configdrive for the instance.""" if CONF.config_drive_format != 'iso9660': raise exception.ConfigDriveUnsupportedFormat( format=CONF.config_drive_format) container = self.client.containers.get(instance.name) storage_id = 0 """ Determine UID shift used for container uid mapping Sample JSON config from LXD { "volatile.apply_template": "create", ... "volatile.last_state.idmap": "[ { \"Isuid\":true, \"Isgid\":false, \"Hostid\":100000, \"Nsid\":0, \"Maprange\":65536 }, { \"Isuid\":false, \"Isgid\":true, \"Hostid\":100000, \"Nsid\":0, \"Maprange\":65536 }] ", "volatile.tap5fd6808a-7b.name": "eth0" } """ container_id_map = json.loads( container.config['volatile.last_state.idmap']) uid_map = filter(lambda id_map: id_map.get("Isuid"), container_id_map) if uid_map: storage_id = uid_map[0].get("Hostid", 0) else: # privileged containers does not have uid/gid mapping # LXD API return nothing pass extra_md = {} if admin_password: extra_md['admin_pass'] = admin_password inst_md = instance_metadata.InstanceMetadata( instance, content=injected_files, extra_md=extra_md, network_info=network_info, request_context=context) iso_path = os.path.join( common.InstanceAttributes(instance).instance_dir, 'configdrive.iso') with configdrive.ConfigDriveBuilder(instance_md=inst_md) as cdb: try: cdb.make_drive(iso_path) except processutils.ProcessExecutionError as e: with excutils.save_and_reraise_exception(): LOG.error('Creating config drive failed with ' 'error: %s', e, instance=instance) configdrive_dir = os.path.join( nova.conf.CONF.instances_path, instance.name, 'configdrive') if not os.path.exists(configdrive_dir): fileutils.ensure_tree(configdrive_dir) with utils.tempdir() as tmpdir: mounted = False try: _, err = utils.execute('mount', '-o', 'loop,uid=%d,gid=%d' % (os.getuid(), os.getgid()), iso_path, tmpdir, run_as_root=True) mounted = True # Copy and adjust the files from the ISO so that we # dont have the ISO mounted during the life cycle of the # instance and the directory can be removed once the instance # is terminated for ent in os.listdir(tmpdir): shutil.copytree(os.path.join(tmpdir, ent), os.path.join(configdrive_dir, ent)) utils.execute('chmod', '-R', '775', configdrive_dir, run_as_root=True) utils.execute('chown', '-R', '%s:%s' % (storage_id, storage_id), configdrive_dir, run_as_root=True) finally: if mounted: utils.execute('umount', tmpdir, run_as_root=True) return configdrive_dir def _after_reboot(self): """Perform sync operation after host reboot.""" context = nova.context.get_admin_context() instances = objects.InstanceList.get_by_host( context, self.host, expected_attrs=['info_cache', 'metadata']) for instance in instances: if (instance.vm_state != vm_states.STOPPED): continue try: network_info = self.network_api.get_instance_nw_info( context, instance) except exception.InstanceNotFound: network_info = network_model.NetworkInfo() self.plug_vifs(instance, network_info) self.firewall_driver.setup_basic_filtering(instance, network_info) self.firewall_driver.prepare_instance_filter( instance, network_info) self.firewall_driver.apply_instance_filter(instance, network_info) def _migrate(self, source_host, instance): """Migrate an instance from source.""" source_client = pylxd.Client( endpoint='https://{}'.format(source_host), verify=False) container = source_client.containers.get(instance.name) data = container.generate_migration_data() self.containers.create(data, wait=True) nova-lxd-17.0.0/nova/virt/__init__.py0000666000175100017510000000007013246266025017420 0ustar zuulzuul00000000000000__import__('pkg_resources').declare_namespace(__name__) nova-lxd-17.0.0/nova/tests/0000775000175100017510000000000013246266437015475 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova/tests/unit/0000775000175100017510000000000013246266437016454 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova/tests/unit/virt/0000775000175100017510000000000013246266437017440 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova/tests/unit/virt/lxd/0000775000175100017510000000000013246266437020227 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova/tests/unit/virt/lxd/test_flavor.py0000666000175100017510000004073713246266025023137 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import context from nova import exception from nova import test from nova.network import model as network_model from nova.tests.unit import fake_instance from nova.virt.lxd import flavor class ToProfileTest(test.NoDBTestCase): """Tests for nova.virt.lxd.flavor.to_profile.""" def setUp(self): super(ToProfileTest, self).setUp() self.client = mock.Mock() self.client.host_info = { 'api_extensions': [], 'environment': { 'storage': 'zfs' } } self.patchers = [] CONF_patcher = mock.patch('nova.virt.lxd.driver.nova.conf.CONF') self.patchers.append(CONF_patcher) self.CONF = CONF_patcher.start() self.CONF.instances_path = '/i' CONF_patcher = mock.patch('nova.virt.lxd.flavor.CONF') self.patchers.append(CONF_patcher) self.CONF2 = CONF_patcher.start() self.CONF2.lxd.pool = None def tearDown(self): super(ToProfileTest, self).tearDown() for patcher in self.patchers: patcher.stop() def test_to_profile(self): """A profile configuration is requested of the LXD client.""" ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'root': { 'path': '/', 'size': '0GB', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) def test_to_profile_lvm(self): """A profile configuration is requested of the LXD client.""" self.client.host_info['environment']['storage'] = 'lvm' ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'root': { 'path': '/', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) def test_storage_pools(self): self.client.host_info['api_extensions'].append('storage') self.CONF2.lxd.pool = 'test_pool' ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)) } expected_devices = { 'root': { 'path': '/', 'type': 'disk', 'pool': 'test_pool', 'size': '0GB' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) def test_to_profile_security(self): self.client.host_info['api_extensions'].append('id_map') ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.flavor.extra_specs = { 'lxd:nested_allowed': True, 'lxd:privileged_allowed': True, } network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), 'security.nesting': 'True', 'security.privileged': 'True', } expected_devices = { 'root': { 'path': '/', 'size': '0GB', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) def test_to_profile_idmap(self): self.client.host_info['api_extensions'].append('id_map') ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.flavor.extra_specs = { 'lxd:isolated': True, } network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'security.idmap.isolated': 'True', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'root': { 'path': '/', 'size': '0GB', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) def test_to_profile_idmap_unsupported(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.flavor.extra_specs = { 'lxd:isolated': True, } network_info = [] block_info = [] self.assertRaises( exception.NovaException, flavor.to_profile, self.client, instance, network_info, block_info) def test_to_profile_quota_extra_specs_bytes(self): """A profile configuration is requested of the LXD client.""" ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.flavor.extra_specs = { 'quota:disk_read_bytes_sec': '3000000', 'quota:disk_write_bytes_sec': '4000000', } network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'root': { 'limits.read': '2MB', 'limits.write': '3MB', 'path': '/', 'size': '0GB', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) def test_to_profile_quota_extra_specs_iops(self): """A profile configuration is requested of the LXD client.""" ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.flavor.extra_specs = { 'quota:disk_read_iops_sec': '300', 'quota:disk_write_iops_sec': '400', } network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'root': { 'limits.read': '300iops', 'limits.write': '400iops', 'path': '/', 'size': '0GB', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) def test_to_profile_quota_extra_specs_max_bytes(self): """A profile configuration is requested of the LXD client.""" ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.flavor.extra_specs = { 'quota:disk_total_bytes_sec': '6000000', } network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'root': { 'limits.max': '5MB', 'path': '/', 'size': '0GB', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) def test_to_profile_quota_extra_specs_max_iops(self): """A profile configuration is requested of the LXD client.""" ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.flavor.extra_specs = { 'quota:disk_total_iops_sec': '500', } network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'root': { 'limits.max': '500iops', 'path': '/', 'size': '0GB', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) @mock.patch('nova.virt.lxd.vif._is_no_op_firewall', return_value=False) def test_to_profile_network_config_average(self, _is_no_op_firewall): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.flavor.extra_specs = { 'quota:vif_inbound_average': '1000000', 'quota:vif_outbound_average': '2000000', } network_info = [{ 'id': '0123456789abcdef', 'type': network_model.VIF_TYPE_OVS, 'address': '00:11:22:33:44:55', 'network': { 'bridge': 'fakebr'}, 'devname': 'tap0123456789a'}] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'tap0123456789a': { 'hwaddr': '00:11:22:33:44:55', 'nictype': 'physical', 'parent': 'tin0123456789a', 'type': 'nic', 'limits.egress': '16000Mbit', 'limits.ingress': '8000Mbit', }, 'root': { 'path': '/', 'size': '0GB', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) @mock.patch('nova.virt.lxd.vif._is_no_op_firewall', return_value=False) def test_to_profile_network_config_peak(self, _is_no_op_firewall): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.flavor.extra_specs = { 'quota:vif_inbound_peak': '3000000', 'quota:vif_outbound_peak': '4000000', } network_info = [{ 'id': '0123456789abcdef', 'type': network_model.VIF_TYPE_OVS, 'address': '00:11:22:33:44:55', 'network': { 'bridge': 'fakebr'}, 'devname': 'tap0123456789a'}] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'tap0123456789a': { 'hwaddr': '00:11:22:33:44:55', 'nictype': 'physical', 'parent': 'tin0123456789a', 'type': 'nic', 'limits.egress': '32000Mbit', 'limits.ingress': '24000Mbit', }, 'root': { 'path': '/', 'size': '0GB', 'type': 'disk' }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) @mock.patch('nova.virt.lxd.flavor.driver.block_device_info_get_ephemerals') def test_to_profile_ephemeral_storage(self, get_ephemerals): """A profile configuration is requested of the LXD client.""" get_ephemerals.return_value = [ {'virtual_name': 'ephemeral1'}, ] ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [] block_info = [] expected_config = { 'environment.product_name': 'OpenStack Nova', 'limits.cpu': '1', 'limits.memory': '0MB', 'raw.lxc': ( 'lxc.console.logfile=/var/log/lxd/{}/console.log\n'.format( instance.name)), } expected_devices = { 'root': { 'path': '/', 'size': '0GB', 'type': 'disk' }, 'ephemeral1': { 'type': 'disk', 'path': '/mnt', 'source': '/i/{}/storage/ephemeral1'.format(instance.name), }, } flavor.to_profile(self.client, instance, network_info, block_info) self.client.profiles.create.assert_called_once_with( instance.name, expected_config, expected_devices) nova-lxd-17.0.0/nova/tests/unit/virt/lxd/test_common.py0000666000175100017510000000431213246266025023123 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import context from nova import test from nova.tests.unit import fake_instance from nova.virt.lxd import common class InstanceAttributesTest(test.NoDBTestCase): """Tests for InstanceAttributes.""" def setUp(self): super(InstanceAttributesTest, self).setUp() self.CONF_patcher = mock.patch('nova.virt.lxd.driver.nova.conf.CONF') self.CONF = self.CONF_patcher.start() self.CONF.instances_path = '/i' self.CONF.lxd.root_dir = '/c' def tearDown(self): super(InstanceAttributesTest, self).tearDown() self.CONF_patcher.stop() def test_instance_dir(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) attributes = common.InstanceAttributes(instance) self.assertEqual( '/i/instance-00000001', attributes.instance_dir) def test_console_path(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) attributes = common.InstanceAttributes(instance) self.assertEqual( '/var/log/lxd/instance-00000001/console.log', attributes.console_path) def test_storage_path(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) attributes = common.InstanceAttributes(instance) self.assertEqual( '/i/instance-00000001/storage', attributes.storage_path) nova-lxd-17.0.0/nova/tests/unit/virt/lxd/test_migrate.py0000666000175100017510000000717013246266025023270 0ustar zuulzuul00000000000000# Copyright 2015 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import nova.conf from nova import exception from nova import test from pylxd.deprecated import exceptions as lxd_exceptions from nova.virt.lxd import driver CONF = nova.conf.CONF class LXDTestLiveMigrate(test.NoDBTestCase): def setUp(self): super(LXDTestLiveMigrate, self).setUp() self.driver = driver.LXDDriver(None) self.context = 'fake_context' self.driver.session = mock.MagicMock() self.driver.config = mock.MagicMock() self.driver.operations = mock.MagicMock() @mock.patch.object(driver.LXDDriver, '_migrate') def test_live_migration(self, mock_migrate): """Verify that the correct live migration calls are made. """ self.flags(my_ip='fakeip') mock_post_method = mock.MagicMock() self.driver.live_migration( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock_post_method, mock.sentinel.recover_method, mock.sentinel.block_migration, mock.sentinel.migrate_data) mock_migrate.assert_called_once_with(mock.sentinel.dest, mock.sentinel.instance) mock_post_method.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.block_migration) @mock.patch.object(driver.LXDDriver, '_migrate') def test_live_migration_failed(self, mock_migrate): """Verify that an exception is raised when live-migration fails. """ self.flags(my_ip='fakeip') mock_migrate.side_effect = \ lxd_exceptions.APIError(500, 'Fake') self.assertRaises( lxd_exceptions.APIError, self.driver.live_migration, mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.recover_method, mock.sentinel.block_migration, mock.sentinel.migrate_data) def test_live_migration_not_allowed(self): """Verify an exception is raised when live migration is not allowed.""" self.flags(allow_live_migration=False, group='lxd') self.assertRaises(exception.MigrationPreCheckError, self.driver.check_can_live_migrate_source, mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest_check_data, mock.sentinel.block_device_info) def test_live_migration_allowed(self): """Verify live-migration is allowed when the allow_lvie_migrate flag is True. """ self.flags(allow_live_migration=True, group='lxd') self.assertEqual(mock.sentinel.dest_check_data, self.driver.check_can_live_migrate_source( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest_check_data, mock.sentinel.block_device_info)) nova-lxd-17.0.0/nova/tests/unit/virt/lxd/fake_api.py0000666000175100017510000002311213246266025022332 0ustar zuulzuul00000000000000# Copyright (c) 2015 Canonical Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. def fake_standard_return(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": {} } def fake_host(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": { "api_compat": 1, "auth": "trusted", "config": {}, "environment": { "backing_fs": "ext4", "driver": "lxc", "kernel_version": "3.19.0-22-generic", "lxc_version": "1.1.2", "lxd_version": "0.12" } } } def fake_image_list_empty(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": [] } def fake_image_list(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": ['/1.0/images/trusty'] } def fake_image_info(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": { "aliases": [ { "target": "ubuntu", "description": "ubuntu" } ], "architecture": 2, "fingerprint": "04aac4257341478b49c25d22cea8a6ce" "0489dc6c42d835367945e7596368a37f", "filename": "", "properties": {}, "public": 0, "size": 67043148, "created_at": 0, "expires_at": 0, "uploaded_at": 1435669853 } } def fake_alias(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": { "target": "ubuntu", "description": "ubuntu" } } def fake_alias_list(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": [ "/1.0/images/aliases/ubuntu" ] } def fake_container_list(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": [ "/1.0/containers/trusty-1" ] } def fake_container_state(status): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": { "status_code": status } } def fake_container_log(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": { "log": "fake log" } } def fake_container_migrate(): return { "type": "async", "status": "Operation created", "status_code": 100, "metadata": { "id": "dbd9f22c-6da5-4066-8fca-c02f09f76738", "class": "websocket", "created_at": "2016-02-07T09:20:53.127321875-05:00", "updated_at": "2016-02-07T09:20:53.127321875-05:00", "status": "Running", "status_code": 103, "resources": { "containers": [ "/1.0/containers/instance-00000010" ] }, "metadata": { "control": "fake_control", "fs": "fake_fs" }, "may_cancel": 'false', "err": "" }, "operation": "/1.0/operations/dbd9f22c-6da5-4066-8fca-c02f09f76738" } def fake_snapshots_list(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": [ "/1.0/containers/trusty-1/snapshots/first" ] } def fake_certificate_list(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": [ "/1.0/certificates/ABCDEF01" ] } def fake_certificate(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": { "type": "client", "certificate": "ABCDEF01" } } def fake_profile_list(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": [ "/1.0/profiles/fake-profile" ] } def fake_profile(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": { "name": "fake-profile", "config": { "resources.memory": "2GB", "network.0.bridge": "lxcbr0" } } } def fake_operation_list(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": [ "/1.0/operations/1234" ] } def fake_operation(): return { "type": "async", "status": "OK", "status_code": 100, "operation": "/1.0/operation/1234", "metadata": { "created_at": "2015-06-09T19:07:24.379615253-06:00", "updated_at": "2015-06-09T19:07:23.379615253-06:00", "status": "Running", "status_code": 103, "resources": { "containers": ["/1.0/containers/1"] }, "metadata": {}, "may_cancel": True } } def fake_operation_info_ok(): return { "type": "async", "status": "OK", "status_code": 200, "operation": "/1.0/operation/1234", "metadata": { "created_at": "2015-06-09T19:07:24.379615253-06:00", "updated_at": "2015-06-09T19:07:23.379615253-06:00", "status": "Completed", "status_code": 200, "resources": { "containers": ["/1.0/containers/1"] }, "metadata": {}, "may_cancel": True } } def fake_operation_info_failed(): return { "type": "async", "status": "OK", "status_code": 200, "operation": "/1.0/operation/1234", "metadata": { "created_at": "2015-06-09T19:07:24.379615253-06:00", "updated_at": "2015-06-09T19:07:23.379615253-06:00", "status": "Failure", "status_code": 400, "resources": { "containers": ["/1.0/containers/1"] }, "metadata": "Invalid container name", "may_cancel": True } } def fake_network_list(): return { "type": "sync", "status": "Success", "status_code": 200, "metadata": [ "/1.0/networks/lxcbr0" ] } def fake_network(): return { "type": "async", "status": "OK", "status_code": 100, "operation": "/1.0/operation/1234", "metadata": { "name": "lxcbr0", "type": "bridge", "members": ["/1.0/containers/trusty-1"] } } def fake_container_config(): return { 'name': "my-container", 'profiles': ["default"], 'architecture': 2, 'config': {"limits.cpus": "3"}, 'expanded_config': {"limits.cpus": "3"}, 'devices': { 'rootfs': { 'type': "disk", 'path': "/", 'source': "UUID=8f7fdf5e-dc60-4524-b9fe-634f82ac2fb6" } }, 'expanded_devices': { 'rootfs': { 'type': "disk", 'path': "/", 'source': "UUID=8f7fdf5e-dc60-4524-b9fe-634f82ac2fb6"} }, "eth0": { "type": "nic", "parent": "lxcbr0", "hwaddr": "00:16:3e:f4:e7:1c", "name": "eth0", "nictype": "bridged", } } def fake_container_info(): return { 'name': "my-container", 'profiles': ["default"], 'architecture': 2, 'config': {"limits.cpus": "3"}, 'expanded_config': {"limits.cpus": "3"}, 'devices': { 'rootfs': { 'type': "disk", 'path': "/", 'source': "UUID=8f7fdf5e-dc60-4524-b9fe-634f82ac2fb6" } }, 'expanded_devices': { 'rootfs': { 'type': "disk", 'path': "/", 'source': "UUID=8f7fdf5e-dc60-4524-b9fe-634f82ac2fb6"} }, "eth0": { "type": "nic", "parent": "lxcbr0", "hwaddr": "00:16:3e:f4:e7:1c", "name": "eth0", "nictype": "bridged", }, 'status': { 'status': "Running", 'status_code': 103, 'ips': [{'interface': "eth0", 'protocol': "INET6", 'address': "2001:470:b368:1020:1::2", 'host_veth': "vethGMDIY9"}, {'interface': "eth0", 'protocol': "INET", 'address': "172.16.15.30", 'host_veth': "vethGMDIY9"}]}, } nova-lxd-17.0.0/nova/tests/unit/virt/lxd/__init__.py0000666000175100017510000000000013246266025022321 0ustar zuulzuul00000000000000nova-lxd-17.0.0/nova/tests/unit/virt/lxd/test_driver.py0000666000175100017510000015333413246266025023137 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import json import base64 from contextlib import closing import eventlet from oslo_config import cfg import mock from nova import context from nova import exception from nova import utils from nova import test from nova.compute import manager from nova.compute import power_state from nova.compute import vm_states from nova.network import model as network_model from nova.tests.unit import fake_instance from pylxd import exceptions as lxdcore_exceptions import six from nova.virt.lxd import common from nova.virt.lxd import driver MockResponse = collections.namedtuple('Response', ['status_code']) MockContainer = collections.namedtuple('Container', ['name']) MockContainerState = collections.namedtuple( 'ContainerState', ['status', 'memory', 'status_code']) _VIF = { 'devname': 'lol0', 'type': 'bridge', 'id': '0123456789abcdef', 'address': 'ca:fe:de:ad:be:ef'} def fake_connection_info(volume, location, iqn, auth=False, transport=None): dev_name = 'ip-%s-iscsi-%s-lun-1' % (location, iqn) if transport is not None: dev_name = 'pci-0000:00:00.0-' + dev_name dev_path = '/dev/disk/by-path/%s' % (dev_name) ret = { 'driver_volume_type': 'iscsi', 'data': { 'volume_id': volume['id'], 'target_portal': location, 'target_iqn': iqn, 'target_lun': 1, 'device_path': dev_path, 'qos_specs': { 'total_bytes_sec': '102400', 'read_iops_sec': '200', } } } if auth: ret['data']['auth_method'] = 'CHAP' ret['data']['auth_username'] = 'foo' ret['data']['auth_password'] = 'bar' return ret class GetPowerStateTest(test.NoDBTestCase): """Tests for nova.virt.lxd.driver.LXDDriver.""" def test_running(self): state = driver._get_power_state(100) self.assertEqual(power_state.RUNNING, state) def test_shutdown(self): state = driver._get_power_state(102) self.assertEqual(power_state.SHUTDOWN, state) def test_nostate(self): state = driver._get_power_state(105) self.assertEqual(power_state.NOSTATE, state) def test_crashed(self): state = driver._get_power_state(108) self.assertEqual(power_state.CRASHED, state) def test_suspended(self): state = driver._get_power_state(109) self.assertEqual(power_state.SUSPENDED, state) def test_unknown(self): self.assertRaises(ValueError, driver._get_power_state, 69) class LXDDriverTest(test.NoDBTestCase): """Tests for nova.virt.lxd.driver.LXDDriver.""" def setUp(self): super(LXDDriverTest, self).setUp() self.Client_patcher = mock.patch('nova.virt.lxd.driver.pylxd.Client') self.Client = self.Client_patcher.start() self.client = mock.Mock() self.client.host_info = { 'environment': { 'storage': 'zfs', } } self.Client.return_value = self.client self.patchers = [] CONF_patcher = mock.patch('nova.virt.lxd.driver.CONF') self.patchers.append(CONF_patcher) self.CONF = CONF_patcher.start() self.CONF.instances_path = '/path/to/instances' self.CONF.my_ip = '0.0.0.0' self.CONF.config_drive_format = 'iso9660' # XXX: rockstar (03 Nov 2016) - This should be removed once # everything is where it should live. CONF2_patcher = mock.patch('nova.virt.lxd.driver.nova.conf.CONF') self.patchers.append(CONF2_patcher) self.CONF2 = CONF2_patcher.start() self.CONF2.lxd.root_dir = '/lxd' self.CONF2.lxd.pool = None self.CONF2.instances_path = '/i' # LXDDriver._after_reboot reads from the database and syncs container # state. These tests can't read from the database. after_reboot_patcher = mock.patch( 'nova.virt.lxd.driver.LXDDriver._after_reboot') self.patchers.append(after_reboot_patcher) self.after_reboot = after_reboot_patcher.start() bdige_patcher = mock.patch( 'nova.virt.lxd.driver.driver.block_device_info_get_ephemerals') self.patchers.append(bdige_patcher) self.block_device_info_get_ephemerals = bdige_patcher.start() self.block_device_info_get_ephemerals.return_value = [] vif_driver_patcher = mock.patch( 'nova.virt.lxd.driver.lxd_vif.LXDGenericVifDriver') self.patchers.append(vif_driver_patcher) self.LXDGenericVifDriver = vif_driver_patcher.start() self.vif_driver = mock.Mock() self.LXDGenericVifDriver.return_value = self.vif_driver vif_gc_patcher = mock.patch('nova.virt.lxd.driver.lxd_vif.get_config') self.patchers.append(vif_gc_patcher) self.get_config = vif_gc_patcher.start() self.get_config.return_value = { 'mac_address': '00:11:22:33:44:55', 'bridge': 'qbr0123456789a', } # NOTE: mock out fileutils to ensure that unit tests don't try # to manipulate the filesystem (breaks in package builds). driver.fileutils = mock.Mock() def tearDown(self): super(LXDDriverTest, self).tearDown() self.Client_patcher.stop() for patcher in self.patchers: patcher.stop() def test_init_host(self): """init_host initializes the pylxd Client.""" lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) self.Client.assert_called_once_with() self.assertEqual(self.client, lxd_driver.client) def test_init_host_fail(self): def side_effect(): raise lxdcore_exceptions.ClientConnectionFailed() self.Client.side_effect = side_effect self.Client.return_value = None lxd_driver = driver.LXDDriver(None) self.assertRaises(exception.HostNotFound, lxd_driver.init_host, None) def test_get_info(self): container = mock.Mock() container.state.return_value = MockContainerState( 'Running', {'usage': 4000, 'usage_peak': 4500}, 100) self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) info = lxd_driver.get_info(instance) self.assertEqual(power_state.RUNNING, info.state) def test_list_instances(self): self.client.containers.all.return_value = [ MockContainer('mock-instance-1'), MockContainer('mock-instance-2'), ] lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) instances = lxd_driver.list_instances() self.assertEqual(['mock-instance-1', 'mock-instance-2'], instances) @mock.patch('nova.virt.lxd.driver.IMAGE_API') @mock.patch('nova.virt.lxd.driver.lockutils.lock') def test_spawn_unified_image(self, lock, IMAGE_API=None): def image_get(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(404)) self.client.images.get_by_alias.side_effect = image_get self.client.images.exists.return_value = False image = {'name': mock.Mock(), 'disk_format': 'raw'} IMAGE_API.get.return_value = image def download_unified(*args, **kwargs): # unified image with metadata # structure is gzipped tarball, content: # / # metadata.yaml # rootfs/ unified_tgz = 'H4sIALpegVkAA+3SQQ7CIBCFYY7CCXRAppwHo66sTVpYeHsh0a'\ 'Ru1A2Lxv/bDGQmYZLHeM7plHLa3dN4NX1INQyhVRdV1vXFuIML'\ '4lVVopF28cZKp33elCWn2VpTjuWWy4e5L/2NmqcpX5Z91zdawD'\ 'HqT/kHrf/E+Xo0Vrtu9fTn+QMAAAAAAAAAAAAAAADYrgfk/3zn'\ 'ACgAAA==' with closing(open(kwargs['dest_path'], 'wb+')) as img: img.write(base64.b64decode(unified_tgz)) IMAGE_API.download = download_unified self.test_spawn() @mock.patch('nova.virt.configdrive.required_by') def test_spawn(self, configdrive, neutron_failure=None): def container_get(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(404)) self.client.containers.get.side_effect = container_get configdrive.return_value = False container = mock.Mock() self.client.containers.create.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) image_meta = mock.Mock() injected_files = mock.Mock() admin_password = mock.Mock() allocations = mock.Mock() network_info = [_VIF] block_device_info = mock.Mock() virtapi = manager.ComputeVirtAPI(mock.MagicMock()) lxd_driver = driver.LXDDriver(virtapi) lxd_driver.init_host(None) # XXX: rockstar (6 Jul 2016) - There are a number of XXX comments # related to these calls in spawn. They require some work before we # can take out these mocks and follow the real codepaths. lxd_driver.firewall_driver = mock.Mock() lxd_driver.spawn( ctx, instance, image_meta, injected_files, admin_password, allocations, network_info, block_device_info) self.vif_driver.plug.assert_called_once_with( instance, network_info[0]) fd = lxd_driver.firewall_driver fd.setup_basic_filtering.assert_called_once_with( instance, network_info) fd.apply_instance_filter.assert_called_once_with( instance, network_info) container.start.assert_called_once_with(wait=True) def test_spawn_already_exists(self): """InstanceExists is raised if the container already exists.""" ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) image_meta = mock.Mock() injected_files = mock.Mock() admin_password = mock.Mock() allocations = mock.Mock() lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) self.assertRaises( exception.InstanceExists, lxd_driver.spawn, ctx, instance, image_meta, injected_files, admin_password, allocations, None, None) @mock.patch('nova.virt.configdrive.required_by') def test_spawn_with_configdrive(self, configdrive): def container_get(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(404)) self.client.containers.get.side_effect = container_get configdrive.return_value = True ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) image_meta = mock.Mock() injected_files = mock.Mock() admin_password = mock.Mock() allocations = mock.Mock() network_info = [_VIF] block_device_info = mock.Mock() virtapi = manager.ComputeVirtAPI(mock.MagicMock()) lxd_driver = driver.LXDDriver(virtapi) lxd_driver.init_host(None) # XXX: rockstar (6 Jul 2016) - There are a number of XXX comments # related to these calls in spawn. They require some work before we # can take out these mocks and follow the real codepaths. lxd_driver.firewall_driver = mock.Mock() lxd_driver._add_configdrive = mock.Mock() lxd_driver.spawn( ctx, instance, image_meta, injected_files, admin_password, allocations, network_info, block_device_info) self.vif_driver.plug.assert_called_once_with( instance, network_info[0]) fd = lxd_driver.firewall_driver fd.setup_basic_filtering.assert_called_once_with( instance, network_info) fd.apply_instance_filter.assert_called_once_with( instance, network_info) configdrive.assert_called_once_with(instance) lxd_driver.client.profiles.get.assert_called_once_with(instance.name) @mock.patch('nova.virt.configdrive.required_by') def test_spawn_profile_fail(self, configdrive, neutron_failure=None): """Cleanup is called when profile creation fails.""" def container_get(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(404)) def profile_create(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(500)) self.client.containers.get.side_effect = container_get self.client.profiles.create.side_effect = profile_create configdrive.return_value = False ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) image_meta = mock.Mock() injected_files = mock.Mock() admin_password = mock.Mock() allocations = mock.Mock() network_info = [_VIF] block_device_info = mock.Mock() virtapi = manager.ComputeVirtAPI(mock.MagicMock()) lxd_driver = driver.LXDDriver(virtapi) lxd_driver.init_host(None) lxd_driver.cleanup = mock.Mock() self.assertRaises( lxdcore_exceptions.LXDAPIException, lxd_driver.spawn, ctx, instance, image_meta, injected_files, admin_password, allocations, network_info, block_device_info) lxd_driver.cleanup.assert_called_once_with( ctx, instance, network_info, block_device_info) @mock.patch('nova.virt.configdrive.required_by') def test_spawn_container_fail(self, configdrive, neutron_failure=None): """Cleanup is called when container creation fails.""" def container_get(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(404)) def container_create(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(500)) self.client.containers.get.side_effect = container_get self.client.containers.create.side_effect = container_create configdrive.return_value = False ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) image_meta = mock.Mock() injected_files = mock.Mock() admin_password = mock.Mock() allocations = mock.Mock() network_info = [_VIF] block_device_info = mock.Mock() virtapi = manager.ComputeVirtAPI(mock.MagicMock()) lxd_driver = driver.LXDDriver(virtapi) lxd_driver.init_host(None) lxd_driver.cleanup = mock.Mock() self.assertRaises( lxdcore_exceptions.LXDAPIException, lxd_driver.spawn, ctx, instance, image_meta, injected_files, admin_password, allocations, network_info, block_device_info) lxd_driver.cleanup.assert_called_once_with( ctx, instance, network_info, block_device_info) def test_spawn_container_start_fail(self, neutron_failure=None): def container_get(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(404)) def side_effect(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(200)) self.client.containers.get.side_effect = container_get container = mock.Mock() ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) image_meta = mock.Mock() injected_files = mock.Mock() admin_password = mock.Mock() allocations = mock.Mock() network_info = [_VIF] block_device_info = mock.Mock() virtapi = manager.ComputeVirtAPI(mock.MagicMock()) lxd_driver = driver.LXDDriver(virtapi) lxd_driver.init_host(None) lxd_driver.cleanup = mock.Mock() lxd_driver.client.containers.create = mock.Mock( side_effect=side_effect) container.start.side_effect = side_effect self.assertRaises( lxdcore_exceptions.LXDAPIException, lxd_driver.spawn, ctx, instance, image_meta, injected_files, admin_password, allocations, network_info, block_device_info) lxd_driver.cleanup.assert_called_once_with( ctx, instance, network_info, block_device_info) def _test_spawn_instance_with_network_events(self, neutron_failure=None): generated_events = [] def wait_timeout(): event = mock.MagicMock() if neutron_failure == 'timeout': raise eventlet.timeout.Timeout() elif neutron_failure == 'error': event.status = 'failed' else: event.status = 'completed' return event def fake_prepare(instance, event_name): m = mock.MagicMock() m.instance = instance m.event_name = event_name m.wait.side_effect = wait_timeout generated_events.append(m) return m virtapi = manager.ComputeVirtAPI(mock.MagicMock()) prepare = virtapi._compute.instance_events.prepare_for_instance_event prepare.side_effect = fake_prepare drv = driver.LXDDriver(virtapi) instance_href = fake_instance.fake_instance_obj( context.get_admin_context(), name='test', memory_mb=0) @mock.patch.object(drv, 'plug_vifs') @mock.patch('nova.virt.configdrive.required_by') def test_spawn(configdrive, plug_vifs): def container_get(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(404)) self.client.containers.get.side_effect = container_get configdrive.return_value = False ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) image_meta = mock.Mock() injected_files = mock.Mock() admin_password = mock.Mock() allocations = mock.Mock() network_info = [_VIF] block_device_info = mock.Mock() drv.init_host(None) drv.spawn( ctx, instance, image_meta, injected_files, admin_password, allocations, network_info, block_device_info) test_spawn() if cfg.CONF.vif_plugging_timeout and utils.is_neutron(): prepare.assert_has_calls([ mock.call(instance_href, 'network-vif-plugged-vif1'), mock.call(instance_href, 'network-vif-plugged-vif2')]) for event in generated_events: if neutron_failure and generated_events.index(event) != 0: self.assertEqual(0, event.call_count) else: self.assertEqual(0, prepare.call_count) @mock.patch('nova.utils.is_neutron', return_value=True) def test_spawn_instance_with_network_events(self, is_neutron): self.flags(vif_plugging_timeout=0) self._test_spawn_instance_with_network_events() @mock.patch('nova.utils.is_neutron', return_value=True) def test_spawn_instance_with_events_neutron_failed_nonfatal_timeout( self, is_neutron): self.flags(vif_plugging_timeout=0) self.flags(vif_plugging_is_fatal=False) self._test_spawn_instance_with_network_events( neutron_failure='timeout') def test_destroy(self): mock_container = mock.Mock() mock_container.status = 'Running' self.client.containers.get.return_value = mock_container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [_VIF] lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.cleanup = mock.Mock() # There is a separate cleanup test lxd_driver.destroy(ctx, instance, network_info) lxd_driver.cleanup.assert_called_once_with( ctx, instance, network_info, None) lxd_driver.client.containers.get.assert_called_once_with(instance.name) mock_container.stop.assert_called_once_with(wait=True) mock_container.delete.assert_called_once_with(wait=True) def test_destroy_when_in_rescue(self): mock_stopped_container = mock.Mock() mock_stopped_container.status = 'Stopped' mock_rescued_container = mock.Mock() mock_rescued_container.status = 'Running' ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [_VIF] lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.cleanup = mock.Mock() # set the vm_state on the fake instance to RESCUED instance.vm_state = vm_states.RESCUED # set up the containers.get to return the stopped container and then # the rescued container self.client.containers.get.side_effect = [ mock_stopped_container, mock_rescued_container] lxd_driver.destroy(ctx, instance, network_info) lxd_driver.cleanup.assert_called_once_with( ctx, instance, network_info, None) lxd_driver.client.containers.get.assert_has_calls([ mock.call(instance.name), mock.call('{}-rescue'.format(instance.name))]) mock_stopped_container.stop.assert_not_called() mock_stopped_container.delete.assert_called_once_with(wait=True) mock_rescued_container.stop.assert_called_once_with(wait=True) mock_rescued_container.delete.assert_called_once_with(wait=True) def test_destroy_without_instance(self): def side_effect(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(404)) self.client.containers.get.side_effect = side_effect ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [_VIF] lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.cleanup = mock.Mock() # There is a separate cleanup test lxd_driver.destroy(ctx, instance, network_info) lxd_driver.cleanup.assert_called_once_with( ctx, instance, network_info, None) @mock.patch('nova.virt.lxd.driver.network') @mock.patch('os.path.exists', mock.Mock(return_value=True)) @mock.patch('pwd.getpwuid') @mock.patch('shutil.rmtree') @mock.patch.object(driver.utils, 'execute') def test_cleanup(self, execute, rmtree, getpwuid, _): mock_profile = mock.Mock() self.client.profiles.get.return_value = mock_profile pwuid = mock.Mock() pwuid.pw_name = 'user' getpwuid.return_value = pwuid ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [_VIF] instance_dir = common.InstanceAttributes(instance).instance_dir block_device_info = mock.Mock() lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.firewall_driver = mock.Mock() lxd_driver.cleanup(ctx, instance, network_info, block_device_info) self.vif_driver.unplug.assert_called_once_with( instance, network_info[0]) lxd_driver.firewall_driver.unfilter_instance.assert_called_once_with( instance, network_info) execute.assert_called_once_with( 'chown', '-R', 'user:user', instance_dir, run_as_root=True) rmtree.assert_called_once_with(instance_dir) mock_profile.delete.assert_called_once_with() def test_reboot(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.reboot(ctx, instance, None, None) self.client.containers.get.assert_called_once_with(instance.name) @mock.patch('nova.virt.lxd.driver.network') @mock.patch('pwd.getpwuid', mock.Mock(return_value=mock.Mock(pw_uid=1234))) @mock.patch('os.getuid', mock.Mock()) @mock.patch('os.path.exists', mock.Mock(return_value=True)) @mock.patch('six.moves.builtins.open') @mock.patch.object(driver.utils, 'execute') def test_get_console_output(self, execute, _open, _): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) expected_calls = [ mock.call( 'chown', '1234:1234', '/var/log/lxd/{}/console.log'.format( instance.name), run_as_root=True), mock.call( 'chmod', '755', '/lxd/containers/{}'.format( instance.name), run_as_root=True), ] _open.return_value.__enter__.return_value = six.BytesIO(b'output') lxd_driver = driver.LXDDriver(None) contents = lxd_driver.get_console_output(context, instance) self.assertEqual(b'output', contents) self.assertEqual(expected_calls, execute.call_args_list) def test_get_host_ip_addr(self): lxd_driver = driver.LXDDriver(None) result = lxd_driver.get_host_ip_addr() self.assertEqual('0.0.0.0', result) def test_attach_interface(self): expected = { 'hwaddr': '00:11:22:33:44:55', 'parent': 'tin0123456789a', 'nictype': 'physical', 'type': 'nic', } profile = mock.Mock() profile.devices = { 'eth0': { 'name': 'eth0', 'nictype': 'bridged', 'parent': 'lxdbr0', 'type': 'nic' }, 'root': { 'path': '/', 'type': 'disk' }, } self.client.profiles.get.return_value = profile ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) image_meta = None vif = { 'id': '0123456789abcdef', 'type': network_model.VIF_TYPE_OVS, 'address': '00:11:22:33:44:55', 'network': { 'bridge': 'fakebr'}, 'devname': 'tap0123456789a'} lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.firewall_driver = mock.Mock() lxd_driver.attach_interface(ctx, instance, image_meta, vif) self.assertTrue('tap0123456789a' in profile.devices) self.assertEqual(expected, profile.devices['tap0123456789a']) profile.save.assert_called_once_with(wait=True) def test_detach_interface_legacy(self): profile = mock.Mock() profile.devices = { 'eth0': { 'nictype': 'bridged', 'parent': 'lxdbr0', 'hwaddr': '00:11:22:33:44:55', 'type': 'nic' }, 'root': { 'path': '/', 'type': 'disk' }, } self.client.profiles.get.return_value = profile ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) vif = { 'id': '0123456789abcdef', 'type': network_model.VIF_TYPE_OVS, 'address': '00:11:22:33:44:55', 'network': { 'bridge': 'fakebr'}} lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.detach_interface(ctx, instance, vif) self.vif_driver.unplug.assert_called_once_with( instance, vif) self.assertEqual(['root'], sorted(profile.devices.keys())) profile.save.assert_called_once_with(wait=True) def test_detach_interface(self): profile = mock.Mock() profile.devices = { 'tap0123456789a': { 'nictype': 'physical', 'parent': 'tin0123456789a', 'hwaddr': '00:11:22:33:44:55', 'type': 'nic' }, 'root': { 'path': '/', 'type': 'disk' }, } self.client.profiles.get.return_value = profile ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) vif = { 'id': '0123456789abcdef', 'type': network_model.VIF_TYPE_OVS, 'address': '00:11:22:33:44:55', 'network': { 'bridge': 'fakebr'}} lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.detach_interface(ctx, instance, vif) self.vif_driver.unplug.assert_called_once_with( instance, vif) self.assertEqual(['root'], sorted(profile.devices.keys())) profile.save.assert_called_once_with(wait=True) def test_migrate_disk_and_power_off(self): container = mock.Mock() self.client.containers.get.return_value = container profile = mock.Mock() self.client.profiles.get.return_value = profile ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) dest = '0.0.0.0' flavor = mock.Mock() network_info = [] lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.migrate_disk_and_power_off( ctx, instance, dest, flavor, network_info) profile.save.assert_called_once_with() container.stop.assert_called_once_with(wait=True) def test_migrate_disk_and_power_off_different_host(self): """Migrating to a different host only shuts down the container.""" container = mock.Mock() self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) dest = '0.0.0.1' flavor = mock.Mock() network_info = [] lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.migrate_disk_and_power_off( ctx, instance, dest, flavor, network_info) self.assertEqual(0, self.client.profiles.get.call_count) container.stop.assert_called_once_with(wait=True) @mock.patch('nova.virt.lxd.driver.network') @mock.patch('os.major') @mock.patch('os.minor') @mock.patch('os.stat') @mock.patch('os.path.realpath') def test_attach_volume(self, realpath, stat, minor, major, _): profile = mock.Mock() self.client.profiles.get.return_value = profile realpath.return_value = '/dev/sdc' stat.return_value.st_rdev = 2080 minor.return_value = 32 major.return_value = 8 ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) connection_info = fake_connection_info( {'id': 1, 'name': 'volume-00000001'}, '10.0.2.15:3260', 'iqn.2010-10.org.openstack:volume-00000001', auth=True) mountpoint = '/dev/sdd' driver.brick_get_connector = mock.MagicMock() driver.brick_get_connector_properties = mock.MagicMock() lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) # driver.brick_get_connector = mock.MagicMock() # lxd_driver.storage_driver.connect_volume = mock.MagicMock() lxd_driver.attach_volume( ctx, connection_info, instance, mountpoint, None, None, None) lxd_driver.client.profiles.get.assert_called_once_with(instance.name) # driver.brick_get_connector.connect_volume.assert_called_once_with( # connection_info['data']) profile.save.assert_called_once_with() def test_detach_volume(self): profile = mock.Mock() profile.devices = { 'eth0': { 'name': 'eth0', 'nictype': 'bridged', 'parent': 'lxdbr0', 'type': 'nic' }, 'root': { 'path': '/', 'type': 'disk' }, 1: { 'path': '/dev/sdc', 'type': 'unix-block' }, } expected = { 'eth0': { 'name': 'eth0', 'nictype': 'bridged', 'parent': 'lxdbr0', 'type': 'nic' }, 'root': { 'path': '/', 'type': 'disk' }, } self.client.profiles.get.return_value = profile ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) connection_info = fake_connection_info( {'id': 1, 'name': 'volume-00000001'}, '10.0.2.15:3260', 'iqn.2010-10.org.openstack:volume-00000001', auth=True) mountpoint = mock.Mock() lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) driver.brick_get_connector = mock.MagicMock() driver.brick_get_connector_properties = mock.MagicMock() lxd_driver.detach_volume(connection_info, instance, mountpoint, None) lxd_driver.client.profiles.get.assert_called_once_with(instance.name) self.assertEqual(expected, profile.devices) profile.save.assert_called_once_with() def test_pause(self): container = mock.Mock() self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.pause(instance) self.client.containers.get.assert_called_once_with(instance.name) container.freeze.assert_called_once_with(wait=True) def test_unpause(self): container = mock.Mock() self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.unpause(instance) self.client.containers.get.assert_called_once_with(instance.name) container.unfreeze.assert_called_once_with(wait=True) def test_suspend(self): container = mock.Mock() self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.suspend(ctx, instance) self.client.containers.get.assert_called_once_with(instance.name) container.freeze.assert_called_once_with(wait=True) def test_resume(self): container = mock.Mock() self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.resume(ctx, instance, None, None) self.client.containers.get.assert_called_once_with(instance.name) container.unfreeze.assert_called_once_with(wait=True) def test_resume_state_on_host_boot(self): container = mock.Mock() state = mock.Mock() state.memory = dict({'usage': 0, 'usage_peak': 0}) state.status_code = 102 container.state.return_value = state self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.resume_state_on_host_boot(ctx, instance, None, None) container.start.assert_called_once_with(wait=True) def test_rescue(self): profile = mock.Mock() profile.devices = { 'root': { 'type': 'disk', 'path': '/', 'size': '1GB' } } container = mock.Mock() self.client.profiles.get.return_value = profile self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) profile.name = instance.name network_info = [_VIF] image_meta = mock.Mock() rescue_password = mock.Mock() rescue = '%s-rescue' % instance.name lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.rescue( ctx, instance, network_info, image_meta, rescue_password) lxd_driver.client.containers.get.assert_called_once_with(instance.name) container.rename.assert_called_once_with(rescue, wait=True) lxd_driver.client.profiles.get.assert_called_once_with(instance.name) lxd_driver.client.containers.create.assert_called_once_with( {'name': instance.name, 'profiles': [profile.name], 'source': {'type': 'image', 'alias': None}, }, wait=True) self.assertTrue('rescue' in profile.devices) def test_unrescue(self): container = mock.Mock() container.status = 'Running' self.client.containers.get.return_value = container profile = mock.Mock() profile.devices = { 'root': { 'type': 'disk', 'path': '/', 'size': '1GB' }, 'rescue': { 'source': '/path', 'path': '/mnt', 'type': 'disk' } } self.client.profiles.get.return_value = profile ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [_VIF] rescue = '%s-rescue' % instance.name lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.unrescue(instance, network_info) container.stop.assert_called_once_with(wait=True) container.delete.assert_called_once_with(wait=True) lxd_driver.client.profiles.get.assert_called_once_with(instance.name) profile.save.assert_called_once_with() lxd_driver.client.containers.get.assert_called_with(rescue) container.rename.assert_called_once_with(instance.name, wait=True) container.start.assert_called_once_with(wait=True) self.assertTrue('rescue' not in profile.devices) def test_power_off(self): container = mock.Mock() container.status = 'Running' self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.power_off(instance) self.client.containers.get.assert_called_once_with(instance.name) container.stop.assert_called_once_with(wait=True) def test_power_on(self): container = mock.Mock() container.status = 'Stopped' self.client.containers.get.return_value = container ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.power_on(ctx, instance, None) self.client.containers.get.assert_called_once_with(instance.name) container.start.assert_called_once_with(wait=True) @mock.patch('socket.gethostname', mock.Mock(return_value='fake_hostname')) @mock.patch('os.statvfs', return_value=mock.Mock( f_blocks=131072000, f_bsize=8192, f_bavail=65536000)) @mock.patch('nova.virt.lxd.driver.open') @mock.patch.object(driver.utils, 'execute') def test_get_available_resource(self, execute, open, statvfs): expected = { 'cpu_info': { "features": "fake flag goes here", "model": "Fake CPU", "topology": {"sockets": "10", "threads": "4", "cores": "5"}, "arch": "x86_64", "vendor": "FakeVendor" }, 'hypervisor_hostname': 'fake_hostname', 'hypervisor_type': 'lxd', 'hypervisor_version': '011', 'local_gb': 1000, 'local_gb_used': 500, 'memory_mb': 10000, 'memory_mb_used': 8000, 'numa_topology': None, 'supported_instances': [ ('i686', 'lxd', 'exe'), ('x86_64', 'lxd', 'exe'), ('i686', 'lxc', 'exe'), ('x86_64', 'lxc', 'exe')], 'vcpus': 200, 'vcpus_used': 0} execute.return_value = ( 'Model name: Fake CPU\n' 'Vendor ID: FakeVendor\n' 'Socket(s): 10\n' 'Core(s) per socket: 5\n' 'Thread(s) per core: 4\n\n', None) meminfo = mock.MagicMock() meminfo.__enter__.return_value = six.moves.cStringIO( 'MemTotal: 10240000 kB\n' 'MemFree: 2000000 kB\n' 'Buffers: 24000 kB\n' 'Cached: 24000 kB\n') open.side_effect = [ six.moves.cStringIO('flags: fake flag goes here\n' 'processor: 2\n' '\n'), meminfo, ] lxd_config = { 'environment': { 'storage': 'dir', }, 'config': {} } lxd_driver = driver.LXDDriver(None) lxd_driver.client = mock.MagicMock() lxd_driver.client.host_info = lxd_config value = lxd_driver.get_available_resource(None) # This is funky, but json strings make for fragile tests. value['cpu_info'] = json.loads(value['cpu_info']) self.assertEqual(expected, value) @mock.patch('socket.gethostname', mock.Mock(return_value='fake_hostname')) @mock.patch('nova.virt.lxd.driver.open') @mock.patch.object(driver.utils, 'execute') def test_get_available_resource_zfs(self, execute, open): expected = { 'cpu_info': { "features": "fake flag goes here", "model": "Fake CPU", "topology": {"sockets": "10", "threads": "4", "cores": "5"}, "arch": "x86_64", "vendor": "FakeVendor" }, 'hypervisor_hostname': 'fake_hostname', 'hypervisor_type': 'lxd', 'hypervisor_version': '011', 'local_gb': 2222, 'local_gb_used': 200, 'memory_mb': 10000, 'memory_mb_used': 8000, 'numa_topology': None, 'supported_instances': [ ('i686', 'lxd', 'exe'), ('x86_64', 'lxd', 'exe'), ('i686', 'lxc', 'exe'), ('x86_64', 'lxc', 'exe')], 'vcpus': 200, 'vcpus_used': 0} execute.side_effect = [ ('Model name: Fake CPU\n' 'Vendor ID: FakeVendor\n' 'Socket(s): 10\n' 'Core(s) per socket: 5\n' 'Thread(s) per core: 4\n\n', None), ('2.17T\n', None), ('200.4G\n', None), ('1.8T\n', None) ] meminfo = mock.MagicMock() meminfo.__enter__.return_value = six.moves.cStringIO( 'MemTotal: 10240000 kB\n' 'MemFree: 2000000 kB\n' 'Buffers: 24000 kB\n' 'Cached: 24000 kB\n') open.side_effect = [ six.moves.cStringIO('flags: fake flag goes here\n' 'processor: 2\n' '\n'), meminfo, ] lxd_config = { 'environment': { 'storage': 'zfs', }, 'config': { 'storage.zfs_pool_name': 'lxd', } } lxd_driver = driver.LXDDriver(None) lxd_driver.client = mock.MagicMock() lxd_driver.client.host_info = lxd_config value = lxd_driver.get_available_resource(None) # This is funky, but json strings make for fragile tests. value['cpu_info'] = json.loads(value['cpu_info']) self.assertEqual(expected, value) def test_refresh_instance_security_rules(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) firewall = mock.Mock() lxd_driver = driver.LXDDriver(None) lxd_driver.firewall_driver = firewall lxd_driver.refresh_instance_security_rules(instance) firewall.refresh_instance_security_rules.assert_called_once_with( instance) def test_ensure_filtering_rules_for_instance(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) firewall = mock.Mock() network_info = object() lxd_driver = driver.LXDDriver(None) lxd_driver.firewall_driver = firewall lxd_driver.ensure_filtering_rules_for_instance(instance, network_info) firewall.ensure_filtering_rules_for_instance.assert_called_once_with( instance, network_info) def test_filter_defer_apply_on(self): firewall = mock.Mock() lxd_driver = driver.LXDDriver(None) lxd_driver.firewall_driver = firewall lxd_driver.filter_defer_apply_on() firewall.filter_defer_apply_on.assert_called_once_with() def test_filter_defer_apply_off(self): firewall = mock.Mock() lxd_driver = driver.LXDDriver(None) lxd_driver.firewall_driver = firewall lxd_driver.filter_defer_apply_off() firewall.filter_defer_apply_off.assert_called_once_with() def test_unfilter_instance(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) firewall = mock.Mock() network_info = object() lxd_driver = driver.LXDDriver(None) lxd_driver.firewall_driver = firewall lxd_driver.unfilter_instance(instance, network_info) firewall.unfilter_instance.assert_called_once_with( instance, network_info) @mock.patch.object(driver.utils, 'execute') def test_get_host_uptime(self, execute): expected = '00:00:00 up 0 days, 0:00 , 0 users, load average: 0' execute.return_value = (expected, 'stderr') lxd_driver = driver.LXDDriver(None) result = lxd_driver.get_host_uptime() self.assertEqual(expected, result) @mock.patch('nova.virt.lxd.driver.psutil.cpu_times') @mock.patch('nova.virt.lxd.driver.open') @mock.patch.object(driver.utils, 'execute') def test_get_host_cpu_stats(self, execute, open, cpu_times): cpu_times.return_value = [ '1', 'b', '2', '3', '4' ] execute.return_value = ( 'Model name: Fake CPU\n' 'Vendor ID: FakeVendor\n' 'Socket(s): 10\n' 'Core(s) per socket: 5\n' 'Thread(s) per core: 4\n\n', None) open.return_value = six.moves.cStringIO( 'flags: fake flag goes here\n' 'processor: 2\n\n') expected = { 'user': 1, 'iowait': 4, 'frequency': 0, 'kernel': 2, 'idle': 3} lxd_driver = driver.LXDDriver(None) result = lxd_driver.get_host_cpu_stats() self.assertEqual(expected, result) def test_get_volume_connector(self): expected = { 'host': 'fakehost', 'initiator': 'fake', 'ip': self.CONF.my_block_storage_ip } ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) lxd_driver = driver.LXDDriver(None) result = lxd_driver.get_volume_connector(instance) self.assertEqual(expected, result) @mock.patch('nova.virt.lxd.driver.socket.gethostname') def test_get_available_nodes(self, gethostname): gethostname.return_value = 'nova-lxd' expected = ['nova-lxd'] lxd_driver = driver.LXDDriver(None) result = lxd_driver.get_available_nodes() self.assertEqual(expected, result) @mock.patch('nova.virt.lxd.driver.IMAGE_API') @mock.patch('nova.virt.lxd.driver.lockutils.lock') def test_snapshot(self, lock, IMAGE_API): update_task_state_expected = [ mock.call(task_state='image_pending_upload'), mock.call( expected_state='image_pending_upload', task_state='image_uploading'), ] container = mock.Mock() self.client.containers.get.return_value = container image = mock.Mock() container.publish.return_value = image data = mock.Mock() image.export.return_value = data ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) image_id = mock.Mock() update_task_state = mock.Mock() snapshot = {'name': mock.Mock()} IMAGE_API.get.return_value = snapshot lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.snapshot(ctx, instance, image_id, update_task_state) self.assertEqual( update_task_state_expected, update_task_state.call_args_list) IMAGE_API.get.assert_called_once_with(ctx, image_id) IMAGE_API.update.assert_called_once_with( ctx, image_id, { 'name': snapshot['name'], 'disk_format': 'raw', 'container_format': 'bare'}, data) def test_finish_revert_migration(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [] container = mock.Mock() self.client.containers.get.return_value = container lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.finish_revert_migration(ctx, instance, network_info) container.start.assert_called_once_with(wait=True) def test_check_can_live_migrate_destination(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) src_compute_info = mock.Mock() dst_compute_info = mock.Mock() def container_get(*args, **kwargs): raise lxdcore_exceptions.LXDAPIException(MockResponse(404)) self.client.containers.get.side_effect = container_get lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) retval = lxd_driver.check_can_live_migrate_destination( ctx, instance, src_compute_info, dst_compute_info) self.assertIsInstance(retval, driver.LXDLiveMigrateData) def test_confirm_migration(self): migration = mock.Mock() instance = fake_instance.fake_instance_obj( context.get_admin_context, name='test', memory_mb=0) network_info = [] profile = mock.Mock() container = mock.Mock() self.client.profiles.get.return_value = profile self.client.containers.get.return_value = container lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.confirm_migration(migration, instance, network_info) profile.delete.assert_called_once_with() container.delete.assert_called_once_with(wait=True) def test_post_live_migration(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) container = mock.Mock() self.client.containers.get.return_value = container lxd_driver = driver.LXDDriver(None) lxd_driver.init_host(None) lxd_driver.post_live_migration(context, instance, None) container.delete.assert_called_once_with(wait=True) def test_post_live_migration_at_source(self): ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) network_info = [] profile = mock.Mock() self.client.profiles.get.return_value = profile lxd_driver = driver.LXDDriver(None) lxd_driver.cleanup = mock.Mock() lxd_driver.init_host(None) lxd_driver.post_live_migration_at_source( ctx, instance, network_info) profile.delete.assert_called_once_with() lxd_driver.cleanup.assert_called_once_with(ctx, instance, network_info) nova-lxd-17.0.0/nova/tests/unit/virt/lxd/test_session.py0000666000175100017510000000727413246266025023330 0ustar zuulzuul00000000000000# Copyright 2015 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing # permissions and limitations under the License. """ Unit tests for ContinerMixin class The following tests the ContainerMixin class for nova-lxd. """ import ddt import mock from nova import exception from nova import test from pylxd.deprecated import exceptions as lxd_exceptions from nova.virt.lxd import session import fake_api import stubs @ddt.ddt class SessionContainerTest(test.NoDBTestCase): def setUp(self): super(SessionContainerTest, self).setUp() """This is so we can mock out pylxd API calls.""" self.ml = stubs.lxd_mock() lxd_patcher = mock.patch('pylxd.deprecated.api.API', mock.Mock(return_value=self.ml)) lxd_patcher.start() self.addCleanup(lxd_patcher.stop) self.session = session.LXDAPISession() @stubs.annotated_data( ('1', (200, fake_api.fake_operation_info_ok())) ) def test_container_init(self, tag, side_effect): """ conatainer_init creates a container based on given config for a container. Check to see if we are returning the right pylxd calls for the LXD API. """ config = mock.Mock() instance = stubs._fake_instance() self.ml.container_init.return_value = side_effect self.ml.operation_info.return_value = \ (200, fake_api.fake_container_state(200)) self.assertIsNone(self.session.container_init(config, instance)) calls = [mock.call.container_init(config), mock.call.wait_container_operation( '/1.0/operation/1234', 200, -1), mock.call.operation_info('/1.0/operation/1234')] self.assertEqual(calls, self.ml.method_calls) @stubs.annotated_data( ('api_fail', lxd_exceptions.APIError(500, 'Fake'), exception.NovaException), ) def test_container_init_fail(self, tag, side_effect, expected): """ continer_init create as container on a given LXD host. Make sure that we reaise an exception.NovaException if there is an APIError from the LXD API. """ config = mock.Mock() instance = stubs._fake_instance() self.ml.container_init.side_effect = side_effect self.assertRaises(expected, self.session.container_init, config, instance) @ddt.ddt class SessionEventTest(test.NoDBTestCase): def setUp(self): super(SessionEventTest, self).setUp() self.ml = stubs.lxd_mock() lxd_patcher = mock.patch('pylxd.deprecated.api.API', mock.Mock(return_value=self.ml)) lxd_patcher.start() self.addCleanup(lxd_patcher.stop) self.session = session.LXDAPISession() def test_container_wait(self): instance = stubs._fake_instance() operation_id = mock.Mock() self.ml.wait_container_operation.return_value = True self.assertIsNone(self.session.operation_wait(operation_id, instance)) self.ml.wait_container_operation.assert_called_with(operation_id, 200, -1) nova-lxd-17.0.0/nova/tests/unit/virt/lxd/stubs.py0000666000175100017510000000633613246266025021744 0ustar zuulzuul00000000000000# Copyright (c) 2015 Canonical Ltd # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ddt import mock from nova import context from nova.tests.unit import fake_instance class MockConf(mock.Mock): def __init__(self, lxd_args=(), lxd_kwargs={}, *args, **kwargs): default = { 'config_drive_format': None, 'instances_path': '/fake/instances/path', 'image_cache_subdirectory_name': '/fake/image/cache', 'vif_plugging_timeout': 10, 'my_ip': '1.2.3.4', 'vlan_interface': 'vlanif', 'flat_interface': 'flatif', } default.update(kwargs) super(MockConf, self).__init__(*args, **default) lxd_default = { 'root_dir': '/fake/lxd/root', 'timeout': 20, 'retry_interval': 2, } lxd_default.update(lxd_kwargs) self.lxd = mock.Mock(lxd_args, **lxd_default) class MockInstance(mock.Mock): def __init__(self, name='fake-uuid', uuid='fake-uuid', image_ref='mock_image', ephemeral_gb=0, memory_mb=-1, vcpus=0, *args, **kwargs): super(MockInstance, self).__init__( uuid=uuid, image_ref=image_ref, ephemeral_gb=ephemeral_gb, *args, **kwargs) self.uuid = uuid self.name = name self.flavor = mock.Mock(memory_mb=memory_mb, vcpus=vcpus) def lxd_mock(*args, **kwargs): default = { 'profile_list.return_value': ['fake_profile'], 'container_list.return_value': ['mock-instance-1', 'mock-instance-2'], 'host_ping.return_value': True, } default.update(kwargs) return mock.Mock(*args, **default) def annotated_data(*args): class List(list): pass class Dict(dict): pass new_args = [] for arg in args: if isinstance(arg, (list, tuple)): new_arg = List(arg) new_arg.__name__ = arg[0] elif isinstance(arg, dict): new_arg = Dict(arg) new_arg.__name__ = arg['tag'] else: raise TypeError('annotate_data can only handle dicts, ' 'lists and tuples') new_args.append(new_arg) return lambda func: ddt.data(*new_args)(ddt.unpack(func)) def _fake_instance(): ctxt = context.get_admin_context() _instance_values = { 'display_name': 'fake_display_name', 'name': 'fake_name', 'uuid': 'fake_uuid', 'image_ref': 'fake_image', 'vcpus': 1, 'memory_mb': 512, 'root_gb': 10, 'host': 'fake_host', 'expected_attrs': ['system_metadata'], } return fake_instance.fake_instance_obj( ctxt, **_instance_values) nova-lxd-17.0.0/nova/tests/unit/virt/lxd/test_storage.py0000666000175100017510000002256213246266025023306 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import context from nova import test from nova.tests.unit import fake_instance from nova.virt.lxd import storage class TestAttachEphemeral(test.NoDBTestCase): """Tests for nova.virt.lxd.storage.attach_ephemeral.""" def setUp(self): super(TestAttachEphemeral, self).setUp() self.patchers = [] CONF_patcher = mock.patch('nova.virt.lxd.common.conf.CONF') self.patchers.append(CONF_patcher) self.CONF = CONF_patcher.start() self.CONF.instances_path = '/i' self.CONF.lxd.root_dir = '/var/lib/lxd' def tearDown(self): super(TestAttachEphemeral, self).tearDown() for patcher in self.patchers: patcher.stop() @mock.patch.object(storage.utils, 'execute') @mock.patch( 'nova.virt.lxd.storage.driver.block_device_info_get_ephemerals') def test_add_ephemerals_with_zfs( self, block_device_info_get_ephemerals, execute): ctx = context.get_admin_context() block_device_info_get_ephemerals.return_value = [ {'virtual_name': 'ephemerals0'}] instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) block_device_info = mock.Mock() lxd_config = {'environment': {'storage': 'zfs'}, 'config': {'storage.zfs_pool_name': 'zfs'}} container = mock.Mock() container.config = { 'volatile.last_state.idmap': '[{"Isuid":true,"Isgid":false,' '"Hostid":165536,"Nsid":0,' '"Maprange":65536}]' } client = mock.Mock() client.containers.get.return_value = container storage.attach_ephemeral( client, block_device_info, lxd_config, instance) block_device_info_get_ephemerals.assert_called_once_with( block_device_info) expected_calls = [ mock.call( 'zfs', 'create', '-o', 'mountpoint=/i/instance-00000001/storage/ephemerals0', '-o', 'quota=0G', 'zfs/instance-00000001-ephemeral', run_as_root=True), mock.call( 'chown', '165536', '/i/instance-00000001/storage/ephemerals0', run_as_root=True) ] self.assertEqual(expected_calls, execute.call_args_list) @mock.patch.object(storage.utils, 'execute') @mock.patch( 'nova.virt.lxd.storage.driver.block_device_info_get_ephemerals') def test_add_ephemerals_with_btrfs( self, block_device_info_get_ephemerals, execute): ctx = context.get_admin_context() block_device_info_get_ephemerals.return_value = [ {'virtual_name': 'ephemerals0'}] instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) instance.ephemeral_gb = 1 block_device_info = mock.Mock() lxd_config = {'environment': {'storage': 'btrfs'}} profile = mock.Mock() profile.devices = { 'root': { 'path': '/', 'type': 'disk', 'size': '1G' }, 'ephemerals0': { 'optional': 'True', 'path': '/mnt', 'source': '/path/fake_path', 'type': 'disk' } } client = mock.Mock() client.profiles.get.return_value = profile container = mock.Mock() container.config = { 'volatile.last_state.idmap': '[{"Isuid":true,"Isgid":false,' '"Hostid":165536,"Nsid":0,' '"Maprange":65536}]' } client.containers.get.return_value = container storage.attach_ephemeral( client, block_device_info, lxd_config, instance) block_device_info_get_ephemerals.assert_called_once_with( block_device_info) profile.save.assert_called_once_with() expected_calls = [ mock.call( 'btrfs', 'subvolume', 'create', '/var/lib/lxd/containers/instance-00000001/ephemerals0', run_as_root=True), mock.call( 'btrfs', 'qgroup', 'limit', '1g', '/var/lib/lxd/containers/instance-00000001/ephemerals0', run_as_root=True), mock.call( 'chown', '165536', '/var/lib/lxd/containers/instance-00000001/ephemerals0', run_as_root=True) ] self.assertEqual(expected_calls, execute.call_args_list) self.assertEqual( profile.devices['ephemerals0']['source'], '/var/lib/lxd/containers/instance-00000001/ephemerals0') @mock.patch.object(storage.utils, 'execute') @mock.patch( 'nova.virt.lxd.storage.driver.block_device_info_get_ephemerals') def test_ephemeral_with_lvm( self, block_device_info_get_ephemerals, execute): ctx = context.get_admin_context() block_device_info_get_ephemerals.return_value = [ {'virtual_name': 'ephemerals0'}] instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) block_device_info = mock.Mock() lxd_config = {'environment': {'storage': 'lvm'}, 'config': {'storage.lvm_vg_name': 'lxd'}} storage.fileutils = mock.Mock() container = mock.Mock() container.config = { 'volatile.last_state.idmap': '[{"Isuid":true,"Isgid":false,' '"Hostid":165536,"Nsid":0,' '"Maprange":65536}]' } client = mock.Mock() client.containers.get.return_value = container storage.attach_ephemeral( client, block_device_info, lxd_config, instance) block_device_info_get_ephemerals.assert_called_once_with( block_device_info) expected_calls = [ mock.call( 'lvcreate', '-L', '0G', '-n', 'instance-00000001-ephemerals0', 'lxd', attempts=3, run_as_root=True), mock.call( 'mkfs', '-t', 'ext4', '/dev/lxd/instance-00000001-ephemerals0', run_as_root=True), mock.call( 'mount', '-t', 'ext4', '/dev/lxd/instance-00000001-ephemerals0', '/i/instance-00000001/storage/ephemerals0', run_as_root=True), mock.call( 'chown', '165536', '/i/instance-00000001/storage/ephemerals0', run_as_root=True)] self.assertEqual(expected_calls, execute.call_args_list) class TestDetachEphemeral(test.NoDBTestCase): """Tests for nova.virt.lxd.storage.detach_ephemeral.""" @mock.patch.object(storage.utils, 'execute') @mock.patch( 'nova.virt.lxd.storage.driver.block_device_info_get_ephemerals') def test_remove_ephemeral_with_zfs( self, block_device_info_get_ephemerals, execute): block_device_info_get_ephemerals.return_value = [ {'virtual_name': 'ephemerals0'}] ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) block_device_info = mock.Mock() lxd_config = {'environment': {'storage': 'zfs'}, 'config': {'storage.zfs_pool_name': 'zfs'}} storage.detach_ephemeral(block_device_info, lxd_config, instance) block_device_info_get_ephemerals.assert_called_once_with( block_device_info) expected_calls = [ mock.call('zfs', 'destroy', 'zfs/instance-00000001-ephemeral', run_as_root=True) ] self.assertEqual(expected_calls, execute.call_args_list) @mock.patch.object(storage.utils, 'execute') @mock.patch( 'nova.virt.lxd.storage.driver.block_device_info_get_ephemerals') def test_remove_ephemeral_with_lvm( self, block_device_info_get_ephemerals, execute): block_device_info_get_ephemerals.return_value = [ {'virtual_name': 'ephemerals0'}] ctx = context.get_admin_context() instance = fake_instance.fake_instance_obj( ctx, name='test', memory_mb=0) block_device_info = mock.Mock() lxd_config = {'environment': {'storage': 'lvm'}, 'config': {'storage.lvm_vg_name': 'lxd'}} storage.detach_ephemeral(block_device_info, lxd_config, instance) block_device_info_get_ephemerals.assert_called_once_with( block_device_info) expected_calls = [ mock.call( 'umount', '/dev/lxd/instance-00000001-ephemerals0', run_as_root=True), mock.call('lvremove', '-f', '/dev/lxd/instance-00000001-ephemerals0', run_as_root=True) ] self.assertEqual(expected_calls, execute.call_args_list) nova-lxd-17.0.0/nova/tests/unit/virt/lxd/test_vif.py0000666000175100017510000003100413246266025022415 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import context from nova import exception from nova.network import model as network_model from nova import test from nova.tests.unit import fake_instance from nova.virt.lxd import vif GATEWAY = network_model.IP(address='101.168.1.1', type='gateway') DNS_BRIDGE = network_model.IP(address='8.8.8.8', type=None) SUBNET = network_model.Subnet( cidr='101.168.1.0/24', dns=[DNS_BRIDGE], gateway=GATEWAY, routes=None, dhcp_server='191.168.1.1') NETWORK = network_model.Network( id='ab7b876b-2c1c-4bb2-afa1-f9f4b6a28053', bridge='br0', label=None, subnets=[SUBNET], bridge_interface=None, vlan=99, mtu=1000) OVS_VIF = network_model.VIF( id='da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', address='ca:fe:de:ad:be:ef', network=NETWORK, type=network_model.VIF_TYPE_OVS, devname='tapda5cc4bf-f1', ovs_interfaceid='7b6812a6-b044-4596-b3c5-43a8ec431638', details={network_model.VIF_DETAILS_OVS_HYBRID_PLUG: False}) OVS_HYBRID_VIF = network_model.VIF( id='da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', address='ca:fe:de:ad:be:ef', network=NETWORK, type=network_model.VIF_TYPE_OVS, devname='tapda5cc4bf-f1', ovs_interfaceid='7b6812a6-b044-4596-b3c5-43a8ec431638', details={network_model.VIF_DETAILS_OVS_HYBRID_PLUG: True}) TAP_VIF = network_model.VIF( id='da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', address='ca:fe:de:ad:be:ee', network=NETWORK, type=network_model.VIF_TYPE_TAP, devname='tapda5cc4bf-f1', details={'mac_address': 'aa:bb:cc:dd:ee:ff'}) LB_VIF = network_model.VIF( id='da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', address='ca:fe:de:ad:be:ed', network=NETWORK, type=network_model.VIF_TYPE_BRIDGE, devname='tapda5cc4bf-f1') INSTANCE = fake_instance.fake_instance_obj( context.get_admin_context(), name='test') class GetVifDevnameTest(test.NoDBTestCase): """Tests for get_vif_devname.""" def test_get_vif_devname_devname_exists(self): an_vif = { 'id': 'da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', 'devname': 'oth1', } devname = vif.get_vif_devname(an_vif) self.assertEqual('oth1', devname) def test_get_vif_devname_devname_nonexistent(self): an_vif = { 'id': 'da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', } devname = vif.get_vif_devname(an_vif) self.assertEqual('nicda5cc4bf-f1', devname) class GetConfigTest(test.NoDBTestCase): """Tests for get_config.""" def setUp(self): super(GetConfigTest, self).setUp() self.CONF_patcher = mock.patch('nova.virt.lxd.vif.CONF') self.CONF = self.CONF_patcher.start() self.CONF.firewall_driver = 'nova.virt.firewall.NoopFirewallDriver' def tearDown(self): super(GetConfigTest, self).tearDown() self.CONF_patcher.stop() def test_get_config_bad_vif_type(self): """Unsupported vif types raise an exception.""" an_vif = network_model.VIF( id='da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', address='ca:fe:de:ad:be:ef', network=NETWORK, type='invalid', devname='tapda5cc4bf-f1', ovs_interfaceid='7b6812a6-b044-4596-b3c5-43a8ec431638') self.assertRaises( exception.NovaException, vif.get_config, an_vif) def test_get_config_bridge(self): expected = {'bridge': 'br0', 'mac_address': 'ca:fe:de:ad:be:ef'} an_vif = network_model.VIF( id='da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', address='ca:fe:de:ad:be:ef', network=NETWORK, type='bridge', devname='tapda5cc4bf-f1', ovs_interfaceid='7b6812a6-b044-4596-b3c5-43a8ec431638') config = vif.get_config(an_vif) self.assertEqual(expected, config) def test_get_config_ovs_bridge(self): expected = { 'bridge': 'br0', 'mac_address': 'ca:fe:de:ad:be:ef'} an_vif = network_model.VIF( id='da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', address='ca:fe:de:ad:be:ef', network=NETWORK, type='ovs', devname='tapda5cc4bf-f1', ovs_interfaceid='7b6812a6-b044-4596-b3c5-43a8ec431638') config = vif.get_config(an_vif) self.assertEqual(expected, config) def test_get_config_ovs_hybrid(self): self.CONF.firewall_driver = 'AnFirewallDriver' expected = { 'bridge': 'qbrda5cc4bf-f1', 'mac_address': 'ca:fe:de:ad:be:ef'} an_vif = network_model.VIF( id='da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', address='ca:fe:de:ad:be:ef', network=NETWORK, type='ovs', devname='tapda5cc4bf-f1', ovs_interfaceid='7b6812a6-b044-4596-b3c5-43a8ec431638') config = vif.get_config(an_vif) self.assertEqual(expected, config) def test_get_config_tap(self): expected = {'mac_address': 'ca:fe:de:ad:be:ef'} an_vif = network_model.VIF( id='da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', address='ca:fe:de:ad:be:ef', network=NETWORK, type='tap', devname='tapda5cc4bf-f1', ovs_interfaceid='7b6812a6-b044-4596-b3c5-43a8ec431638') config = vif.get_config(an_vif) self.assertEqual(expected, config) class LXDGenericVifDriverTest(test.NoDBTestCase): """Tests for LXDGenericVifDriver.""" def setUp(self): super(LXDGenericVifDriverTest, self).setUp() self.vif_driver = vif.LXDGenericVifDriver() @mock.patch.object(vif, '_post_plug_wiring') @mock.patch('nova.virt.lxd.vif.linux_net') @mock.patch('nova.virt.lxd.vif.os_vif') def test_plug_ovs(self, os_vif, linux_net, _post_plug_wiring): self.vif_driver.plug(INSTANCE, OVS_VIF) self.assertEqual( 'tapda5cc4bf-f1', os_vif.plug.call_args[0][0].vif_name) self.assertEqual( 'instance-00000001', os_vif.plug.call_args[0][1].name) _post_plug_wiring.assert_called_with(INSTANCE, OVS_VIF) @mock.patch.object(vif, '_post_unplug_wiring') @mock.patch('nova.virt.lxd.vif.linux_net') @mock.patch('nova.virt.lxd.vif.os_vif') def test_unplug_ovs(self, os_vif, linux_net, _post_unplug_wiring): self.vif_driver.unplug(INSTANCE, OVS_VIF) self.assertEqual( 'tapda5cc4bf-f1', os_vif.unplug.call_args[0][0].vif_name) self.assertEqual( 'instance-00000001', os_vif.unplug.call_args[0][1].name) _post_unplug_wiring.assert_called_with(INSTANCE, OVS_VIF) @mock.patch.object(vif, '_post_plug_wiring') @mock.patch.object(vif, '_create_veth_pair') @mock.patch('nova.virt.lxd.vif.os_vif') def test_plug_tap(self, os_vif, _create_veth_pair, _post_plug_wiring): self.vif_driver.plug(INSTANCE, TAP_VIF) os_vif.plug.assert_not_called() _create_veth_pair.assert_called_with('tapda5cc4bf-f1', 'tinda5cc4bf-f1', 1000) _post_plug_wiring.assert_called_with(INSTANCE, TAP_VIF) @mock.patch.object(vif, '_post_unplug_wiring') @mock.patch('nova.virt.lxd.vif.linux_net') @mock.patch('nova.virt.lxd.vif.os_vif') def test_unplug_tap(self, os_vif, linux_net, _post_unplug_wiring): self.vif_driver.unplug(INSTANCE, TAP_VIF) os_vif.plug.assert_not_called() linux_net.delete_net_dev.assert_called_with('tapda5cc4bf-f1') _post_unplug_wiring.assert_called_with(INSTANCE, TAP_VIF) class PostPlugTest(test.NoDBTestCase): """Tests for post plug operations""" def setUp(self): super(PostPlugTest, self).setUp() @mock.patch('nova.virt.lxd.vif._create_veth_pair') @mock.patch('nova.virt.lxd.vif._add_bridge_port') @mock.patch('nova.virt.lxd.vif.linux_net') def test_post_plug_ovs_hybrid(self, linux_net, add_bridge_port, create_veth_pair): linux_net.device_exists.return_value = False vif._post_plug_wiring(INSTANCE, OVS_HYBRID_VIF) linux_net.device_exists.assert_called_with('tapda5cc4bf-f1') create_veth_pair.assert_called_with('tapda5cc4bf-f1', 'tinda5cc4bf-f1', 1000) add_bridge_port.assert_called_with('qbrda5cc4bf-f1', 'tapda5cc4bf-f1') @mock.patch('nova.virt.lxd.vif._create_veth_pair') @mock.patch('nova.virt.lxd.vif._add_bridge_port') @mock.patch('nova.virt.lxd.vif.linux_net') def test_post_plug_ovs(self, linux_net, add_bridge_port, create_veth_pair): linux_net.device_exists.return_value = False vif._post_plug_wiring(INSTANCE, OVS_VIF) linux_net.device_exists.assert_called_with('tapda5cc4bf-f1') create_veth_pair.assert_called_with('tapda5cc4bf-f1', 'tinda5cc4bf-f1', 1000) add_bridge_port.assert_not_called() linux_net.create_ovs_vif_port.assert_called_with( 'br0', 'tapda5cc4bf-f1', 'da5cc4bf-f16c-4807-a0b6-911c7c67c3f8', 'ca:fe:de:ad:be:ef', INSTANCE.uuid, 1000 ) @mock.patch('nova.virt.lxd.vif._create_veth_pair') @mock.patch('nova.virt.lxd.vif._add_bridge_port') @mock.patch('nova.virt.lxd.vif.linux_net') def test_post_plug_bridge(self, linux_net, add_bridge_port, create_veth_pair): linux_net.device_exists.return_value = False vif._post_plug_wiring(INSTANCE, LB_VIF) linux_net.device_exists.assert_called_with('tapda5cc4bf-f1') create_veth_pair.assert_called_with('tapda5cc4bf-f1', 'tinda5cc4bf-f1', 1000) add_bridge_port.assert_called_with('br0', 'tapda5cc4bf-f1') @mock.patch('nova.virt.lxd.vif._create_veth_pair') @mock.patch('nova.virt.lxd.vif._add_bridge_port') @mock.patch('nova.virt.lxd.vif.linux_net') def test_post_plug_tap(self, linux_net, add_bridge_port, create_veth_pair): linux_net.device_exists.return_value = False vif._post_plug_wiring(INSTANCE, TAP_VIF) linux_net.device_exists.assert_not_called() class PostUnplugTest(test.NoDBTestCase): """Tests for post unplug operations""" @mock.patch('nova.virt.lxd.vif.linux_net') def test_post_unplug_ovs_hybrid(self, linux_net): vif._post_unplug_wiring(INSTANCE, OVS_HYBRID_VIF) linux_net.delete_net_dev.assert_called_with('tapda5cc4bf-f1') @mock.patch('nova.virt.lxd.vif.linux_net') def test_post_unplug_ovs(self, linux_net): vif._post_unplug_wiring(INSTANCE, OVS_VIF) linux_net.delete_ovs_vif_port.assert_called_with('br0', 'tapda5cc4bf-f1', True) @mock.patch('nova.virt.lxd.vif.linux_net') def test_post_unplug_bridge(self, linux_net): vif._post_unplug_wiring(INSTANCE, LB_VIF) linux_net.delete_net_dev.assert_called_with('tapda5cc4bf-f1') class MiscHelpersTest(test.NoDBTestCase): """Misc tests for vif module""" def test_is_ovs_vif_port(self): self.assertTrue(vif._is_ovs_vif_port(OVS_VIF)) self.assertFalse(vif._is_ovs_vif_port(OVS_HYBRID_VIF)) self.assertFalse(vif._is_ovs_vif_port(TAP_VIF)) @mock.patch.object(vif, 'utils') def test_add_bridge_port(self, utils): vif._add_bridge_port('br-int', 'tapXYZ') utils.execute.assert_called_with('brctl', 'addif', 'br-int', 'tapXYZ', run_as_root=True) nova-lxd-17.0.0/nova/__init__.py0000666000175100017510000000007013246266025016434 0ustar zuulzuul00000000000000__import__('pkg_resources').declare_namespace(__name__) nova-lxd-17.0.0/nova_lxd.egg-info/0000775000175100017510000000000013246266437016674 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd.egg-info/not-zip-safe0000664000175100017510000000000113246266401021111 0ustar zuulzuul00000000000000 nova-lxd-17.0.0/nova_lxd.egg-info/SOURCES.txt0000664000175100017510000000534113246266437020563 0ustar zuulzuul00000000000000.coveragerc .mailmap .stestr.conf .zuul.yaml AUTHORS CONTRIBUTING.rst ChangeLog HACKING.rst LICENSE MANIFEST.in README.md babel.cfg openstack-common.conf requirements.txt run_tests.sh setup.cfg setup.py test-requirements.txt tox.ini contrib/ci/post_test_hook.sh contrib/ci/pre_test_hook.sh contrib/glance_metadefs/compute-lxd-flavor.json contrib/tempest/README.1st contrib/tempest/run_tempest_lxd.sh devstack/local.conf.sample devstack/override-defaults devstack/plugin.sh devstack/settings devstack/tempest-dsvm-lxd-rc doc/source/conf.py doc/source/contributing.rst doc/source/exclusive_machine.rst doc/source/index.rst doc/source/usage.rst doc/source/vif_wiring.rst etc/nova/rootwrap.d/lxd.filters nova/__init__.py nova/tests/unit/virt/lxd/__init__.py nova/tests/unit/virt/lxd/fake_api.py nova/tests/unit/virt/lxd/stubs.py nova/tests/unit/virt/lxd/test_common.py nova/tests/unit/virt/lxd/test_driver.py nova/tests/unit/virt/lxd/test_flavor.py nova/tests/unit/virt/lxd/test_migrate.py nova/tests/unit/virt/lxd/test_session.py nova/tests/unit/virt/lxd/test_storage.py nova/tests/unit/virt/lxd/test_vif.py nova/virt/__init__.py nova/virt/lxd/__init__.py nova/virt/lxd/common.py nova/virt/lxd/driver.py nova/virt/lxd/flavor.py nova/virt/lxd/session.py nova/virt/lxd/storage.py nova/virt/lxd/vif.py nova_lxd.egg-info/PKG-INFO nova_lxd.egg-info/SOURCES.txt nova_lxd.egg-info/dependency_links.txt nova_lxd.egg-info/entry_points.txt nova_lxd.egg-info/not-zip-safe nova_lxd.egg-info/pbr.json nova_lxd.egg-info/requires.txt nova_lxd.egg-info/top_level.txt nova_lxd_tempest_plugin/README nova_lxd_tempest_plugin/__init__.py nova_lxd_tempest_plugin/plugin.py nova_lxd_tempest_plugin/tests/__init__.py nova_lxd_tempest_plugin/tests/api/__init__.py nova_lxd_tempest_plugin/tests/api/compute/__init__.py nova_lxd_tempest_plugin/tests/api/compute/servers/__init__.py nova_lxd_tempest_plugin/tests/api/compute/servers/test_create_server.py nova_lxd_tempest_plugin/tests/api/compute/servers/test_servers.py nova_lxd_tempest_plugin/tests/api/compute/volumes/__init__.py nova_lxd_tempest_plugin/tests/api/compute/volumes/test_attach_volume.py nova_lxd_tempest_plugin/tests/scenario/__init__.py nova_lxd_tempest_plugin/tests/scenario/manager.py nova_lxd_tempest_plugin/tests/scenario/test_server_basic_ops.py nova_lxd_tempest_plugin/tests/scenario/test_volume_ops.py specs/todo.txt tools/abandon_old_reviews.sh tools/clean-vlans tools/colorizer.py tools/enable-pre-commit-hook.sh tools/install_venv.py tools/install_venv_common.py tools/nova-manage.bash_completion tools/pretty_tox.sh tools/regression_tester.py tools/with_venv.sh tools/config/README tools/config/analyze_opts.py tools/config/check_uptodate.sh tools/config/generate_sample.sh tools/config/oslo.config.generator.rc tools/db/schema_diff.pynova-lxd-17.0.0/nova_lxd.egg-info/pbr.json0000664000175100017510000000005613246266434020350 0ustar zuulzuul00000000000000{"git_version": "bdf2752", "is_release": true}nova-lxd-17.0.0/nova_lxd.egg-info/entry_points.txt0000664000175100017510000000013213246266434022163 0ustar zuulzuul00000000000000[tempest.test_plugins] nova-lxd-tempest-plugin = nova_lxd_tempest_plugin.plugin:MyPlugin nova-lxd-17.0.0/nova_lxd.egg-info/top_level.txt0000664000175100017510000000006113246266434021420 0ustar zuulzuul00000000000000nova/tests nova/virt/lxd nova_lxd_tempest_plugin nova-lxd-17.0.0/nova_lxd.egg-info/PKG-INFO0000664000175100017510000000535513246266434017776 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: nova-lxd Version: 17.0.0 Summary: native lxd driver for openstack Home-page: https://www.openstack.org/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN Description-Content-Type: UNKNOWN Description: # nova-lxd [![Build Status](https://travis-ci.org/lxc/nova-lxd.svg?branch=master)](https://travis-ci.org/lxc/nova-lxd) An OpenStack Compute driver for managing containers using LXD. ## nova-lxd on Devstack For development purposes, nova-lxd provides a devstack plugin. To use it, just include the following in your devstack `local.conf`: ``` [[local|localrc]] enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd ``` Change git repositories as needed (it's probably not very useful to point to the main nova-lxd repo). If you have a local tree you'd like to use, you can symlink your tree to `/opt/stack/nova-lxd` and do your development from there. The devstack default images won't work with lxd, as lxd doesn't support them. Once your stack is up and you've configured authentication against your devstack, do the following:: ``` wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz glance image-create --name xenial --disk-format raw --container-format bare --file xenial-server-cloudimg-amd64-root.tar.gz ``` You can test your configuration using the exercise scripts in devstack. For instance, ``` DEFAULT_IMAGE_NAME=xenial ./exercises/volumes.sh ``` Please note: the exercise scripts in devstack likely won't work, as they have requirements for using the cirros images. # Support and discussions We use the LXC mailing-lists for developer and user discussions, you can find and subscribe to those at: https://lists.linuxcontainers.org If you prefer live discussions, some of us also hang out in [#lxcontainers](http://webchat.freenode.net/?channels=#lxcontainers) on irc.freenode.net. ## Bug reports Bug reports can be filed at https://bugs.launchpad.net/nova-lxd Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 nova-lxd-17.0.0/nova_lxd.egg-info/dependency_links.txt0000664000175100017510000000000113246266434022737 0ustar zuulzuul00000000000000 nova-lxd-17.0.0/nova_lxd.egg-info/requires.txt0000664000175100017510000000025013246266434021266 0ustar zuulzuul00000000000000pbr!=2.1.0,>=2.0.0 os-brick>=2.2.0 os-vif!=1.8.0,>=1.7.0 oslo.config>=5.1.0 oslo.concurrency>=3.25.0 oslo.utils>=3.33.0 oslo.i18n>=3.15.3 oslo.log>=3.36.0 pylxd>=2.2.4 nova-lxd-17.0.0/README.md0000666000175100017510000000331413246266025014643 0ustar zuulzuul00000000000000# nova-lxd [![Build Status](https://travis-ci.org/lxc/nova-lxd.svg?branch=master)](https://travis-ci.org/lxc/nova-lxd) An OpenStack Compute driver for managing containers using LXD. ## nova-lxd on Devstack For development purposes, nova-lxd provides a devstack plugin. To use it, just include the following in your devstack `local.conf`: ``` [[local|localrc]] enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd ``` Change git repositories as needed (it's probably not very useful to point to the main nova-lxd repo). If you have a local tree you'd like to use, you can symlink your tree to `/opt/stack/nova-lxd` and do your development from there. The devstack default images won't work with lxd, as lxd doesn't support them. Once your stack is up and you've configured authentication against your devstack, do the following:: ``` wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz glance image-create --name xenial --disk-format raw --container-format bare --file xenial-server-cloudimg-amd64-root.tar.gz ``` You can test your configuration using the exercise scripts in devstack. For instance, ``` DEFAULT_IMAGE_NAME=xenial ./exercises/volumes.sh ``` Please note: the exercise scripts in devstack likely won't work, as they have requirements for using the cirros images. # Support and discussions We use the LXC mailing-lists for developer and user discussions, you can find and subscribe to those at: https://lists.linuxcontainers.org If you prefer live discussions, some of us also hang out in [#lxcontainers](http://webchat.freenode.net/?channels=#lxcontainers) on irc.freenode.net. ## Bug reports Bug reports can be filed at https://bugs.launchpad.net/nova-lxd nova-lxd-17.0.0/devstack/0000775000175100017510000000000013246266437015174 5ustar zuulzuul00000000000000nova-lxd-17.0.0/devstack/settings0000666000175100017510000000023113246266025016746 0ustar zuulzuul00000000000000# Add nova-lxd to enabled services enable_service nova-lxd # LXD install/upgrade settings INSTALL_LXD=${INSTALL_LXD:-False} LXD_GROUP=${LXD_GROUP:-lxd} nova-lxd-17.0.0/devstack/tempest-dsvm-lxd-rc0000666000175100017510000001203713246266025020734 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # This script is executed in the OpenStack CI *tempest-dsvm-lxd job. # It's used to configure which tempest tests actually get run. You can find # the CI job configuration here: # # http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml # # Construct a regex to use when limiting scope of tempest # to avoid features unsupported by Nova's LXD support. # Note that several tests are disabled by the use of tempest # feature toggles in devstack/lib/tempest for an lxd config, # so this regex is not entirely representative of what's excluded. # When adding entries to the regex, add a comment explaining why # since this list should not grow. r="^(?!.*" r="$r(?:.*\[.*\bslow\b.*\])" # (zulcss) nova-lxd does not support booting ami/aki images r="$r|(?:tempest\.scenario\.test_minimum_basic\.TestMinimumBasicScenario\.test_minimum_basic_scenario)" # XXX: zulcss (18 Oct 2016) nova-lxd does not support booting from ebs volumes r="$r|(?:tempest\.scenario\.test_volume_boot_pattern.*)" r="$r|(?:tempest\.api\.compute\.servers\.test_create_server\.ServersTestBootFromVolume)" # XXX: zulcss (18 Oct 2016) tempest test only passes when there is more than 10 lines in the # console output, and cirros LXD consoles have only a single line of output r="$r|(?:tempest\.api\.compute\.servers\.test_server_actions\.ServerActionsTestJSON\.test_get_console_output_with_unlimited_size)" # tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_get_console_output_with_unlimited_size # also tempest get console fails for the following two for length of output reasons r="$r|(?:tempest\.api\.compute\.servers\.test_server_actions\.ServerActionsTestJSON\.test_get_console_output)" # tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_get_console_output r="$r|(?:tempest\.api\.compute\.servers\.test_server_actions\.ServerActionsTestJSON\.test_get_console_output_server_id_in_shutoff_status)" # tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_get_console_output_server_id_in_shutoff_status # XXX: jamespage (09 June 2017) veth pair nics not detected/configured by tempest # https://review.openstack.org/#/c/472641/ # XXX: jamespage (09 June 2017) instance not accessible via floating IP. r="$r|(?:tempest\.scenario\.test_network_v6\.TestGettingAddress\.test_dualnet_multi_prefix_dhcpv6_stateless)" r="$r|(?:tempest\.scenario\.test_network_v6\.TestGettingAddress\.test_dualnet_multi_prefix_slaac)" #tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_dhcpv6_stateless #tempest.scenario.test_network_v6.TestGettingAddress.test_dualnet_multi_prefix_slaac # XXX: zulcss (18 Oct 2016) Could not connect to instance #r="$r|(?:tempest\.scenario\.test_network_advanced_server_ops\.TestNetworkAdvancedServerOps\.test_server_connectivity_suspend_resume)" # XXX: jamespage (08 June 2017): test failures with a mismatch in the number of disks reported r="$r|(?:tempest\.api\.compute\.admin\.test_create_server\.ServersWithSpecificFlavorTestJSON\.test_verify_created_server_ephemeral_disk)" #tempest.api.compute.admin.test_create_server.ServersWithSpecificFlavorTestJSON.test_verify_created_server_ephemeral_disk # XXX: jamespage (08 June 2017): nova-lxd driver does not support device tagging r="$r|(?:tempest\.api\.compute\.servers\.test_device_tagging.*)" #tempest.api.compute.servers.test_device_tagging.DeviceTaggingTestV2_42.test_device_tagging #tempest.api.compute.servers.test_device_tagging.DeviceTaggingTestV2_42.test_device_tagging # XXX: jamespage (08 June 2017): mismatching output on LXD instance use-case #tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume #tempest.api.compute.volumes.test_attach_volume.AttachVolumeShelveTestJSON.test_attach_detach_volume r="$r|(?:tempest\.api\.compute\.volumes\.test_attach_volume\.AttachVolumeTestJSON\.test_attach_detach_volume)" r="$r|(?:tempest\.api\.compute\.volumes\.test_attach_volume\.AttachVolumeShelveTestJSON\.test_attach_detach_volume)" #testtools.matchers._impl.MismatchError: u'NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT\nsda 8:0 0 1073741824 0 disk \nsdb 8:16 0 1073741824 0 disk \nvda 253:0 0 85899345920 0 disk \nvdb 253:16 0 42949672960 0 disk ' matches Contains('\nsdb ') # XXX: jamespage (26 June 2017): disable diagnostic checks until driver implements them # https://bugs.launchpad.net/nova-lxd/+bug/1700516 r="$r|(?:.*test_get_server_diagnostics.*)" #test_get_server_diagnostics r="$r).*$" export DEVSTACK_GATE_TEMPEST_REGEX="$r" nova-lxd-17.0.0/devstack/local.conf.sample0000666000175100017510000000127113246266025020411 0ustar zuulzuul00000000000000[[local|localrc]] HOST_IP=10.5.18.185 # set this to your IP FLAT_INTERFACE=ens2 # change this to your eth0 DATABASE_PASSWORD=password RABBIT_PASSWORD=password SERVICE_PASSWORD=password SERVICE_TOKEN=password ADMIN_PASSWORD=password # run the services you want to use ENABLED_SERVICES=rabbit,mysql,key ENABLED_SERVICES+=,g-api,g-reg ENABLED_SERVICES+=,n-cpu,n-api,n-crt,n-obj,n-cond,n-sch,n-novnc,n-cauth,placement-api,placement-client ENABLED_SERVICES+=,neutron,q-svc,q-agt,q-dhcp,q-meta,q-l3 ENABLED_SERVICES+=,cinder,c-sch,c-api,c-vol ENABLED_SERVICES+=,horizon # disabled services disable_service n-net # enable nova-lxd enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd nova-lxd-17.0.0/devstack/override-defaults0000666000175100017510000000004413246266025020534 0ustar zuulzuul00000000000000# Plug-in overrides VIRT_DRIVER=lxd nova-lxd-17.0.0/devstack/plugin.sh0000777000175100017510000001362513246266025017033 0ustar zuulzuul00000000000000#!/bin/bash # Save trace setting MY_XTRACE=$(set +o | grep xtrace) set +o xtrace # Defaults # -------- # Set up base directories NOVA_DIR=${NOVA_DIR:-$DEST/nova} NOVA_CONF_DIR=${NOVA_CONF_DIR:-/etc/nova} NOVA_CONF=${NOVA_CONF:-NOVA_CONF_DIR/nova.conf} # Configure LXD storage backends LXD_BACKEND_DRIVER=${LXD_BACKEND_DRIVER:default} LXD_DISK_IMAGE=${DATA_DIR}/lxd.img LXD_ZFS_ZPOOL=devstack LXD_LOOPBACK_DISK_SIZE=${LXD_LOOPBACK_DISK_SIZE:-8G} # nova-lxd directories NOVA_COMPUTE_LXD_DIR=${NOVA_COMPUTE_LXD_DIR:-${DEST}/nova-lxd} NOVA_COMPUTE_LXD_PLUGIN_DIR=$(readlink -f $(dirname ${BASH_SOURCE[0]})) # glance directories GLANCE_CONF_DIR=${GLANCE_CONF_DIR:-/etc/glance} GLANCE_API_CONF=$GLANCE_CONF_DIR/glance-api.conf function pre_install_nova-lxd() { # Install OS packages if necessary with "install_package ...". echo_summary "Installing LXD" if is_ubuntu; then if [ "$DISTRO" == "trusty" ]; then sudo add-apt-repository -y ppa:ubuntu-lxc/lxd-stable fi if ! ( is_package_installed lxd ); then install_package lxd fi add_user_to_group $STACK_USER $LXD_GROUP fi } function install_nova-lxd() { # Install the service. setup_develop $NOVA_COMPUTE_LXD_DIR } function configure_nova-lxd() { # Configure the service. iniset $NOVA_CONF DEFAULT compute_driver lxd.LXDDriver iniset $NOVA_CONF DEFAULT force_config_drive False if is_service_enabled glance; then iniset $GLANCE_API_CONF DEFAULT disk_formats "ami,ari,aki,vhd,raw,iso,qcow2,root-tar" iniset $GLANCE_API_CONF DEFAULT container_formats "ami,ari,aki,bare,ovf,tgz" fi # Install the rootwrap sudo install -o root -g root -m 644 $NOVA_COMPUTE_LXD_DIR/etc/nova/rootwrap.d/*.filters $NOVA_CONF_DIR/rootwrap.d } function init_nova-lxd() { # Initialize and start the service. mkdir -p $TOP_DIR/files # Download and install the cirros lxc image CIRROS_IMAGE_FILE=cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxc.tar.gz if [ ! -f $TOP_DIR/files/$CIRROS_IMAGE_FILE ]; then wget --progress=dot:giga \ -c http://download.cirros-cloud.net/${CIRROS_VERSION}/${CIRROS_IMAGE_FILE} \ -O $TOP_DIR/files/${CIRROS_IMAGE_FILE} fi openstack --os-cloud=devstack-admin \ --os-region-name="$REGION_NAME" image create "cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxd" \ --public --container-format bare \ --disk-format raw < $TOP_DIR/files/cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxc.tar.gz if is_service_enabled cinder; then # Enable user namespace for ext4, this has only been tested on xenial+ echo Y | sudo tee /sys/module/ext4/parameters/userns_mounts fi } function test_config_nova-lxd() { # Configure tempest or other tests as required if is_service_enabled tempest; then TEMPEST_CONFIG=${TEMPEST_CONFIG:-$TEMPEST_DIR/etc/tempest.conf} TEMPEST_IMAGE=`openstack image list | grep cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxd | awk {'print $2'}` TEMPEST_IMAGE_ALT=$TEMPEST_IMAGE iniset $TEMPEST_CONFIG image disk_formats "ami,ari,aki,vhd,raw,iso,root-tar" iniset $TEMPEST_CONFIG compute volume_device_name sdb # TODO(jamespage): Review and update iniset $TEMPEST_CONFIG compute-feature-enabled shelve False iniset $TEMPEST_CONFIG compute-feature-enabled resize False iniset $TEMPEST_CONFIG compute-feature-enabled config_drive False iniset $TEMPEST_CONFIG compute-feature-enabled attach_encrypted_volume False iniset $TEMPEST_CONFIG compute-feature-enabled vnc_console False iniset $TEMPEST_CONFIG compute image_ref $TEMPEST_IMAGE iniset $TEMPEST_CONFIG compute image_ref_alt $TEMPEST_IMAGE_ALT iniset $TEMPEST_CONFIG scenario img_file cirros-${CIRROS_VERSION}-${CIRROS_ARCH}-lxc.tar.gz fi } function configure_lxd_block() { echo_summary "Configure LXD storage backend" if is_ubuntu; then if [ "$LXD_BACKEND_DRIVER" == "default" ]; then echo "Nothing to be done" elif [ "$LXD_BACKEND_DRIVER" == "zfs" ]; then echo "Configuring ZFS backend" truncate -s $LXD_LOOPBACK_DISK_SIZE $LXD_DISK_IMAGE sudo apt-get install -y zfs lxd_dev=`sudo losetup --show -f ${LXD_DISK_IMAGE}` sudo lxd init --auto --storage-backend zfs --storage-pool $LXD_ZFS_ZPOOL \ --storage-create-device $lxd_dev fi fi } function shutdown_nova-lxd() { # Shut the service down. : } function cleanup_nova-lxd() { # Cleanup the service. if [ "$LXD_BACKEND_DRIVER" == "zfs" ]; then sudo zpool destroy ${LXD_ZFS_ZPOOL} fi } if is_service_enabled nova-lxd; then if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then # Set up system services echo_summary "Configuring system services nova-lxd" pre_install_nova-lxd configure_lxd_block elif [[ "$1" == "stack" && "$2" == "install" ]]; then # Perform installation of service source echo_summary "Installing nova-lxd" install_nova-lxd elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then # Configure after the other layer 1 and 2 services have been configured echo_summary "Configuring nova-lxd" configure_nova-lxd elif [[ "$1" == "stack" && "$2" == "extra" ]]; then # Initialize and start the nova-lxd service echo_summary "Initializing nova-lxd" init_nova-lxd elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then # Configure any testing configuration echo_summary "Test configuration - nova-lxd" test_config_nova-lxd fi if [[ "$1" == "unstack" ]]; then # Shut down nova-lxd services # no-op shutdown_nova-lxd fi if [[ "$1" == "clean" ]]; then # Remove state and transient data # Remember clean.sh first calls unstack.sh # no-op cleanup_nova-lxd fi fi nova-lxd-17.0.0/test-requirements.txt0000666000175100017510000000115613246266025017627 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0 coverage!=4.4,>=4.0 # Apache-2.0 ddt>=1.0.1 # MIT python-subunit>=1.0.0 # Apache-2.0/BSD sphinx!=1.6.6,>=1.6.2 # BSD oslosphinx>=4.7.0 # Apache-2.0 oslotest>=3.2.0 # Apache-2.0 testrepository>=0.0.18 # Apache-2.0/BSD testscenarios>=0.4 # Apache-2.0/BSD testtools>=2.2.0 # MIT os-testr>=1.0.0 # Apache-2.0 nosexcover>=1.0.10 # BSD wsgi-intercept>=1.4.1 # MIT License nova-lxd-17.0.0/tox.ini0000666000175100017510000000341513246266025014701 0ustar zuulzuul00000000000000[tox] minversion = 2.0 envlist = py{35,27},pep8 skipsdist = True [testenv] usedevelop = True install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages} setenv = VIRTUAL_ENV={envdir} EVENTLET_NO_GREENDNS=yes PYTHONDONTWRITEBYTECODE=1 LANGUAGE=en_US LC_ALL=en_US.utf-8 deps = -r{toxinidir}/requirements.txt -r{toxinidir}/test-requirements.txt -egit+https://github.com/openstack/nova.git#egg=nova whitelist_externals = bash find rm env commands = find . -type f -name "*.pyc" -delete rm -Rf .testrepository/times.dbm passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY OS_DEBUG GENERATE_HASHES [testenv:py27] commands = {[testenv]commands} /bin/cp -r {toxinidir}/nova/virt/lxd/ {toxinidir}/.tox/py27/src/nova/nova/virt/ ostestr '{posargs}' [testenv:py35] commands = {[testenv]commands} /bin/cp -r {toxinidir}/nova/virt/lxd/ {toxinidir}/.tox/py35/src/nova/nova/virt/ ostestr '{posargs}' [testenv:pep8] basepython = python2.7 deps = {[testenv]deps} commands = flake8 {toxinidir}/nova [testenv:venv] commands = {posargs} [testenv:cover] # Also do not run test_coverage_ext tests while gathering coverage as those # tests conflict with coverage. commands = coverage erase find . -type f -name "*.pyc" -delete python setup.py testr --coverage --testr-args='{posargs}' coverage report [testenv:docs] commands = python setup.py build_sphinx [flake8] # H803 skipped on purpose per list discussion. # E123, E125 skipped as they are invalid PEP-8. show-source = True ignore = E123,E125,H803,H904,H405,H404,H305,H306,H307 builtins = _ exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,tools/colorizer.py nova-lxd-17.0.0/openstack-common.conf0000666000175100017510000000020713246266025017506 0ustar zuulzuul00000000000000[DEFAULT] # The list of modules to copy from oslo-incubator.git # The base module to hold the copy of openstack.common base=nova-lxd nova-lxd-17.0.0/PKG-INFO0000664000175100017510000000535513246266437014475 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: nova-lxd Version: 17.0.0 Summary: native lxd driver for openstack Home-page: https://www.openstack.org/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN Description-Content-Type: UNKNOWN Description: # nova-lxd [![Build Status](https://travis-ci.org/lxc/nova-lxd.svg?branch=master)](https://travis-ci.org/lxc/nova-lxd) An OpenStack Compute driver for managing containers using LXD. ## nova-lxd on Devstack For development purposes, nova-lxd provides a devstack plugin. To use it, just include the following in your devstack `local.conf`: ``` [[local|localrc]] enable_plugin nova-lxd https://git.openstack.org/openstack/nova-lxd ``` Change git repositories as needed (it's probably not very useful to point to the main nova-lxd repo). If you have a local tree you'd like to use, you can symlink your tree to `/opt/stack/nova-lxd` and do your development from there. The devstack default images won't work with lxd, as lxd doesn't support them. Once your stack is up and you've configured authentication against your devstack, do the following:: ``` wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz glance image-create --name xenial --disk-format raw --container-format bare --file xenial-server-cloudimg-amd64-root.tar.gz ``` You can test your configuration using the exercise scripts in devstack. For instance, ``` DEFAULT_IMAGE_NAME=xenial ./exercises/volumes.sh ``` Please note: the exercise scripts in devstack likely won't work, as they have requirements for using the cirros images. # Support and discussions We use the LXC mailing-lists for developer and user discussions, you can find and subscribe to those at: https://lists.linuxcontainers.org If you prefer live discussions, some of us also hang out in [#lxcontainers](http://webchat.freenode.net/?channels=#lxcontainers) on irc.freenode.net. ## Bug reports Bug reports can be filed at https://bugs.launchpad.net/nova-lxd Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 nova-lxd-17.0.0/LICENSE0000666000175100017510000002363613246266025014402 0ustar zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. nova-lxd-17.0.0/.coveragerc0000666000175100017510000000013613246266025015504 0ustar zuulzuul00000000000000[run] branch = True source = nova.virt.lxd omit = nova/tests/* [report] ignore_errors = True nova-lxd-17.0.0/setup.cfg0000666000175100017510000000155013246266437015214 0ustar zuulzuul00000000000000[metadata] name = nova-lxd summary = native lxd driver for openstack description-file = README.md author = OpenStack author-email = openstack-dev@lists.openstack.org home-page = https://www.openstack.org/ classifier = Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.7 [files] packages = nova/virt/lxd nova/tests nova_lxd_tempest_plugin [entry_points] tempest.test_plugins = nova-lxd-tempest-plugin = nova_lxd_tempest_plugin.plugin:MyPlugin [build_sphinx] source-dir = doc/source build-dir = doc/build all_files = 1 [upload_sphinx] upload-dir = doc/build/html [egg_info] tag_build = tag_date = 0 nova-lxd-17.0.0/tools/0000775000175100017510000000000013246266437014530 5ustar zuulzuul00000000000000nova-lxd-17.0.0/tools/nova-manage.bash_completion0000666000175100017510000000214013246266025022001 0ustar zuulzuul00000000000000# bash completion for openstack nova-manage _nova_manage_opts="" # lazy init _nova_manage_opts_exp="" # lazy init # dict hack for bash 3 _set_nova_manage_subopts () { eval _nova_manage_subopts_"$1"='$2' } _get_nova_manage_subopts () { eval echo '${_nova_manage_subopts_'"$1"'#_nova_manage_subopts_}' } _nova_manage() { local cur prev subopts COMPREPLY=() cur="${COMP_WORDS[COMP_CWORD]}" prev="${COMP_WORDS[COMP_CWORD-1]}" if [ "x$_nova_manage_opts" == "x" ] ; then _nova_manage_opts="`nova-manage bash-completion 2>/dev/null`" _nova_manage_opts_exp="`echo $_nova_manage_opts | sed -e "s/\s/|/g"`" fi if [[ " `echo $_nova_manage_opts` " =~ " $prev " ]] ; then if [ "x$(_get_nova_manage_subopts "$prev")" == "x" ] ; then subopts="`nova-manage bash-completion $prev 2>/dev/null`" _set_nova_manage_subopts "$prev" "$subopts" fi COMPREPLY=($(compgen -W "$(_get_nova_manage_subopts "$prev")" -- ${cur})) elif [[ ! " ${COMP_WORDS[@]} " =~ " "($_nova_manage_opts_exp)" " ]] ; then COMPREPLY=($(compgen -W "${_nova_manage_opts}" -- ${cur})) fi return 0 } complete -F _nova_manage nova-manage nova-lxd-17.0.0/tools/config/0000775000175100017510000000000013246266437015775 5ustar zuulzuul00000000000000nova-lxd-17.0.0/tools/config/oslo.config.generator.rc0000666000175100017510000000022213246266025022517 0ustar zuulzuul00000000000000NOVA_CONFIG_GENERATOR_EXTRA_LIBRARIES="oslo.messaging oslo.db oslo.concurrency" NOVA_CONFIG_GENERATOR_EXTRA_MODULES=keystonemiddleware.auth_token nova-lxd-17.0.0/tools/config/generate_sample.sh0000777000175100017510000000651513246266025021471 0ustar zuulzuul00000000000000#!/usr/bin/env bash print_hint() { echo "Try \`${0##*/} --help' for more information." >&2 } PARSED_OPTIONS=$(getopt -n "${0##*/}" -o hb:p:m:l:o: \ --long help,base-dir:,package-name:,output-dir:,module:,library: -- "$@") if [ $? != 0 ] ; then print_hint ; exit 1 ; fi eval set -- "$PARSED_OPTIONS" while true; do case "$1" in -h|--help) echo "${0##*/} [options]" echo "" echo "options:" echo "-h, --help show brief help" echo "-b, --base-dir=DIR project base directory" echo "-p, --package-name=NAME project package name" echo "-o, --output-dir=DIR file output directory" echo "-m, --module=MOD extra python module to interrogate for options" echo "-l, --library=LIB extra library that registers options for discovery" exit 0 ;; -b|--base-dir) shift BASEDIR=`echo $1 | sed -e 's/\/*$//g'` shift ;; -p|--package-name) shift PACKAGENAME=`echo $1` shift ;; -o|--output-dir) shift OUTPUTDIR=`echo $1 | sed -e 's/\/*$//g'` shift ;; -m|--module) shift MODULES="$MODULES -m $1" shift ;; -l|--library) shift LIBRARIES="$LIBRARIES -l $1" shift ;; --) break ;; esac done BASEDIR=${BASEDIR:-`pwd`} if ! [ -d $BASEDIR ] then echo "${0##*/}: missing project base directory" >&2 ; print_hint ; exit 1 elif [[ $BASEDIR != /* ]] then BASEDIR=$(cd "$BASEDIR" && pwd) fi PACKAGENAME=${PACKAGENAME:-${BASEDIR##*/}} TARGETDIR=$BASEDIR/$PACKAGENAME if ! [ -d $TARGETDIR ] then echo "${0##*/}: invalid project package name" >&2 ; print_hint ; exit 1 fi OUTPUTDIR=${OUTPUTDIR:-$BASEDIR/etc} # NOTE(bnemec): Some projects put their sample config in etc/, # some in etc/$PACKAGENAME/ if [ -d $OUTPUTDIR/$PACKAGENAME ] then OUTPUTDIR=$OUTPUTDIR/$PACKAGENAME elif ! [ -d $OUTPUTDIR ] then echo "${0##*/}: cannot access \`$OUTPUTDIR': No such file or directory" >&2 exit 1 fi BASEDIRESC=`echo $BASEDIR | sed -e 's/\//\\\\\//g'` find $TARGETDIR -type f -name "*.pyc" -delete FILES=$(find $TARGETDIR -type f -name "*.py" ! -path "*/tests/*" \ -exec grep -l "Opt(" {} + | sed -e "s/^$BASEDIRESC\///g" | sort -u) RC_FILE="`dirname $0`/oslo.config.generator.rc" if test -r "$RC_FILE" then source "$RC_FILE" fi for mod in ${NOVA_CONFIG_GENERATOR_EXTRA_MODULES}; do MODULES="$MODULES -m $mod" done for lib in ${NOVA_CONFIG_GENERATOR_EXTRA_LIBRARIES}; do LIBRARIES="$LIBRARIES -l $lib" done export EVENTLET_NO_GREENDNS=yes OS_VARS=$(set | sed -n '/^OS_/s/=[^=]*$//gp' | xargs) [ "$OS_VARS" ] && eval "unset \$OS_VARS" DEFAULT_MODULEPATH=nova.openstack.common.config.generator MODULEPATH=${MODULEPATH:-$DEFAULT_MODULEPATH} OUTPUTFILE=$OUTPUTDIR/$PACKAGENAME.conf.sample python -m $MODULEPATH $MODULES $LIBRARIES $FILES > $OUTPUTFILE # Hook to allow projects to append custom config file snippets CONCAT_FILES=$(ls $BASEDIR/tools/config/*.conf.sample 2>/dev/null) for CONCAT_FILE in $CONCAT_FILES; do cat $CONCAT_FILE >> $OUTPUTFILE done nova-lxd-17.0.0/tools/config/analyze_opts.py0000777000175100017510000000536013246266025021061 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2012, Cloudscaling # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. ''' find_unused_options.py Compare the nova.conf file with the nova.conf.sample file to find any unused options or default values in nova.conf ''' from __future__ import print_function import argparse import os import sys from oslo.config import iniparser sys.path.append(os.getcwd()) class PropertyCollecter(iniparser.BaseParser): def __init__(self): super(PropertyCollecter, self).__init__() self.key_value_pairs = {} def assignment(self, key, value): self.key_value_pairs[key] = value def new_section(self, section): pass @classmethod def collect_properties(cls, lineiter, sample_format=False): def clean_sample(f): for line in f: if line.startswith("#") and not line.startswith("# "): line = line[1:] yield line pc = cls() if sample_format: lineiter = clean_sample(lineiter) pc.parse(lineiter) return pc.key_value_pairs if __name__ == '__main__': parser = argparse.ArgumentParser(description='''Compare the nova.conf file with the nova.conf.sample file to find any unused options or default values in nova.conf''') parser.add_argument('-c', action='store', default='/etc/nova/nova.conf', help='path to nova.conf' ' (defaults to /etc/nova/nova.conf)') parser.add_argument('-s', default='./etc/nova/nova.conf.sample', help='path to nova.conf.sample' ' (defaults to ./etc/nova/nova.conf.sample') options = parser.parse_args() conf_file_options = PropertyCollecter.collect_properties(open(options.c)) sample_conf_file_options = PropertyCollecter.collect_properties( open(options.s), sample_format=True) for k, v in sorted(conf_file_options.items()): if k not in sample_conf_file_options: print("Unused:", k) for k, v in sorted(conf_file_options.items()): if k in sample_conf_file_options and v == sample_conf_file_options[k]: print("Default valued:", k) nova-lxd-17.0.0/tools/config/check_uptodate.sh0000777000175100017510000000125413246266025021313 0ustar zuulzuul00000000000000#!/usr/bin/env bash PROJECT_NAME=${PROJECT_NAME:-nova} CFGFILE_NAME=${PROJECT_NAME}.conf.sample if [ -e etc/${PROJECT_NAME}/${CFGFILE_NAME} ]; then CFGFILE=etc/${PROJECT_NAME}/${CFGFILE_NAME} elif [ -e etc/${CFGFILE_NAME} ]; then CFGFILE=etc/${CFGFILE_NAME} else echo "${0##*/}: can not find config file" exit 1 fi TEMPDIR=`mktemp -d /tmp/${PROJECT_NAME}.XXXXXX` trap "rm -rf $TEMPDIR" EXIT tools/config/generate_sample.sh -b ./ -p ${PROJECT_NAME} -o ${TEMPDIR} if ! diff -u ${TEMPDIR}/${CFGFILE_NAME} ${CFGFILE} then echo "${0##*/}: ${PROJECT_NAME}.conf.sample is not up to date." echo "${0##*/}: Please run ${0%%${0##*/}}generate_sample.sh." exit 1 fi nova-lxd-17.0.0/tools/config/README0000666000175100017510000000134713246266025016655 0ustar zuulzuul00000000000000This generate_sample.sh tool is used to generate etc/nova/nova.conf.sample Run it from the top-level working directory i.e. $> ./tools/config/generate_sample.sh -b ./ -p nova -o etc/nova Watch out for warnings about modules like libvirt, qpid and zmq not being found - these warnings are significant because they result in options not appearing in the generated config file. The analyze_opts.py tool is used to find options which appear in /etc/nova/nova.conf but not in etc/nova/nova.conf.sample This helps identify options in the nova.conf file which are not used by nova. The tool also identifies any options which are set to the default value. Run it from the top-level working directory i.e. $> ./tools/config/analyze_opts.py nova-lxd-17.0.0/tools/clean-vlans0000777000175100017510000000214013246266025016651 0ustar zuulzuul00000000000000#!/usr/bin/env bash # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. export LC_ALL=C sudo ifconfig -a | grep br | grep -v bridge | cut -f1 -d" " | xargs -n1 -ifoo ifconfig foo down sudo ifconfig -a | grep br | grep -v bridge | cut -f1 -d" " | xargs -n1 -ifoo brctl delbr foo sudo ifconfig -a | grep vlan | cut -f1 -d" " | xargs -n1 -ifoo ifconfig foo down sudo ifconfig -a | grep vlan | cut -f1 -d" " | xargs -n1 -ifoo ip link del foo nova-lxd-17.0.0/tools/regression_tester.py0000777000175100017510000000672013246266025020653 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tool for checking if patch contains a regression test. By default runs against current patch but can be set to use any gerrit review as specified by change number (uses 'git review -d'). Idea: take tests from patch to check, and run against code from previous patch. If new tests pass, then no regression test, if new tests fails against old code then either * new tests depend on new code and cannot confirm regression test is valid (false positive) * new tests detects the bug being fixed (detect valid regression test) Due to the risk of false positives, the results from this need some human interpretation. """ from __future__ import print_function import optparse import string import subprocess import sys def run(cmd, fail_ok=False): print("running: %s" % cmd) obj = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) obj.wait() if obj.returncode != 0 and not fail_ok: print("The above command terminated with an error.") sys.exit(obj.returncode) return obj.stdout.read() def main(): usage = """ Tool for checking if a patch includes a regression test. Usage: %prog [options]""" parser = optparse.OptionParser(usage) parser.add_option("-r", "--review", dest="review", help="gerrit review number to test") (options, args) = parser.parse_args() if options.review: original_branch = run("git rev-parse --abbrev-ref HEAD") run("git review -d %s" % options.review) else: print("no gerrit review number specified, running on latest commit" "on current branch.") test_works = False # run new tests with old code run("git checkout HEAD^ nova") run("git checkout HEAD nova/tests") # identify which tests have changed tests = run("git whatchanged --format=oneline -1 | grep \"nova/tests\" " "| cut -f2").split() test_list = [] for test in tests: test_list.append(string.replace(test[0:-3], '/', '.')) if test_list == []: test_works = False expect_failure = "" else: # run new tests, expect them to fail expect_failure = run(("tox -epy27 %s 2>&1" % string.join(test_list)), fail_ok=True) if "FAILED (id=" in expect_failure: test_works = True # cleanup run("git checkout HEAD nova") if options.review: new_branch = run("git status | head -1 | cut -d ' ' -f 4") run("git checkout %s" % original_branch) run("git branch -D %s" % new_branch) print(expect_failure) print("") print("*******************************") if test_works: print("FOUND a regression test") else: print("NO regression test") sys.exit(1) if __name__ == "__main__": main() nova-lxd-17.0.0/tools/enable-pre-commit-hook.sh0000777000175100017510000000232513246266025021322 0ustar zuulzuul00000000000000#!/bin/sh # Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. PRE_COMMIT_SCRIPT=.git/hooks/pre-commit make_hook() { echo "exec ./run_tests.sh -N -p" >> $PRE_COMMIT_SCRIPT chmod +x $PRE_COMMIT_SCRIPT if [ -w $PRE_COMMIT_SCRIPT -a -x $PRE_COMMIT_SCRIPT ]; then echo "pre-commit hook was created successfully" else echo "unable to create pre-commit hook" fi } # NOTE(jk0): Make sure we are in nova's root directory before adding the hook. if [ ! -d ".git" ]; then echo "unable to find .git; moving up a directory" cd .. if [ -d ".git" ]; then make_hook else echo "still unable to find .git; hook not created" fi else make_hook fi nova-lxd-17.0.0/tools/install_venv_common.py0000666000175100017510000001350713246266025021157 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Provides methods needed by installation script for OpenStack development virtual environments. Since this script is used to bootstrap a virtualenv from the system's Python environment, it should be kept strictly compatible with Python 2.6. Synced in from openstack-common """ from __future__ import print_function import optparse import os import subprocess import sys class InstallVenv(object): def __init__(self, root, venv, requirements, test_requirements, py_version, project): self.root = root self.venv = venv self.requirements = requirements self.test_requirements = test_requirements self.py_version = py_version self.project = project def die(self, message, *args): print(message % args, file=sys.stderr) sys.exit(1) def check_python_version(self): if sys.version_info < (2, 6): self.die("Need Python Version >= 2.6") def run_command_with_code(self, cmd, redirect_output=True, check_exit_code=True): """Runs a command in an out-of-process shell. Returns the output of that command. Working directory is self.root. """ if redirect_output: stdout = subprocess.PIPE else: stdout = None proc = subprocess.Popen(cmd, cwd=self.root, stdout=stdout) output = proc.communicate()[0] if check_exit_code and proc.returncode != 0: self.die('Command "%s" failed.\n%s', ' '.join(cmd), output) return (output, proc.returncode) def run_command(self, cmd, redirect_output=True, check_exit_code=True): return self.run_command_with_code(cmd, redirect_output, check_exit_code)[0] def get_distro(self): if (os.path.exists('/etc/fedora-release') or os.path.exists('/etc/redhat-release')): return Fedora( self.root, self.venv, self.requirements, self.test_requirements, self.py_version, self.project) else: return Distro( self.root, self.venv, self.requirements, self.test_requirements, self.py_version, self.project) def check_dependencies(self): self.get_distro().install_virtualenv() def create_virtualenv(self, no_site_packages=True): """Creates the virtual environment and installs PIP. Creates the virtual environment and installs PIP only into the virtual environment. """ if not os.path.isdir(self.venv): print('Creating venv...', end=' ') if no_site_packages: self.run_command(['virtualenv', '-q', '--no-site-packages', self.venv]) else: self.run_command(['virtualenv', '-q', self.venv]) print('done.') else: print("venv already exists...") pass def pip_install(self, *args): self.run_command(['tools/with_venv.sh', 'pip', 'install', '--upgrade'] + list(args), redirect_output=False) def install_dependencies(self): print('Installing dependencies with pip (this can take a while)...') # First things first, make sure our venv has the latest pip and # setuptools and pbr self.pip_install('pip>=1.4') self.pip_install('setuptools') self.pip_install('pbr') self.pip_install('-r', self.requirements, '-r', self.test_requirements) def parse_args(self, argv): """Parses command-line arguments.""" parser = optparse.OptionParser() parser.add_option('-n', '--no-site-packages', action='store_true', help="Do not inherit packages from global Python " "install.") return parser.parse_args(argv[1:])[0] class Distro(InstallVenv): def check_cmd(self, cmd): return bool(self.run_command(['which', cmd], check_exit_code=False).strip()) def install_virtualenv(self): if self.check_cmd('virtualenv'): return if self.check_cmd('easy_install'): print('Installing virtualenv via easy_install...', end=' ') if self.run_command(['easy_install', 'virtualenv']): print('Succeeded') return else: print('Failed') self.die('ERROR: virtualenv not found.\n\n%s development' ' requires virtualenv, please install it using your' ' favorite package management tool' % self.project) class Fedora(Distro): """This covers all Fedora-based distributions. Includes: Fedora, RHEL, CentOS, Scientific Linux """ def check_pkg(self, pkg): return self.run_command_with_code(['rpm', '-q', pkg], check_exit_code=False)[1] == 0 def install_virtualenv(self): if self.check_cmd('virtualenv'): return if not self.check_pkg('python-virtualenv'): self.die("Please install 'python-virtualenv'.") super(Fedora, self).install_virtualenv() nova-lxd-17.0.0/tools/pretty_tox.sh0000777000175100017510000000021213246266025017276 0ustar zuulzuul00000000000000#!/usr/bin/env bash set -o pipefail TESTRARGS=$1 python setup.py testr --slowest --testr-args="--subunit $TESTRARGS" | subunit-trace -f nova-lxd-17.0.0/tools/abandon_old_reviews.sh0000777000175100017510000000513713246266025021074 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # # # before you run this modify your .ssh/config to create a # review.openstack.org entry: # # Host review.openstack.org # User # Port 29418 # # Note: due to gerrit bug somewhere, this double posts messages. :( # first purge the all reviews that are more than 4w old and blocked by a core -2 set -o errexit function abandon_review { local gitid=$1 shift local msg=$@ echo "Abandoning $gitid" ssh review.openstack.org gerrit review $gitid --abandon --message \"$msg\" } blocked_reviews=$(ssh review.openstack.org "gerrit query --current-patch-set --format json project:openstack/nova status:open age:4w label:Code-Review<=-2" | jq .currentPatchSet.revision | grep -v null | sed 's/"//g') blocked_msg=$(cat < 4 weeks without comment and currently blocked by a core reviewer with a -2. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and contacting the reviewer with the -2 on this review to ensure you address their concerns. EOF ) # For testing, put in a git rev of something you own and uncomment # blocked_reviews="b6c4218ae4d75b86c33fa3d37c27bc23b46b6f0f" for review in $blocked_reviews; do # echo ssh review.openstack.org gerrit review $review --abandon --message \"$msg\" echo "Blocked review $review" abandon_review $review $blocked_msg done # then purge all the reviews that are > 4w with no changes and Jenkins has -1ed failing_reviews=$(ssh review.openstack.org "gerrit query --current-patch-set --format json project:openstack/nova status:open age:4w NOT label:Verified>=1,jenkins" | jq .currentPatchSet.revision | grep -v null | sed 's/"//g') failing_msg=$(cat < 4 weeks without comment, and failed Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results. EOF ) for review in $failing_reviews; do echo "Failing review $review" abandon_review $review $failing_msg done nova-lxd-17.0.0/tools/colorizer.py0000777000175100017510000002701113246266025017111 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright (c) 2013, Nebula, Inc. # Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # Colorizer Code is borrowed from Twisted: # Copyright (c) 2001-2010 Twisted Matrix Laboratories. # # Permission is hereby granted, free of charge, to any person obtaining # a copy of this software and associated documentation files (the # "Software"), to deal in the Software without restriction, including # without limitation the rights to use, copy, modify, merge, publish, # distribute, sublicense, and/or sell copies of the Software, and to # permit persons to whom the Software is furnished to do so, subject to # the following conditions: # # The above copyright notice and this permission notice shall be # included in all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND # NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE # LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION # WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. """Display a subunit stream through a colorized unittest test runner.""" import heapq import sys import unittest import subunit import testtools class _AnsiColorizer(object): """A colorizer is an object that loosely wraps around a stream, allowing callers to write text to the stream in a particular color. Colorizer classes must implement C{supported()} and C{write(text, color)}. """ _colors = dict(black=30, red=31, green=32, yellow=33, blue=34, magenta=35, cyan=36, white=37) def __init__(self, stream): self.stream = stream def supported(cls, stream=sys.stdout): """A class method that returns True if the current platform supports coloring terminal output using this method. Returns False otherwise. """ if not stream.isatty(): return False # auto color only on TTYs try: import curses except ImportError: return False else: try: try: return curses.tigetnum("colors") > 2 except curses.error: curses.setupterm() return curses.tigetnum("colors") > 2 except Exception: # guess false in case of error return False supported = classmethod(supported) def write(self, text, color): """Write the given text to the stream in the given color. @param text: Text to be written to the stream. @param color: A string label for a color. e.g. 'red', 'white'. """ color = self._colors[color] self.stream.write('\x1b[%s;1m%s\x1b[0m' % (color, text)) class _Win32Colorizer(object): """See _AnsiColorizer docstring.""" def __init__(self, stream): import win32console red, green, blue, bold = (win32console.FOREGROUND_RED, win32console.FOREGROUND_GREEN, win32console.FOREGROUND_BLUE, win32console.FOREGROUND_INTENSITY) self.stream = stream self.screenBuffer = win32console.GetStdHandle( win32console.STD_OUT_HANDLE) self._colors = { 'normal': red | green | blue, 'red': red | bold, 'green': green | bold, 'blue': blue | bold, 'yellow': red | green | bold, 'magenta': red | blue | bold, 'cyan': green | blue | bold, 'white': red | green | blue | bold } def supported(cls, stream=sys.stdout): try: import win32console screenBuffer = win32console.GetStdHandle( win32console.STD_OUT_HANDLE) except ImportError: return False import pywintypes try: screenBuffer.SetConsoleTextAttribute( win32console.FOREGROUND_RED | win32console.FOREGROUND_GREEN | win32console.FOREGROUND_BLUE) except pywintypes.error: return False else: return True supported = classmethod(supported) def write(self, text, color): color = self._colors[color] self.screenBuffer.SetConsoleTextAttribute(color) self.stream.write(text) self.screenBuffer.SetConsoleTextAttribute(self._colors['normal']) class _NullColorizer(object): """See _AnsiColorizer docstring.""" def __init__(self, stream): self.stream = stream def supported(cls, stream=sys.stdout): return True supported = classmethod(supported) def write(self, text, color): self.stream.write(text) def get_elapsed_time_color(elapsed_time): if elapsed_time > 1.0: return 'red' elif elapsed_time > 0.25: return 'yellow' else: return 'green' class NovaTestResult(testtools.TestResult): def __init__(self, stream, descriptions, verbosity): super(NovaTestResult, self).__init__() self.stream = stream self.showAll = verbosity > 1 self.num_slow_tests = 10 self.slow_tests = [] # this is a fixed-sized heap self.colorizer = None # NOTE(vish): reset stdout for the terminal check stdout = sys.stdout sys.stdout = sys.__stdout__ for colorizer in [_Win32Colorizer, _AnsiColorizer, _NullColorizer]: if colorizer.supported(): self.colorizer = colorizer(self.stream) break sys.stdout = stdout self.start_time = None self.last_time = {} self.results = {} self.last_written = None def _writeElapsedTime(self, elapsed): color = get_elapsed_time_color(elapsed) self.colorizer.write(" %.2f" % elapsed, color) def _addResult(self, test, *args): try: name = test.id() except AttributeError: name = 'Unknown.unknown' test_class, test_name = name.rsplit('.', 1) elapsed = (self._now() - self.start_time).total_seconds() item = (elapsed, test_class, test_name) if len(self.slow_tests) >= self.num_slow_tests: heapq.heappushpop(self.slow_tests, item) else: heapq.heappush(self.slow_tests, item) self.results.setdefault(test_class, []) self.results[test_class].append((test_name, elapsed) + args) self.last_time[test_class] = self._now() self.writeTests() def _writeResult(self, test_name, elapsed, long_result, color, short_result, success): if self.showAll: self.stream.write(' %s' % str(test_name).ljust(66)) self.colorizer.write(long_result, color) if success: self._writeElapsedTime(elapsed) self.stream.writeln() else: self.colorizer.write(short_result, color) def addSuccess(self, test): super(NovaTestResult, self).addSuccess(test) self._addResult(test, 'OK', 'green', '.', True) def addFailure(self, test, err): if test.id() == 'process-returncode': return super(NovaTestResult, self).addFailure(test, err) self._addResult(test, 'FAIL', 'red', 'F', False) def addError(self, test, err): super(NovaTestResult, self).addFailure(test, err) self._addResult(test, 'ERROR', 'red', 'E', False) def addSkip(self, test, reason=None, details=None): super(NovaTestResult, self).addSkip(test, reason, details) self._addResult(test, 'SKIP', 'blue', 'S', True) def startTest(self, test): self.start_time = self._now() super(NovaTestResult, self).startTest(test) def writeTestCase(self, cls): if not self.results.get(cls): return if cls != self.last_written: self.colorizer.write(cls, 'white') self.stream.writeln() for result in self.results[cls]: self._writeResult(*result) del self.results[cls] self.stream.flush() self.last_written = cls def writeTests(self): time = self.last_time.get(self.last_written, self._now()) if not self.last_written or (self._now() - time).total_seconds() > 2.0: diff = 3.0 while diff > 2.0: classes = self.results.keys() oldest = min(classes, key=lambda x: self.last_time[x]) diff = (self._now() - self.last_time[oldest]).total_seconds() self.writeTestCase(oldest) else: self.writeTestCase(self.last_written) def done(self): self.stopTestRun() def stopTestRun(self): for cls in list(self.results.iterkeys()): self.writeTestCase(cls) self.stream.writeln() self.writeSlowTests() def writeSlowTests(self): # Pare out 'fast' tests slow_tests = [item for item in self.slow_tests if get_elapsed_time_color(item[0]) != 'green'] if slow_tests: slow_total_time = sum(item[0] for item in slow_tests) slow = ("Slowest %i tests took %.2f secs:" % (len(slow_tests), slow_total_time)) self.colorizer.write(slow, 'yellow') self.stream.writeln() last_cls = None # sort by name for elapsed, cls, name in sorted(slow_tests, key=lambda x: x[1] + x[2]): if cls != last_cls: self.colorizer.write(cls, 'white') self.stream.writeln() last_cls = cls self.stream.write(' %s' % str(name).ljust(68)) self._writeElapsedTime(elapsed) self.stream.writeln() def printErrors(self): if self.showAll: self.stream.writeln() self.printErrorList('ERROR', self.errors) self.printErrorList('FAIL', self.failures) def printErrorList(self, flavor, errors): for test, err in errors: self.colorizer.write("=" * 70, 'red') self.stream.writeln() self.colorizer.write(flavor, 'red') self.stream.writeln(": %s" % test.id()) self.colorizer.write("-" * 70, 'red') self.stream.writeln() self.stream.writeln("%s" % err) test = subunit.ProtocolTestCase(sys.stdin, passthrough=None) if sys.version_info[0:2] <= (2, 6): runner = unittest.TextTestRunner(verbosity=2) else: runner = unittest.TextTestRunner(verbosity=2, resultclass=NovaTestResult) if runner.run(test).wasSuccessful(): exit_code = 0 else: exit_code = 1 sys.exit(exit_code) nova-lxd-17.0.0/tools/install_venv.py0000666000175100017510000000455413246266025017611 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Copyright 2010 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import print_function import os import sys import install_venv_common as install_venv def print_help(venv, root): help = """ Nova development environment setup is complete. Nova development uses virtualenv to track and manage Python dependencies while in development and testing. To activate the Nova virtualenv for the extent of your current shell session you can run: $ source %s/bin/activate Or, if you prefer, you can run commands in the virtualenv on a case by case basis by running: $ %s/tools/with_venv.sh Also, make test will automatically use the virtualenv. """ print(help % (venv, root)) def main(argv): root = os.path.dirname(os.path.dirname(os.path.realpath(__file__))) if os.environ.get('tools_path'): root = os.environ['tools_path'] venv = os.path.join(root, '.venv') if os.environ.get('venv'): venv = os.environ['venv'] pip_requires = os.path.join(root, 'requirements.txt') test_requires = os.path.join(root, 'test-requirements.txt') py_version = "python%s.%s" % (sys.version_info[0], sys.version_info[1]) project = 'Nova' install = install_venv.InstallVenv(root, venv, pip_requires, test_requires, py_version, project) options = install.parse_args(argv) install.check_python_version() install.check_dependencies() install.create_virtualenv(no_site_packages=options.no_site_packages) install.install_dependencies() print_help(venv, root) if __name__ == '__main__': main(sys.argv) nova-lxd-17.0.0/tools/with_venv.sh0000777000175100017510000000033213246266025017071 0ustar zuulzuul00000000000000#!/bin/bash tools_path=${tools_path:-$(dirname $0)} venv_path=${venv_path:-${tools_path}} venv_dir=${venv_name:-/../.venv} TOOLS=${tools_path} VENV=${venv:-${venv_path}/${venv_dir}} source ${VENV}/bin/activate && "$@" nova-lxd-17.0.0/tools/db/0000775000175100017510000000000013246266437015115 5ustar zuulzuul00000000000000nova-lxd-17.0.0/tools/db/schema_diff.py0000777000175100017510000001643713246266025017730 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Utility for diff'ing two versions of the DB schema. Each release cycle the plan is to compact all of the migrations from that release into a single file. This is a manual and, unfortunately, error-prone process. To ensure that the schema doesn't change, this tool can be used to diff the compacted DB schema to the original, uncompacted form. The database is specified by providing a SQLAlchemy connection URL WITHOUT the database-name portion (that will be filled in automatically with a temporary database name). The schema versions are specified by providing a git ref (a branch name or commit hash) and a SQLAlchemy-Migrate version number: Run like: MYSQL: ./tools/db/schema_diff.py mysql://root@localhost \ master:latest my_branch:82 POSTGRESQL: ./tools/db/schema_diff.py postgresql://localhost \ master:latest my_branch:82 """ from __future__ import print_function import datetime import glob from nova import i18n import os import subprocess import sys _ = i18n._ # Dump def dump_db(db_driver, db_name, db_url, migration_version, dump_filename): if not db_url.endswith('/'): db_url += '/' db_url += db_name db_driver.create(db_name) try: _migrate(db_url, migration_version) db_driver.dump(db_name, dump_filename) finally: db_driver.drop(db_name) # Diff def diff_files(filename1, filename2): pipeline = ['diff -U 3 %(filename1)s %(filename2)s' % {'filename1': filename1, 'filename2': filename2}] # Use colordiff if available if subprocess.call(['which', 'colordiff']) == 0: pipeline.append('colordiff') pipeline.append('less -R') cmd = ' | '.join(pipeline) subprocess.check_call(cmd, shell=True) # Database class Mysql(object): def create(self, name): subprocess.check_call(['mysqladmin', '-u', 'root', 'create', name]) def drop(self, name): subprocess.check_call(['mysqladmin', '-f', '-u', 'root', 'drop', name]) def dump(self, name, dump_filename): subprocess.check_call( 'mysqldump -u root %(name)s > %(dump_filename)s' % {'name': name, 'dump_filename': dump_filename}, shell=True) class Postgresql(object): def create(self, name): subprocess.check_call(['createdb', name]) def drop(self, name): subprocess.check_call(['dropdb', name]) def dump(self, name, dump_filename): subprocess.check_call( 'pg_dump %(name)s > %(dump_filename)s' % {'name': name, 'dump_filename': dump_filename}, shell=True) def _get_db_driver_class(db_url): try: return globals()[db_url.split('://')[0].capitalize()] except KeyError: raise Exception(_("database %s not supported") % db_url) # Migrate MIGRATE_REPO = os.path.join(os.getcwd(), "nova/db/sqlalchemy/migrate_repo") def _migrate(db_url, migration_version): earliest_version = _migrate_get_earliest_version() # NOTE(sirp): sqlalchemy-migrate currently cannot handle the skipping of # migration numbers. _migrate_cmd( db_url, 'version_control', str(earliest_version - 1)) upgrade_cmd = ['upgrade'] if migration_version != 'latest': upgrade_cmd.append(str(migration_version)) _migrate_cmd(db_url, *upgrade_cmd) def _migrate_cmd(db_url, *cmd): manage_py = os.path.join(MIGRATE_REPO, 'manage.py') args = ['python', manage_py] args += cmd args += ['--repository=%s' % MIGRATE_REPO, '--url=%s' % db_url] subprocess.check_call(args) def _migrate_get_earliest_version(): versions_glob = os.path.join(MIGRATE_REPO, 'versions', '???_*.py') versions = [] for path in glob.iglob(versions_glob): filename = os.path.basename(path) prefix = filename.split('_', 1)[0] try: version = int(prefix) except ValueError: pass versions.append(version) versions.sort() return versions[0] # Git def git_current_branch_name(): ref_name = git_symbolic_ref('HEAD', quiet=True) current_branch_name = ref_name.replace('refs/heads/', '') return current_branch_name def git_symbolic_ref(ref, quiet=False): args = ['git', 'symbolic-ref', ref] if quiet: args.append('-q') proc = subprocess.Popen(args, stdout=subprocess.PIPE) stdout, stderr = proc.communicate() return stdout.strip() def git_checkout(branch_name): subprocess.check_call(['git', 'checkout', branch_name]) def git_has_uncommited_changes(): return subprocess.call(['git', 'diff', '--quiet', '--exit-code']) == 1 # Command def die(msg): print("ERROR: %s" % msg, file=sys.stderr) sys.exit(1) def usage(msg=None): if msg: print("ERROR: %s" % msg, file=sys.stderr) prog = "schema_diff.py" args = ["", "", ""] print("usage: %s %s" % (prog, ' '.join(args)), file=sys.stderr) sys.exit(1) def parse_options(): try: db_url = sys.argv[1] except IndexError: usage("must specify DB connection url") try: orig_branch, orig_version = sys.argv[2].split(':') except IndexError: usage('original branch and version required (e.g. master:82)') try: new_branch, new_version = sys.argv[3].split(':') except IndexError: usage('new branch and version required (e.g. master:82)') return db_url, orig_branch, orig_version, new_branch, new_version def main(): timestamp = datetime.datetime.utcnow().strftime("%Y%m%d_%H%M%S") ORIG_DB = 'orig_db_%s' % timestamp NEW_DB = 'new_db_%s' % timestamp ORIG_DUMP = ORIG_DB + ".dump" NEW_DUMP = NEW_DB + ".dump" options = parse_options() db_url, orig_branch, orig_version, new_branch, new_version = options # Since we're going to be switching branches, ensure user doesn't have any # uncommited changes if git_has_uncommited_changes(): die("You have uncommited changes. Please commit them before running " "this command.") db_driver = _get_db_driver_class(db_url)() users_branch = git_current_branch_name() git_checkout(orig_branch) try: # Dump Original Schema dump_db(db_driver, ORIG_DB, db_url, orig_version, ORIG_DUMP) # Dump New Schema git_checkout(new_branch) dump_db(db_driver, NEW_DB, db_url, new_version, NEW_DUMP) diff_files(ORIG_DUMP, NEW_DUMP) finally: git_checkout(users_branch) if os.path.exists(ORIG_DUMP): os.unlink(ORIG_DUMP) if os.path.exists(NEW_DUMP): os.unlink(NEW_DUMP) if __name__ == "__main__": main() nova-lxd-17.0.0/.stestr.conf0000666000175100017510000000012313246266025015630 0ustar zuulzuul00000000000000[DEFAULT] test_path=./nova/tests/unit/virt/lxd top_dir=./nova/tests/unit/virt/lxd/ nova-lxd-17.0.0/specs/0000775000175100017510000000000013246266437014505 5ustar zuulzuul00000000000000nova-lxd-17.0.0/specs/todo.txt0000666000175100017510000001044713246266025016214 0ustar zuulzuul00000000000000nova-lxd todo list Taken from https://docs.openstack.org/nova/latest/support-matrix.html Feature Status Kilo Liberty Attach block optional X not started volume to instance ------------------------------------------------------ Detach block optional X not started volume from instance ------------------------------------------------------ Evacuate optional X complete instances from host -------------------------------------------------------- Guest instance mandatory started started status -------------------------------------------------------- Gust host optional started started status -------------------------------------------------------- Live migrate optional X not started instance across hosts --------------------------------------------------------- Launch mandatory complete complete instance -------------------------------------------------------- Stop instance optional complete complete CPUs -------------------------------------------------------- Reboot optional complete complete instance -------------------------------------------------------- Rescue optional X complete instance -------------------------------------------------------- Resize optional X not started instance -------------------------------------------------------- Restore optional X complete instance -------------------------------------------------------- Service optional X not started control (??) -------------------------------------------------------- Set instance optional X not started admin password -------------------------------------------------------- Save snapshot optional X complete of instance disk -------------------------------------------------------- Swap block optional X not applicable volumes ----------------------------------------------------------- Shutdown mandatory complete complete instance ----------------------------------------------------------- Resume optional X not applicable insance CPUs ---------------------------------------------------------- Config drive choice X complete support ---------------------------------------------------------- inject files optional X not started into disk image --------------------------------------------------------- inject guest optional X not started networking config --------------------------------------------------------- Remote choice X not applicable desktop over RDP ---------------------------------------------------------- View serial choice complete complete console logs ---------------------------------------------------------- Remote choice X not applicable desktp over SPICE ----------------------------------------------------------- Remote choice X not applicable desktop over VNC ---------------------------------------------------------- Block storage optional X not started support --------------------------------------------------------- Block storage optional X not started over iSCSI --------------------------------------------------------- CHAP optional X not started authenication for iSCIS --------------------------------------------------------- Image storage mandatory complete complete support --------------------------------------------------------- Network optional X complete firewall rules --------------------------------------------------------- Network optional complete complete routing --------------------------------------------------------- Network optional X complete security groups --------------------------------------------------------- Flat choice complete complete networking -------------------------------------------------------- VLAN choice complete complete networking nova-lxd-17.0.0/nova_lxd_tempest_plugin/0000775000175100017510000000000013246266437020321 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/0000775000175100017510000000000013246266437021463 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/0000775000175100017510000000000013246266437022234 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/__init__.py0000666000175100017510000000000013246266025024326 0ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/compute/0000775000175100017510000000000013246266437023710 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/compute/servers/0000775000175100017510000000000013246266437025401 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/compute/servers/__init__.py0000666000175100017510000000000013246266025027473 0ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/compute/servers/test_create_server.py0000666000175100017510000001402213246266025031635 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from pylxd import client from tempest.api.compute import base from tempest import config from tempest.lib.common.utils import data_utils CONF = config.CONF class LXDServersTestJSON(base.BaseV2ComputeAdminTest): disk_config = 'AUTO' def __init__(self, *args, **kwargs): super(LXDServersTestJSON, self).__init__(*args, **kwargs) self.client = client.Client() @classmethod def setup_credentials(cls): cls.prepare_instance_network() super(LXDServersTestJSON, cls).setup_credentials() @classmethod def setup_clients(cls): super(LXDServersTestJSON, cls).setup_clients() cls.client = cls.os_admin.servers_client cls.flavors_client = cls.os_admin.flavors_client @classmethod def resource_setup(cls): cls.set_validation_resources() super(LXDServersTestJSON, cls).resource_setup() cls.meta = {'hello': 'world'} cls.accessIPv4 = '1.1.1.1' cls.accessIPv6 = '0000:0000:0000:0000:0000:babe:220.12.22.2' cls.name = data_utils.rand_name(cls.__name__ + '-server') cls.password = data_utils.rand_password() disk_config = cls.disk_config cls.server_initial = cls.create_test_server( validatable=True, wait_until='ACTIVE', name=cls.name, metadata=cls.meta, accessIPv4=cls.accessIPv4, accessIPv6=cls.accessIPv6, disk_config=disk_config, adminPass=cls.password) cls.server = ( cls.client.show_server(cls.server_initial['id'])['server']) def test_profile_configuration(self): # Verify that the profile was created profile = self.client.profiles.get( self.server['OS-EXT-SRV-ATTR:instance_name']) self.assertEqual( self.server['OS-EXT-SRV-ATTR:instance_name'], profile.name) self.assertIn('raw.lxc', profile.config) self.assertIn('boot.autostart', profile.config) self.assertIn('limits.cpu', profile.config) self.assertIn('limits.memory', profile.config) self.assertIn('root', profile.devices) def test_verify_created_server_vcpus(self): # Verify that the number of vcpus reported by the instance matches # the amount stated by the flavor flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor'] profile = self.client.profiles.get( self.server['OS-EXT-SRV-ATTR:instance_name']) self.assertEqual( '%s' % flavor['vcpus'], profile.config['limits.cpu']) def test_verify_created_server_memory(self): # Verify that the memory reported by the instance matches # the amount stated by the flavor flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor'] profile = self.client.profiles.get( self.server['OS-EXT-SRV-ATTR:instance_name']) self.assertEqual( '%sMB' % flavor['ram'], profile.config['limits.memory']) def test_verify_server_root_size(self): flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor'] profile = self.client.profiles.get( self.server['OS-EXT-SRV-ATTR:instance_name']) self.assertEqual( '%sGB' % flavor['disk'], profile.devices['root']['size']) def test_verify_console_log(self): # Verify that the console log for the container exists profile = self.client.profiles.get( self.server['OS-EXT-SRV-ATTR:instance_name']) self.assertIn('lxc.console.logfile', profile.config['raw.lxc']) def test_verify_network_configuration(self): # Verify network is configured for the instance profile = self.client.profiles.get( self.server['OS-EXT-SRV-ATTR:instance_name']) for device in profile.devices: if 'root' not in device: network_device = device self.assertEqual('nic', profile.devices[network_device]['type']) self.assertEqual('bridged', profile.devices[network_device]['nictype']) self.assertEqual( network_device, profile.devices[network_device]['parent']) def test_container_configuration_valid(self): # Verify container configuration is correct profile = self.client.profiles.get( self.server['OS-EXT-SRV-ATTR:instance_name']) container = self.client.containers.get( self.server['OS-EXT-SRV-ATTR:instance_name']) flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor'] self.assertEqual(profile.name, container.profiles[0]) self.assertIn('raw.lxc', container.expanded_config) self.assertEqual( '%s' % flavor['vcpus'], container.expanded_config['limits.cpu']) self.assertEqual( '%sMB' % flavor['ram'], container.expanded_config['limits.memory']) self.assertEqual( '%sGB' % flavor['disk'], container.expanded_devices['root']['size']) for device in profile.devices: if 'root' not in device: network_device = device self.assertIn(network_device, container.expanded_devices) self.assertEqual( 'nic', container.expanded_devices[network_device]['type']) self.assertEqual( 'bridged', container.expanded_devices[network_device]['nictype']) self.assertEqual( network_device, container.expanded_devices[network_device]['parent']) nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/compute/servers/test_servers.py0000666000175100017510000001051413246266025030477 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from pylxd import client from tempest.api.compute import base from tempest import config from tempest.lib.common.utils import data_utils from tempest.lib.common.utils.linux import remote_client CONF = config.CONF class LXDServersWithSpecificFlavorTestJSON(base.BaseV2ComputeAdminTest): disk_config = 'AUTO' @classmethod def setup_credentials(cls): cls.prepare_instance_network() super(LXDServersWithSpecificFlavorTestJSON, cls).setup_credentials() @classmethod def setup_clients(cls): super(LXDServersWithSpecificFlavorTestJSON, cls).setup_clients() cls.flavor_client = cls.os_admin.flavors_client cls.client = cls.os_admin.servers_client @classmethod def resource_setup(cls): cls.set_validation_resources() super(LXDServersWithSpecificFlavorTestJSON, cls).resource_setup() def test_verify_created_server_ephemeral_disk(self): # Verify that the ephemeral disk is created when creating server flavor_base = self.flavors_client.show_flavor( self.flavor_ref)['flavor'] def create_flavor_with_extra_specs(): flavor_with_eph_disk_name = data_utils.rand_name('eph_flavor') flavor_with_eph_disk_id = data_utils.rand_int_id(start=1000) ram = flavor_base['ram'] vcpus = flavor_base['vcpus'] disk = flavor_base['disk'] # Create a flavor with extra specs flavor = (self.flavor_client. create_flavor(name=flavor_with_eph_disk_name, ram=ram, vcpus=vcpus, disk=disk, id=flavor_with_eph_disk_id, ephemeral=1))['flavor'] self.addCleanup(flavor_clean_up, flavor['id']) return flavor['id'] def create_flavor_without_extra_specs(): flavor_no_eph_disk_name = data_utils.rand_name('no_eph_flavor') flavor_no_eph_disk_id = data_utils.rand_int_id(start=1000) ram = flavor_base['ram'] vcpus = flavor_base['vcpus'] disk = flavor_base['disk'] # Create a flavor without extra specs flavor = (self.flavor_client. create_flavor(name=flavor_no_eph_disk_name, ram=ram, vcpus=vcpus, disk=disk, id=flavor_no_eph_disk_id))['flavor'] self.addCleanup(flavor_clean_up, flavor['id']) return flavor['id'] def flavor_clean_up(flavor_id): self.flavor_client.delete_flavor(flavor_id) self.flavor_client.wait_for_resource_deletion(flavor_id) flavor_with_eph_disk_id = create_flavor_with_extra_specs() admin_pass = self.image_ssh_password server_with_eph_disk = self.create_test_server( validatable=True, wait_until='ACTIVE', adminPass=admin_pass, flavor=flavor_with_eph_disk_id) server_with_eph_disk = self.client.show_server( server_with_eph_disk['id'])['server'] linux_client = remote_client.RemoteClient( self.get_server_ip(server_with_eph_disk), self.ssh_user, admin_pass, self.validation_resources['keypair']['private_key'], server=server_with_eph_disk, servers_client=self.client) cmd = 'sudo touch /mnt/tempest.txt' linux_client.exec_command(cmd) lxd = client.Client() profile = lxd.profiles.get(server_with_eph_disk[ 'OS-EXT-SRV-ATTR:instance_name']) tempfile = '%s/tempest.txt' % profile.devices['ephemeral0']['source'] self.assertTrue(os.path.exists(tempfile)) nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/compute/__init__.py0000666000175100017510000000000013246266025026002 0ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/compute/volumes/0000775000175100017510000000000013246266437025402 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/compute/volumes/__init__.py0000666000175100017510000000000013246266025027474 0ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/api/compute/volumes/test_attach_volume.py0000666000175100017510000000773113246266025031651 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from pylxd import client from tempest.api.compute import base from tempest.common import waiters from tempest import config from tempest.lib.common.utils import data_utils CONF = config.CONF class LXDVolumeTests(base.BaseV2ComputeAdminTest): disk_config = 'AUTO' def __init__(self, *args, **kwargs): super(LXDVolumeTests, self).__init__(*args, **kwargs) self.attachment = None self.client = client.Client() @classmethod def setup_credentials(cls): cls.prepare_instance_network() super(LXDVolumeTests, cls).setup_credentials() @classmethod def setup_clients(cls): super(LXDVolumeTests, cls).setup_clients() cls.client = cls.os_admin.servers_client cls.flavors_client = cls.os_admin.flavors_client @classmethod def resource_setup(cls): cls.set_validation_resources() super(LXDVolumeTests, cls).resource_setup() cls.meta = {'hello': 'world'} cls.accessIPv4 = '1.1.1.1' cls.accessIPv6 = '0000:0000:0000:0000:0000:babe:220.12.22.2' cls.name = data_utils.rand_name(cls.__name__ + '-server') cls.password = data_utils.rand_password() disk_config = cls.disk_config cls.server_initial = cls.create_test_server( validatable=True, wait_until='ACTIVE', name=cls.name, metadata=cls.meta, accessIPv4=cls.accessIPv4, accessIPv6=cls.accessIPv6, disk_config=disk_config, adminPass=cls.password) cls.server = ( cls.client.show_server(cls.server_initial['id'])['server']) cls.device = CONF.compute.volume_device_name def _detach(self, server_id, volume_id): if self.attachment: self.servers_client.detach_volume(server_id, volume_id) waiters.wait_for_volume_status(self.volumes_client, volume_id, 'available') def _create_and_attach_volume(self, server): # Create a volume and wait for it to become ready vol_name = data_utils.rand_name(self.__class__.__name__ + '-volume') volume = self.volumes_client.create_volume( size=CONF.volume.volume_size, display_name=vol_name)['volume'] self.addCleanup(self.delete_volume, volume['id']) waiters.wait_for_volume_status(self.volumes_client, volume['id'], 'available') # Attach the volume to the server self.attachment = self.servers_client.attach_volume( server['id'], volumeId=volume['id'], device='/dev/%s' % self.device)['volumeAttachment'] waiters.wait_for_volume_status(self.volumes_client, volume['id'], 'in-use') self.addCleanup(self._detach, server['id'], volume['id']) return volume def test_create_server_and_attach_volume(self): # Verify that LXD profile has the correct configuration # for volumes volume = self._create_and_attach_volume(self.server) profile = self.client.profiles.get( self.server['OS-EXT-SRV-ATTR:instance_name']) self.assertIn(volume['id'], [device for device in profile.devices]) self.assertEqual( '/dev/%s' % self.device, profile.devices[volume['id']]['path']) nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/__init__.py0000666000175100017510000000000013246266025023555 0ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/scenario/0000775000175100017510000000000013246266437023266 5ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/scenario/manager.py0000666000175100017510000007354013246266025025256 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import subprocess import netaddr from oslo_log import log from oslo_serialization import jsonutils as json from oslo_utils import netutils from tempest.common import compute from tempest.common import image as common_image from tempest.common.utils.linux import remote_client from tempest.common.utils import net_utils from tempest.common import waiters from tempest import config from tempest import exceptions from tempest.lib.common.utils import data_utils from tempest.lib.common.utils import test_utils from tempest.lib import exceptions as lib_exc import tempest.test CONF = config.CONF LOG = log.getLogger(__name__) class ScenarioTest(tempest.test.BaseTestCase): """Base class for scenario tests. Uses tempest own clients. """ credentials = ['primary'] @classmethod def setup_clients(cls): super(ScenarioTest, cls).setup_clients() # Clients (in alphabetical order) cls.flavors_client = cls.os_primary.flavors_client cls.compute_floating_ips_client = ( cls.os_primary.compute_floating_ips_client) if CONF.service_available.glance: # Check if glance v1 is available to determine which client to use. if CONF.image_feature_enabled.api_v1: cls.image_client = cls.os_primary.image_client elif CONF.image_feature_enabled.api_v2: cls.image_client = cls.os_primary.image_client_v2 else: raise lib_exc.InvalidConfiguration( 'Either api_v1 or api_v2 must be True in ' '[image-feature-enabled].') # Compute image client cls.compute_images_client = cls.os_primary.compute_images_client cls.keypairs_client = cls.os_primary.keypairs_client # Nova security groups client cls.compute_security_groups_client = ( cls.os_primary.compute_security_groups_client) cls.compute_security_group_rules_client = ( cls.os_primary.compute_security_group_rules_client) cls.servers_client = cls.os_primary.servers_client cls.interface_client = cls.os_primary.interfaces_client # Neutron network client cls.networks_client = cls.os_primary.networks_client cls.ports_client = cls.os_primary.ports_client cls.routers_client = cls.os_primary.routers_client cls.subnets_client = cls.os_primary.subnets_client cls.floating_ips_client = cls.os_primary.floating_ips_client cls.security_groups_client = cls.os_primary.security_groups_client cls.security_group_rules_client = ( cls.os_primary.security_group_rules_client) if CONF.volume_feature_enabled.api_v2: cls.volumes_client = cls.os_primary.volumes_v2_client cls.snapshots_client = cls.os_primary.snapshots_v2_client else: cls.volumes_client = cls.os_primary.volumes_client cls.snapshots_client = cls.os_primary.snapshots_client # ## Test functions library # # The create_[resource] functions only return body and discard the # resp part which is not used in scenario tests def _create_port(self, network_id, client=None, namestart='port-quotatest', **kwargs): if not client: client = self.ports_client name = data_utils.rand_name(namestart) result = client.create_port( name=name, network_id=network_id, **kwargs) self.assertIsNotNone(result, 'Unable to allocate port') port = result['port'] self.addCleanup(test_utils.call_and_ignore_notfound_exc, client.delete_port, port['id']) return port def create_keypair(self, client=None): if not client: client = self.keypairs_client name = data_utils.rand_name(self.__class__.__name__) # We don't need to create a keypair by pubkey in scenario body = client.create_keypair(name=name) self.addCleanup(client.delete_keypair, name) return body['keypair'] def create_server(self, name=None, image_id=None, flavor=None, validatable=False, wait_until='ACTIVE', clients=None, **kwargs): """Wrapper utility that returns a test server. This wrapper utility calls the common create test server and returns a test server. The purpose of this wrapper is to minimize the impact on the code of the tests already using this function. """ # NOTE(jlanoux): As a first step, ssh checks in the scenario # tests need to be run regardless of the run_validation and # validatable parameters and thus until the ssh validation job # becomes voting in CI. The test resources management and IP # association are taken care of in the scenario tests. # Therefore, the validatable parameter is set to false in all # those tests. In this way create_server just return a standard # server and the scenario tests always perform ssh checks. # Needed for the cross_tenant_traffic test: if clients is None: clients = self.os_primary if name is None: name = data_utils.rand_name(self.__class__.__name__ + "-server") vnic_type = CONF.network.port_vnic_type # If vnic_type is configured create port for # every network if vnic_type: ports = [] create_port_body = {'binding:vnic_type': vnic_type, 'namestart': 'port-smoke'} if kwargs: # Convert security group names to security group ids # to pass to create_port if 'security_groups' in kwargs: security_groups = \ clients.security_groups_client.list_security_groups( ).get('security_groups') sec_dict = dict([(s['name'], s['id']) for s in security_groups]) sec_groups_names = [s['name'] for s in kwargs.pop( 'security_groups')] security_groups_ids = [sec_dict[s] for s in sec_groups_names] if security_groups_ids: create_port_body[ 'security_groups'] = security_groups_ids networks = kwargs.pop('networks', []) else: networks = [] # If there are no networks passed to us we look up # for the project's private networks and create a port. # The same behaviour as we would expect when passing # the call to the clients with no networks if not networks: networks = clients.networks_client.list_networks( **{'router:external': False, 'fields': 'id'})['networks'] # It's net['uuid'] if networks come from kwargs # and net['id'] if they come from # clients.networks_client.list_networks for net in networks: net_id = net.get('uuid', net.get('id')) if 'port' not in net: port = self._create_port(network_id=net_id, client=clients.ports_client, **create_port_body) ports.append({'port': port['id']}) else: ports.append({'port': net['port']}) if ports: kwargs['networks'] = ports self.ports = ports tenant_network = self.get_tenant_network() body, servers = compute.create_test_server( clients, tenant_network=tenant_network, wait_until=wait_until, name=name, flavor=flavor, image_id=image_id, **kwargs) self.addCleanup(waiters.wait_for_server_termination, clients.servers_client, body['id']) self.addCleanup(test_utils.call_and_ignore_notfound_exc, clients.servers_client.delete_server, body['id']) server = clients.servers_client.show_server(body['id'])['server'] return server def create_volume(self, size=None, name=None, snapshot_id=None, imageRef=None, volume_type=None): if size is None: size = CONF.volume.volume_size if imageRef: image = self.compute_images_client.show_image(imageRef)['image'] min_disk = image.get('minDisk') size = max(size, min_disk) if name is None: name = data_utils.rand_name(self.__class__.__name__ + "-volume") kwargs = {'display_name': name, 'snapshot_id': snapshot_id, 'imageRef': imageRef, 'volume_type': volume_type, 'size': size} volume = self.volumes_client.create_volume(**kwargs)['volume'] self.addCleanup(self.volumes_client.wait_for_resource_deletion, volume['id']) self.addCleanup(test_utils.call_and_ignore_notfound_exc, self.volumes_client.delete_volume, volume['id']) # NOTE(e0ne): Cinder API v2 uses name instead of display_name if 'display_name' in volume: self.assertEqual(name, volume['display_name']) else: self.assertEqual(name, volume['name']) waiters.wait_for_volume_resource_status(self.volumes_client, volume['id'], 'available') # The volume retrieved on creation has a non-up-to-date status. # Retrieval after it becomes active ensures correct details. volume = self.volumes_client.show_volume(volume['id'])['volume'] return volume def create_volume_type(self, client=None, name=None, backend_name=None): if not client: client = self.admin_volume_types_client if not name: class_name = self.__class__.__name__ name = data_utils.rand_name(class_name + '-volume-type') randomized_name = data_utils.rand_name('scenario-type-' + name) LOG.debug("Creating a volume type: %s on backend %s", randomized_name, backend_name) extra_specs = {} if backend_name: extra_specs = {"volume_backend_name": backend_name} body = client.create_volume_type(name=randomized_name, extra_specs=extra_specs) volume_type = body['volume_type'] self.assertIn('id', volume_type) self.addCleanup(client.delete_volume_type, volume_type['id']) return volume_type def _create_loginable_secgroup_rule(self, secgroup_id=None): _client = self.compute_security_groups_client _client_rules = self.compute_security_group_rules_client if secgroup_id is None: sgs = _client.list_security_groups()['security_groups'] for sg in sgs: if sg['name'] == 'default': secgroup_id = sg['id'] # These rules are intended to permit inbound ssh and icmp # traffic from all sources, so no group_id is provided. # Setting a group_id would only permit traffic from ports # belonging to the same security group. rulesets = [ { # ssh 'ip_protocol': 'tcp', 'from_port': 22, 'to_port': 22, 'cidr': '0.0.0.0/0', }, { # ping 'ip_protocol': 'icmp', 'from_port': -1, 'to_port': -1, 'cidr': '0.0.0.0/0', } ] rules = list() for ruleset in rulesets: sg_rule = _client_rules.create_security_group_rule( parent_group_id=secgroup_id, **ruleset)['security_group_rule'] rules.append(sg_rule) return rules def _create_security_group(self): # Create security group sg_name = data_utils.rand_name(self.__class__.__name__) sg_desc = sg_name + " description" secgroup = self.compute_security_groups_client.create_security_group( name=sg_name, description=sg_desc)['security_group'] self.assertEqual(secgroup['name'], sg_name) self.assertEqual(secgroup['description'], sg_desc) self.addCleanup( test_utils.call_and_ignore_notfound_exc, self.compute_security_groups_client.delete_security_group, secgroup['id']) # Add rules to the security group self._create_loginable_secgroup_rule(secgroup['id']) return secgroup def get_remote_client(self, ip_address, username=None, private_key=None): """Get a SSH client to a remote server @param ip_address the server floating or fixed IP address to use for ssh validation @param username name of the Linux account on the remote server @param private_key the SSH private key to use @return a RemoteClient object """ if username is None: username = CONF.validation.image_ssh_user # Set this with 'keypair' or others to log in with keypair or # username/password. if CONF.validation.auth_method == 'keypair': password = None if private_key is None: private_key = self.keypair['private_key'] else: password = CONF.validation.image_ssh_password private_key = None linux_client = remote_client.RemoteClient(ip_address, username, pkey=private_key, password=password) try: linux_client.validate_authentication() except Exception as e: message = ('Initializing SSH connection to %(ip)s failed. ' 'Error: %(error)s' % {'ip': ip_address, 'error': e}) caller = test_utils.find_test_caller() if caller: message = '(%s) %s' % (caller, message) LOG.exception(message) self._log_console_output() raise return linux_client def _image_create(self, name, fmt, path, disk_format=None, properties=None): if properties is None: properties = {} name = data_utils.rand_name('%s-' % name) params = { 'name': name, 'container_format': fmt, 'disk_format': disk_format or fmt, } if CONF.image_feature_enabled.api_v1: params['is_public'] = 'False' params['properties'] = properties params = {'headers': common_image.image_meta_to_headers(**params)} else: params['visibility'] = 'private' # Additional properties are flattened out in the v2 API. params.update(properties) body = self.image_client.create_image(**params) image = body['image'] if 'image' in body else body self.addCleanup(self.image_client.delete_image, image['id']) self.assertEqual("queued", image['status']) with open(path, 'rb') as image_file: if CONF.image_feature_enabled.api_v1: self.image_client.update_image(image['id'], data=image_file) else: self.image_client.store_image_file(image['id'], image_file) return image['id'] def glance_image_create(self): img_path = CONF.scenario.img_dir + "/" + CONF.scenario.img_file aki_img_path = CONF.scenario.img_dir + "/" + CONF.scenario.aki_img_file ari_img_path = CONF.scenario.img_dir + "/" + CONF.scenario.ari_img_file ami_img_path = CONF.scenario.img_dir + "/" + CONF.scenario.ami_img_file img_container_format = CONF.scenario.img_container_format img_disk_format = CONF.scenario.img_disk_format img_properties = CONF.scenario.img_properties LOG.debug("paths: img: %s, container_format: %s, disk_format: %s, " "properties: %s, ami: %s, ari: %s, aki: %s", img_path, img_container_format, img_disk_format, img_properties, ami_img_path, ari_img_path, aki_img_path) try: image = self._image_create('scenario-img', img_container_format, img_path, disk_format=img_disk_format, properties=img_properties) except IOError: LOG.debug("A qcow2 image was not found. Try to get a uec image.") kernel = self._image_create('scenario-aki', 'aki', aki_img_path) ramdisk = self._image_create('scenario-ari', 'ari', ari_img_path) properties = {'kernel_id': kernel, 'ramdisk_id': ramdisk} image = self._image_create('scenario-ami', 'ami', path=ami_img_path, properties=properties) LOG.debug("image:%s", image) return image def _log_console_output(self, servers=None): if not CONF.compute_feature_enabled.console_output: LOG.debug('Console output not supported, cannot log') return if not servers: servers = self.servers_client.list_servers() servers = servers['servers'] for server in servers: try: console_output = self.servers_client.get_console_output( server['id'])['output'] LOG.debug('Console output for %s\nbody=\n%s', server['id'], console_output) except lib_exc.NotFound: LOG.debug("Server %s disappeared(deleted) while looking " "for the console log", server['id']) def _log_net_info(self, exc): # network debug is called as part of ssh init if not isinstance(exc, lib_exc.SSHTimeout): LOG.debug('Network information on a devstack host') def create_server_snapshot(self, server, name=None): # Glance client _image_client = self.image_client # Compute client _images_client = self.compute_images_client if name is None: name = data_utils.rand_name(self.__class__.__name__ + 'snapshot') LOG.debug("Creating a snapshot image for server: %s", server['name']) image = _images_client.create_image(server['id'], name=name) image_id = image.response['location'].split('images/')[1] waiters.wait_for_image_status(_image_client, image_id, 'active') self.addCleanup(_image_client.wait_for_resource_deletion, image_id) self.addCleanup(test_utils.call_and_ignore_notfound_exc, _image_client.delete_image, image_id) if CONF.image_feature_enabled.api_v1: # In glance v1 the additional properties are stored in the headers. resp = _image_client.check_image(image_id) snapshot_image = common_image.get_image_meta_from_headers(resp) image_props = snapshot_image.get('properties', {}) else: # In glance v2 the additional properties are flattened. snapshot_image = _image_client.show_image(image_id) image_props = snapshot_image bdm = image_props.get('block_device_mapping') if bdm: bdm = json.loads(bdm) if bdm and 'snapshot_id' in bdm[0]: snapshot_id = bdm[0]['snapshot_id'] self.addCleanup( self.snapshots_client.wait_for_resource_deletion, snapshot_id) self.addCleanup(test_utils.call_and_ignore_notfound_exc, self.snapshots_client.delete_snapshot, snapshot_id) waiters.wait_for_volume_resource_status(self.snapshots_client, snapshot_id, 'available') image_name = snapshot_image['name'] self.assertEqual(name, image_name) LOG.debug("Created snapshot image %s for server %s", image_name, server['name']) return snapshot_image def nova_volume_attach(self, server, volume_to_attach): volume = self.servers_client.attach_volume( server['id'], volumeId=volume_to_attach['id'], device='/dev/%s' % CONF.compute.volume_device_name)['volumeAttachment'] self.assertEqual(volume_to_attach['id'], volume['id']) waiters.wait_for_volume_resource_status(self.volumes_client, volume['id'], 'in-use') # Return the updated volume after the attachment return self.volumes_client.show_volume(volume['id'])['volume'] def nova_volume_detach(self, server, volume): self.servers_client.detach_volume(server['id'], volume['id']) waiters.wait_for_volume_resource_status(self.volumes_client, volume['id'], 'available') volume = self.volumes_client.show_volume(volume['id'])['volume'] self.assertEqual('available', volume['status']) def rebuild_server(self, server_id, image=None, preserve_ephemeral=False, wait=True, rebuild_kwargs=None): if image is None: image = CONF.compute.image_ref rebuild_kwargs = rebuild_kwargs or {} LOG.debug("Rebuilding server (id: %s, image: %s, preserve eph: %s)", server_id, image, preserve_ephemeral) self.servers_client.rebuild_server( server_id=server_id, image_ref=image, preserve_ephemeral=preserve_ephemeral, **rebuild_kwargs) if wait: waiters.wait_for_server_status(self.servers_client, server_id, 'ACTIVE') def ping_ip_address(self, ip_address, should_succeed=True, ping_timeout=None, mtu=None): timeout = ping_timeout or CONF.validation.ping_timeout cmd = ['ping', '-c1', '-w1'] if mtu: cmd += [ # don't fragment '-M', 'do', # ping receives just the size of ICMP payload '-s', str(net_utils.get_ping_payload_size(mtu, 4)) ] cmd.append(ip_address) def ping(): proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) proc.communicate() return (proc.returncode == 0) == should_succeed caller = test_utils.find_test_caller() LOG.debug('%(caller)s begins to ping %(ip)s in %(timeout)s sec and the' ' expected result is %(should_succeed)s', { 'caller': caller, 'ip': ip_address, 'timeout': timeout, 'should_succeed': 'reachable' if should_succeed else 'unreachable' }) result = test_utils.call_until_true(ping, timeout, 1) LOG.debug('%(caller)s finishes ping %(ip)s in %(timeout)s sec and the ' 'ping result is %(result)s', { 'caller': caller, 'ip': ip_address, 'timeout': timeout, 'result': 'expected' if result else 'unexpected' }) return result def check_vm_connectivity(self, ip_address, username=None, private_key=None, should_connect=True, mtu=None): """Check server connectivity :param ip_address: server to test against :param username: server's ssh username :param private_key: server's ssh private key to be used :param should_connect: True/False indicates positive/negative test positive - attempt ping and ssh negative - attempt ping and fail if succeed :param mtu: network MTU to use for connectivity validation :raises: AssertError if the result of the connectivity check does not match the value of the should_connect param """ if should_connect: msg = "Timed out waiting for %s to become reachable" % ip_address else: msg = "ip address %s is reachable" % ip_address self.assertTrue(self.ping_ip_address(ip_address, should_succeed=should_connect, mtu=mtu), msg=msg) if should_connect: # no need to check ssh for negative connectivity self.get_remote_client(ip_address, username, private_key) def check_public_network_connectivity(self, ip_address, username, private_key, should_connect=True, msg=None, servers=None, mtu=None): # The target login is assumed to have been configured for # key-based authentication by cloud-init. LOG.debug('checking network connections to IP %s with user: %s', ip_address, username) try: self.check_vm_connectivity(ip_address, username, private_key, should_connect=should_connect, mtu=mtu) except Exception: ex_msg = 'Public network connectivity check failed' if msg: ex_msg += ": " + msg LOG.exception(ex_msg) self._log_console_output(servers) raise def create_floating_ip(self, thing, pool_name=None): """Create a floating IP and associates to a server on Nova""" if not pool_name: pool_name = CONF.network.floating_network_name floating_ip = (self.compute_floating_ips_client. create_floating_ip(pool=pool_name)['floating_ip']) self.addCleanup(test_utils.call_and_ignore_notfound_exc, self.compute_floating_ips_client.delete_floating_ip, floating_ip['id']) self.compute_floating_ips_client.associate_floating_ip_to_server( floating_ip['ip'], thing['id']) return floating_ip def create_timestamp(self, ip_address, dev_name=None, mount_path='/mnt', private_key=None): ssh_client = self.get_remote_client(ip_address, private_key=private_key) if dev_name is not None: ssh_client.make_fs(dev_name) ssh_client.mount(dev_name, mount_path) cmd_timestamp = 'sudo sh -c "date > %s/timestamp; sync"' % mount_path ssh_client.exec_command(cmd_timestamp) timestamp = ssh_client.exec_command('sudo cat %s/timestamp' % mount_path) if dev_name is not None: ssh_client.umount(mount_path) return timestamp def get_timestamp(self, ip_address, dev_name=None, mount_path='/mnt', private_key=None): ssh_client = self.get_remote_client(ip_address, private_key=private_key) if dev_name is not None: ssh_client.mount(dev_name, mount_path) timestamp = ssh_client.exec_command('sudo cat %s/timestamp' % mount_path) if dev_name is not None: ssh_client.umount(mount_path) return timestamp def get_server_ip(self, server): """Get the server fixed or floating IP. Based on the configuration we're in, return a correct ip address for validating that a guest is up. """ if CONF.validation.connect_method == 'floating': # The tests calling this method don't have a floating IP # and can't make use of the validation resources. So the # method is creating the floating IP there. return self.create_floating_ip(server)['ip'] elif CONF.validation.connect_method == 'fixed': # Determine the network name to look for based on config or creds # provider network resources. if CONF.validation.network_for_ssh: addresses = server['addresses'][ CONF.validation.network_for_ssh] else: creds_provider = self._get_credentials_provider() net_creds = creds_provider.get_primary_creds() network = getattr(net_creds, 'network', None) addresses = (server['addresses'][network['name']] if network else []) for address in addresses: if (address['version'] == CONF.validation.ip_version_for_ssh and address['OS-EXT-IPS:type'] == 'fixed'): return address['addr'] raise exceptions.ServerUnreachable(server_id=server['id']) else: raise lib_exc.InvalidConfiguration() nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/scenario/__init__.py0000666000175100017510000000000013246266025025360 0ustar zuulzuul00000000000000nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/scenario/test_server_basic_ops.py0000666000175100017510000001251013246266025030221 0ustar zuulzuul00000000000000# Copyright 2106 Canonical Ltd # Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import json from tempest.common import utils from tempest import config from tempest import exceptions from tempest.lib.common.utils import test_utils from tempest.lib import decorators from nova_lxd_tempest_plugin.tests.scenario import manager CONF = config.CONF class TestServerBasicOps(manager.ScenarioTest): """The test suite for server basic operations This smoke test case follows this basic set of operations: * Create a keypair for use in launching an instance * Create a security group to control network access in instance * Add simple permissive rules to the security group * Launch an instance * Perform ssh to instance * Verify metadata service * Verify metadata on config_drive * Terminate the instance """ def setUp(self): super(TestServerBasicOps, self).setUp() self.image_ref = CONF.compute.image_ref self.flavor_ref = CONF.compute.flavor_ref self.run_ssh = CONF.validation.run_validation self.ssh_user = CONF.validation.image_ssh_user def verify_ssh(self, keypair): if self.run_ssh: # Obtain a floating IP self.fip = self.create_floating_ip(self.instance)['ip'] # Check ssh self.ssh_client = self.get_remote_client( ip_address=self.fip, username=self.ssh_user, private_key=keypair['private_key']) def verify_metadata(self): if self.run_ssh and CONF.compute_feature_enabled.metadata_service: # Verify metadata service md_url = 'http://169.254.169.254/latest/meta-data/public-ipv4' def exec_cmd_and_verify_output(): cmd = 'curl ' + md_url result = self.ssh_client.exec_command(cmd) if result: msg = ('Failed while verifying metadata on server. Result ' 'of command "%s" is NOT "%s".' % (cmd, self.fip)) self.assertEqual(self.fip, result, msg) return 'Verification is successful!' if not test_utils.call_until_true(exec_cmd_and_verify_output, CONF.compute.build_timeout, CONF.compute.build_interval): raise exceptions.TimeoutException('Timed out while waiting to ' 'verify metadata on server. ' '%s is empty.' % md_url) def verify_metadata_on_config_drive(self): if self.run_ssh and CONF.compute_feature_enabled.config_drive: # Verify metadata on config_drive cmd_md = \ 'cat /var/lib/cloud/data/openstack/latest/meta_data.json' result = self.ssh_client.exec_command(cmd_md) result = json.loads(result) self.assertIn('meta', result) msg = ('Failed while verifying metadata on config_drive on server.' ' Result of command "%s" is NOT "%s".' % (cmd_md, self.md)) self.assertEqual(self.md, result['meta'], msg) def verify_networkdata_on_config_drive(self): if self.run_ssh and CONF.compute_feature_enabled.config_drive: # Verify network data on config_drive cmd_md = \ 'cat /var/lib/cloud/data/openstack/latest/network_data.json' result = self.ssh_client.exec_command(cmd_md) result = json.loads(result) self.assertIn('services', result) self.assertIn('links', result) self.assertIn('networks', result) # TODO(clarkb) construct network_data from known network # instance info and do direct comparison. @decorators.idempotent_id('7fff3fb3-91d8-4fd0-bd7d-0204f1f180ba') @decorators.attr(type='smoke') @utils.services('compute', 'network') def test_server_basic_ops(self): keypair = self.create_keypair() self.security_group = self._create_security_group() security_groups = [{'name': self.security_group['name']}] self.md = {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'} self.instance = self.create_server( image_id=self.image_ref, flavor=self.flavor_ref, key_name=keypair['name'], security_groups=security_groups, config_drive=CONF.compute_feature_enabled.config_drive, metadata=self.md, wait_until='ACTIVE') self.verify_ssh(keypair) self.verify_metadata() self.verify_metadata_on_config_drive() self.verify_networkdata_on_config_drive() self.servers_client.delete_server(self.instance['id']) nova-lxd-17.0.0/nova_lxd_tempest_plugin/tests/scenario/test_volume_ops.py0000666000175100017510000001026513246266025027066 0ustar zuulzuul00000000000000# Copyright 2013 NEC Corporation # Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import time from oslo_log import log as logging import testtools from tempest.common import waiters from tempest import config from tempest import exceptions from tempest.lib.common.utils import data_utils from tempest.lib.common.utils import test_utils from tempest.lib import decorators from tempest.lib import exceptions as lib_exc from nova_lxd_tempest_plugin.tests.scenario import manager CONF = config.CONF LOG = logging.getLogger(__name__) class LXDVolumeScenario(manager.ScenarioTest): """The test suite for attaching volume to an instance The following is the scenario outline: 1. Boot an instance "instance" 2. Create a volume "volume1" 3. Attach volume1 to instance 4. Create a filesystem on volume1 5. Mount volume1 6. Create a file which timestamp is written in volume1 7. Check for file on instnace1 7. Unmount volume1 8. Detach volume1 from instance1 """ def setUp(self): super(LXDVolumeScenario, self).setUp() self.image_ref = CONF.compute.image_ref self.flavor_ref = CONF.compute.flavor_ref self.run_ssh = CONF.validation.run_validation self.ssh_user = CONF.validation.image_ssh_user @classmethod def skip_checks(cls): super(LXDVolumeScenario, cls).skip_checks() def _wait_for_volume_available_on_the_system(self, ip_address, private_key): ssh = self.get_remote_client(ip_address, private_key=private_key) def _func(): part = ssh.get_partitions() LOG.debug("Partitions:%s" % part) return CONF.compute.volume_device_name in part if not test_utils.call_until_true(_func, CONF.compute.build_timeout, CONF.compute.build_interval): raise exceptions.TimeoutException def test_volume_attach(self): keypair = self.create_keypair() self.security_group = self._create_security_group() security_groups = [{'name': self.security_group['name']}] self.md = {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'} server = self.create_server( image_id=self.image_ref, flavor=self.flavor_ref, key_name=keypair['name'], security_groups=security_groups, config_drive=CONF.compute_feature_enabled.config_drive, metadata=self.md, wait_until='ACTIVE') volume = self.create_volume() # create and add floating IP to server1 ip_for_server = self.get_server_ip(server) self.nova_volume_attach(server, volume) self._wait_for_volume_available_on_the_system(ip_for_server, keypair['private_key']) ssh_client = self.get_remote_client( ip_address=ip_for_server, username=self.ssh_user, private_key=keypair['private_key']) ssh_client.exec_command( 'sudo /sbin/mke2fs -t ext4 /dev/%s' % CONF.compute.volume_device_name) ssh_client.exec_command( 'sudo /bin/mount -t ext4 /dev/%s /mnt' % CONF.compute.volume_device_name) ssh_client.exec_command( 'sudo sh -c "date > /mnt/timestamp; sync"') timestamp = ssh_client.exec_command( 'test -f /mnt/timestamp && echo ok') ssh_client.exec_command( 'sudo /bin/umount /mnt') self.nova_volume_detach(server, volume) self.assertEqual(u'ok\n', timestamp) nova-lxd-17.0.0/nova_lxd_tempest_plugin/plugin.py0000666000175100017510000000207413246266025022167 0ustar zuulzuul00000000000000# Copyright 2016 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os from tempest.test_discover import plugins class MyPlugin(plugins.TempestPlugin): def load_tests(self): base_path = os.path.split(os.path.dirname( os.path.abspath(__file__)))[0] test_dir = "nova_lxd_tempest_plugin/tests" full_test_dir = os.path.join(base_path, test_dir) return full_test_dir, base_path def register_opts(self, conf): pass def get_opt_lists(self): pass nova-lxd-17.0.0/nova_lxd_tempest_plugin/README0000666000175100017510000000014613246266025021175 0ustar zuulzuul00000000000000To run tempest specific tests for nova-lxd run the following command: tox -e all-plugin -- nova_lxd nova-lxd-17.0.0/nova_lxd_tempest_plugin/__init__.py0000666000175100017510000000000013246266025022413 0ustar zuulzuul00000000000000