maas-2.4.2-7034-g2f5deb8b8.orig/.coveragerc:

[run]
branch = True
source = src
omit =
    */testing.py
    */testing/*
    */tests/*

[report]
omit =
    src/*/migrations/*
    src/maastesting/*
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover
    # Don't complain if tests don't hit defensive assertion code:
    raise NotImplementedError

[html]
directory = coverage
title = Coverage for MAAS

maas-2.4.2-7034-g2f5deb8b8.orig/.ctags:

--python-kinds=-iv
--exclude=*.js
--extra=+f
--links=yes

maas-2.4.2-7034-g2f5deb8b8.orig/.flake8:

[flake8]
ignore = E123,E305,E402,E731

maas-2.4.2-7034-g2f5deb8b8.orig/CHANGELOG: symlink to docs/changelog.rst

maas-2.4.2-7034-g2f5deb8b8.orig/HACKING.rst:

.. -*- mode: rst -*-

************
Hacking MAAS
************


Coding style
============

MAAS follows the `Launchpad Python Style Guide`_, except where it gets
Launchpad specific, and where it talks about `method naming`_. MAAS instead
adopts `PEP-8`_ naming in all cases, so method names should usually use the
``lowercase_with_underscores`` form.

.. _Launchpad Python Style Guide: https://dev.launchpad.net/PythonStyleGuide
.. _method naming: https://dev.launchpad.net/PythonStyleGuide#Naming
.. _PEP-8: http://www.python.org/dev/peps/pep-0008/


Prerequisites
=============

Container
^^^^^^^^^

There's a ``configure-lxd-profile`` script in ``utilities`` that will set up
a properly configured LXD profile.

Dependencies
^^^^^^^^^^^^

You can grab MAAS's code manually from Launchpad, but Git_ makes it easy to
fetch the latest version of the code. First of all, install Git::

    $ sudo apt install git

.. _Git: https://git-scm.com/

Then go into the directory where you want the code to reside and run::

    $ git clone https://git.launchpad.net/maas && cd maas

MAAS depends on Postgres, isc-dhcp, bind9, and many other packages. To
install everything that's needed for running and developing MAAS, run::

    $ make install-dependencies

Careful: this will ``apt-get install`` many packages on your system, via
``sudo``. It may prompt you for your password.

This will install ``bind9``. As a result you will have an extra daemon
running. If you are a developer and don't intend to run BIND locally, you
can disable the daemon by inserting ``exit 1`` at the top of
``/etc/default/bind9``. The package still needs to be installed for tests
though.

Python development dependencies are pulled automatically from `PyPI`_ when
``buildout`` runs. (``buildout`` will be automatically configured to create
a cache, in order to improve build times. See
``utilities/configure-buildout``.)

Javascript development dependencies are pulled automatically from `npm`_
when ``make`` runs. (``npm`` will be automatically configured to use a
cache, in order to improve build times.)

.. _PyPI: http://pypi.python.org/
.. _npm: https://www.npmjs.com/

Git Workflow
^^^^^^^^^^^^

You will want to adjust your git repository of lp:maas a little before you
start making changes to the code. This includes setting up your own copy of
the repository and making your changes in branches.

First you will want to rename the origin remote to upstream and create a
new origin in your namespace.
::

    $ git remote rename origin upstream
    $ git remote add origin git+ssh://{launchpad-id}@git.launchpad.net/~{launchpad-id}/maas

Now you can make a branch and start making changes::

    $ git checkout -b new-branch

Once you have made the changes you want, you should commit and push the
branch to your origin::

    $ git commit -m "My change" -a
    $ git push origin new-branch

Now you can view that branch on Launchpad and propose it to the maas
repository. Once the branch has been merged and you're done with it, you
can update your git repository to remove the branch::

    $ git fetch upstream
    $ git checkout master
    $ git merge upstream/master
    $ git branch -d new-branch

Optional
^^^^^^^^

The PyCharm_ IDE is a useful tool when developing MAAS. The MAAS team does
not endorse any particular IDE, but ``.idea`` `project files are included
with MAAS`_, so PyCharm_ is an easy choice.

.. _PyCharm: https://www.jetbrains.com/pycharm/
.. _project files are included with MAAS: https://intellij-support.jetbrains.com/entries/23393067-How-to-manage-projects-under-Version-Control-Systems


Running tests
=============

To run the whole suite::

    $ make test

To run tests at a lower level of granularity::

    $ ./bin/test.region src/maasserver/tests/test_api.py
    $ ./bin/test.region src/maasserver/tests/test_api.py:AnonymousEnlistmentAPITest

The test runner is `nose`_, so you can pass in options like
``--with-coverage`` and ``--nocapture`` (short option: ``-s``). The latter
is essential when using ``pdb`` so that stdout is not adulterated.

.. _nose: http://readthedocs.org/docs/nose/en/latest/

.. Note::

   When running ``make test`` through ssh from a machine with locales that
   are not set up on the machine that runs the tests, some tests will fail
   with a ``MismatchError`` and an "unsupported locale setting" message.
   Running ``locale-gen`` for the missing locales, or changing your locales
   on your workstation to ones present on the server, will solve the issue.
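The "unsupported locale setting" message in that note comes straight from
Python's ``locale`` module: asking for a locale that has not been generated
on the machine raises ``locale.Error``. A quick way to reproduce the
underlying failure (the locale name below is just an example of one that is
unlikely to be installed)::

    import locale

    # Requesting a locale that has not been generated on this machine
    # raises locale.Error; this is the same error that surfaces as a
    # MismatchError in the test suite when locales are missing.
    try:
        locale.setlocale(locale.LC_ALL, "xx_XX.UTF-8")
    except locale.Error as err:
        print(err)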
Emitting subunit
^^^^^^^^^^^^^^^^

Pass the ``--with-subunit`` flag to any of the test runners (e.g.
``bin/test.rack``) to produce a `subunit`_ stream of test results. This may
be useful for parallelising test runs, or to allow later analysis of a test
run. The optional ``--subunit-fd`` flag can be used to direct the results
to a different file descriptor, to ensure a clean stream.

.. _subunit: https://launchpad.net/subunit/

Running JavaScript tests
^^^^^^^^^^^^^^^^^^^^^^^^

The JavaScript tests are run using Karma_. Chromium and PhantomJS are the
default browsers, but any browser supported by Karma can be used to run the
tests::

    $ ./bin/test.js

If you want to run the JavaScript tests in debug mode, so you can inspect
the code inside of a running browser, you can launch Karma_ manually::

    $ ./bin/karma start src/maastesting/karma.conf.js --browsers Chrome --no-single-run

.. _Karma: http://karma-runner.github.io/

Production MAAS server debugging
================================

When MAAS is installed from packaging, it can help to enable debugging
features to triage issues.

Log all API and UI exceptions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default MAAS only logs HTTP 500 - INTERNAL_SERVER_ERROR into
regiond.log. To enable logging of all exceptions, even those where MAAS
will return the correct HTTP status code, run::

    $ sudo sed -i 's/DEBUG = False/DEBUG = True/g' \
    >     /usr/lib/python3/dist-packages/maasserver/djangosettings/settings.py
    $ sudo service maas-regiond restart

Run regiond in foreground
^^^^^^^^^^^^^^^^^^^^^^^^^

It can help when debugging to run regiond as a foreground process so you
can interact with it by placing a breakpoint in the code. Once you have
placed a breakpoint into the code you want to inspect, you can start the
regiond process in the foreground::

    $ sudo service maas-regiond stop
    $ sudo -u maas -H \
    >     DJANGO_SETTINGS_MODULE=maasserver.djangosettings.settings \
    >     twistd3 --nodaemon --pidfile= maas-regiond

.. Note::

   By default a MAAS installation runs 4 regiond processes at the same
   time. This will change it to only run 1 process in the foreground, so it
   should only be used for debugging. Once finished, remove the breakpoint
   and start the maas-regiond service again.

Run rackd in foreground
^^^^^^^^^^^^^^^^^^^^^^^

It can help when debugging to run rackd as a foreground process so you can
interact with it by placing a breakpoint in the code. Once you have placed
a breakpoint into the code you want to inspect, you can start the rackd
process in the foreground::

    $ sudo service maas-rackd stop
    $ sudo -u maas -H /usr/bin/authbind --deep /usr/bin/twistd3 --nodaemon --pidfile= maas-rackd

Development MAAS server setup
=============================

Access to the database is configured in
``src/maasserver/djangosettings/development.py``.

The ``Makefile`` or the test suite sets up a development database cluster
inside your branch. It lives in the ``db`` directory, which gets created on
demand. You'll want to shut it down before deleting a branch; see below.

First, set up the project. This fetches all the required dependencies and
sets up some useful commands in ``bin/``::

    $ make

Create the database cluster and initialise the development database::

    $ make syncdb

Optionally, if all you want to do is to take a look around the UI and API,
without interacting with real machines or VMs, populate your database with
the sample data::

    $ make sampledata

You can log in as a simple user using the test account (username: 'test',
password: 'test') or the admin account (username: 'admin', password:
'test').

If you want to interact with real machines or VMs, it's better to use the
snap. Instead of building a real snap, though, you can use 'snapcraft
prime' to create the prime directory. That has all the contents of the
snap, but it's in a plain directory instead of in a squashfs image.
Using a directory is better for testing, since you can change the files in
there and not rebuild the snap. There's a ``sync-dev-snap`` make target to
automate this::

    $ make sync-dev-snap

The ``sync-dev-snap`` target creates a clean copy of your working tree (so
that you don't have to run 'make clean' before building the snap) in
build/dev-snap and creates the snap directory in build/dev-snap/prime. You
can now install the snap::

    $ sudo snap try --devmode build/dev-snap/prime

Note that 'snap try' is used instead of 'snap install'. The maas snap
should now be installed::

    $ snap list
    Name  Version                          Rev   Developer  Notes
    core  16-2.27.5                        2774  canonical  core
    maas  2.3.0~alpha3-6225-gaa05ba6-snap  x1               devmode,try

Next you need to initialize the snap, just like you would normally do::

    $ sudo maas init

And now you're ready to make changes to the code. After you've changed some
source files and want to test them out, run the ``sync-dev-snap`` target
again::

    $ make sync-dev-snap

You should now see that your files were synced to the prime directory. If
you changed JS and HTML files only, you should see the changes straight
away by just reloading the browser. If you changed Python files, you need
to restart MAAS::

    $ sudo service snap.maas.supervisor restart

VMs or even real machines can now PXE boot off your development snap. But
of course, you need to set up the networking first. If you want to do some
simple testing, the easiest is to create a network in virt-manager that has
NAT, but doesn't provide DHCP. If the name of the bridge that got created
is `virbr1`, you can expose it to your container as eth1 using the
following config::

    eth1:
      name: eth1
      nictype: bridged
      parent: virbr1
      type: nic

Of course, you also need to configure that eth1 interface. Since MAAS is
the one providing DHCP, you need to give it a static address on the network
you created.
For example::

    auto eth1
    iface eth1 inet static
        address 192.168.100.2
        netmask 255.255.255.0

Note that your LXD host will have the .1 address and will act as a gateway
for your VMs.

To shut down the database cluster and clean up all other generated files in
your branch::

    $ make clean

Downloading PXE boot resources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To use PXE booting, each cluster controller needs to download several files
relating to PXE booting. This process is automated, but it does not start
by default.

First create a superuser and start all MAAS services::

    $ bin/maas-region createadmin
    $ make run

Substitute your own email. The command will prompt for a choice of
password.

Next, get the superuser's API key on the `account preferences`_ page in the
web UI, and use it to log into MAAS at the command-line::

    $ bin/maas login dev http://localhost:5240/MAAS/

.. _`account preferences`: http://localhost:5240/MAAS/account/prefs/

Start downloading PXE boot resources::

    $ bin/maas dev node-groups import-boot-images

This sends jobs to each cluster controller, asking each to download the
boot resources they require. This may download dozens or hundreds of
megabytes, so it may take a while. To save bandwidth, set an HTTP proxy
beforehand::

    $ bin/maas dev maas set-config name=http_proxy value=http://...

Running the built-in TFTP server
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You will need to run the built-in TFTP server on the real TFTP port (69) if
you want to boot some real hardware. By default, it's set to start up on
port 5244 for testing purposes. To make it run on port 69, set the
MAAS_TFTP_PORT environment variable before running make run/start::

    export MAAS_TFTP_PORT=69

Then you need to install and configure authbind, so that your user can bind
to port 69:

* Install the ``authbind`` package::

    $ sudo apt install authbind

* Create a file ``/etc/authbind/byport/69`` that is *executable* by the
  user running MAAS.
::

    $ sudo touch /etc/authbind/byport/69
    $ sudo chown $USER /etc/authbind/byport/69
    $ sudo chmod u+x /etc/authbind/byport/69

Now when starting up the MAAS development webserver, "make run" and "make
start" will detect authbind's presence and use it automatically.

Running the BIND daemon for real
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There's a BIND daemon that is started up as part of the development service,
but it runs on port 5246 by default. If you want to make it run as a real
DNS server on the box, set the MAAS_BIND_PORT environment variable before
running make run/start::

    export MAAS_BIND_PORT=53

Then, as for TFTP above, create an authbind authorisation::

    $ sudo touch /etc/authbind/byport/53
    $ sudo chown $USER /etc/authbind/byport/53
    $ sudo chmod u+x /etc/authbind/byport/53

and run as normal.

Running the cluster worker
^^^^^^^^^^^^^^^^^^^^^^^^^^

The cluster also needs authbind, as it needs to bind a socket on UDP port
68 for DHCP probing::

    $ sudo touch /etc/authbind/byport/68
    $ sudo chown $USER /etc/authbind/byport/68
    $ sudo chmod u+x /etc/authbind/byport/68

If you omit this, nothing else will break, but you will get an error in the
cluster log because it can't bind to the port.

Configuring DHCP
^^^^^^^^^^^^^^^^

MAAS requires a properly configured DHCP server so it can boot machines
using PXE. MAAS can work with its own instance of the ISC DHCP server, if
you install the maas-dhcp package::

    $ sudo apt install maas-dhcp

Note that the maas-dhcp service definition references the maas-rackd
service, which won't be present if you run a development service. To work
around this, edit /lib/systemd/system/maas-dhcp.service and comment out
this line::

    BindsTo=maas-rackd.service

Development services
====================

The development environment uses *daemontools* to manage the various
services that are required. These are all defined in subdirectories in
``services/``.
There are familiar service-like commands::

    $ make start
    $ make status
    $ make restart
    $ make stop

The latter is a dependency of ``distclean``, so just running ``make
distclean`` when you've finished with your branch is enough to stop
everything.

Individual services can be manipulated too::

    $ make services/rackd/@start

The ``@`` pattern works for any of the services. There's an additional
special action, ``run``::

    $ make run

This starts all services up and tails their log files. When you're done,
kill ``tail`` (e.g. Ctrl-c), and all the services will be stopped. However,
when used with individual services::

    $ make services/regiond/@run

it does something even cooler. First it shuts down the service, then it
restarts it in the foreground so you can see the logs in the console. More
importantly, it allows you to use ``pdb``, for example.

A note of caution: some of the services have slightly different behaviour
when run in the foreground:

* regiond (the *webapp* service) will be run with its auto-reloading
  enabled.

There's a convenience target for hacking regiond that starts everything up,
but with regiond in the foreground::

    $ make run+regiond

Apparently Django needs a lot of debugging ;)

Adding new dependencies
=======================

Since MAAS is distributed mainly as an Ubuntu package, all runtime
dependencies should be packaged, and we should develop with the packaged
version if possible. All dependencies, from a package or not, need to be
added to ``setup.py`` and ``buildout.cfg``, and the version specified in
``versions.cfg`` (``allow-picked-versions`` is disabled, hence ``buildout``
must be given precise version information).

If it is a development-only dependency (i.e.
only needed for the test suite, or for developers' convenience), simply
running ``buildout`` like this will make the necessary updates to
``versions.cfg``::

    $ ./bin/buildout -v buildout:allow-picked-versions=true

Adding new source files
=======================

When creating a new source file, a Python module or test for example,
always start with the appropriate template from the ``templates``
directory.

Database information
====================

MAAS uses Django_ to manage changes to the database schema.

.. _Django: https://www.djangoproject.com/

Be sure to have a look at `Django's migration documentation`_ before you
make any change.

.. _Django's migration documentation: https://docs.djangoproject.com/en/1.8/topics/migrations/

Changing the schema
^^^^^^^^^^^^^^^^^^^

Once you've made a model change (i.e. a change to a file in
``src/<application>/models/*.py``) you have to run Django's
`makemigrations`_ command to create a migration file that will be stored in
``src/<application>/migrations/builtin/``.

Note that if you want to add a new model class, you'll need to import it in
``src/<application>/models/__init__.py``.

.. _makemigrations: https://docs.djangoproject.com/en/1.8/ref/django-admin/#django-admin-makemigrations

Generate the migration script with::

    $ ./bin/maas-region makemigrations --name description_of_the_change maasserver

This will generate a migration module named
``src/maasserver/migrations/builtin/<number>_description_of_the_change.py``.
Don't forget to add that file to the project with::

    $ git add src/maasserver/migrations/builtin/<number>_description_of_the_change.py

To apply that migration, run::

    $ make syncdb

Performing data migration
^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to perform data migration, very much in the same way, you will
need to run Django's `makemigrations`_ command.
For instance, if you want to perform changes to the ``maasserver``
application, run::

    $ ./bin/maas-region makemigrations --empty --name description_of_the_change maasserver

This will generate a migration module named
``src/maasserver/migrations/builtin/<number>_description_of_the_change.py``.
You will need to edit that file and fill the ``operations`` list with the
operations that need to be performed. Again, don't forget to add that file
to the project::

    $ git add src/maasserver/migrations/builtin/<number>_description_of_the_change.py

Once the operations have been added, apply that migration with::

    $ make syncdb

Migrations before MAAS 2.0
^^^^^^^^^^^^^^^^^^^^^^^^^^

Versions of MAAS before 2.0 used South_ to perform database migrations. To
support upgrading from any version of MAAS before 2.0, the South_
migrations are kept, and on upgrade of MAAS those migrations will be run
before the new Django_ migrations. On a fresh installation of MAAS the
South_ migrations will be skipped, because the Django_ migrations already
provide the entire schema in the initial migration. All of this logic is
performed on upgrade by the `dbupgrade` command::

    $ bin/maas-region dbupgrade

In some testing cases you might need to always run the South_ migrations
before the Django_ migrations on a clean database. Using the `always-south`
option on the `dbupgrade` command allows this testing scenario::

    $ bin/maas-region dbupgrade --always-south

.. Note:: When the South_ migrations run, they are actually run under
   Django 1.6 and South, which are provided in the MAAS source code as a
   tarball. Located at
   ``src/maasserver/migrations/south/django16_south.tar.gz``, this file is
   extracted into a temporary folder and imported by MAAS to run the South
   migrations.

.. _South: http://south.aeracode.org/

Examining the database manually
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to get an interactive ``psql`` prompt, you can use `dbshell`_::

    $ bin/maas-region dbshell

.. _dbshell: https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell

If you need to do the same thing with a version of MAAS you have installed
from the package, you can use::

    $ sudo maas-region dbshell --installed

You can use the ``\dt`` command to list the tables in the MAAS database.
You can also execute arbitrary SQL. For example::

    maasdb=# select system_id, hostname from maasserver_node;
                     system_id                 |      hostname
    -------------------------------------------+--------------------
     node-709703ec-c304-11e4-804c-00163e32e5b5 | gross-debt.local
     node-7069401a-c304-11e4-a64e-00163e32e5b5 | round-attack.local
    (2 rows)

Viewing SQL queries during tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you need to view the SQL queries that are performed during a test, the
`LogSQL` fixture can be used to output all the queries during the test::

    from maasserver.fixture import LogSQL
    self.useFixture(LogSQL())

Sometimes you need to see where in the code each query was performed::

    from maasserver.fixture import LogSQL
    self.useFixture(LogSQL(include_stacktrace=True))

Documentation
=============

Use `reST`_ with the `convention for headings as used in the Python
documentation`_.

.. _reST: http://sphinx.pocoo.org/rest.html
.. _convention for headings as used in the Python documentation: http://sphinx.pocoo.org/rest.html#sections

Updating copyright notices
^^^^^^^^^^^^^^^^^^^^^^^^^^

Use the `Bazaar Copyright Updater`_::

    bzr branch lp:bzr-update-copyright ~/.bazaar/plugins/update_copyright
    make copyright

Then commit any changes.

.. _Bazaar Copyright Updater: https://launchpad.net/bzr-update-copyright

maas-2.4.2-7034-g2f5deb8b8.orig/INSTALL.txt:

.. -*- mode: rst -*-

Installing MAAS
===============

There are three main ways to install MAAS:

* :ref:`From a package repository <pkg-install>`.
* :ref:`As a fresh install from Ubuntu Server install media <disc-install>`.
* :ref:`Install MAAS in a LXC container <container-install>`.

MAAS Packages and Repositories
------------------------------

MAAS Packages
^^^^^^^^^^^^^

Installing MAAS from packages is straightforward. There are actually
several packages that go into making up a working MAAS install, but for
convenience, many of these have been gathered into a virtual package called
'maas' which will install the necessary components for a 'seed cloud', that
is a single server that will directly control a group of nodes. The main
packages are:

* ``maas`` - seed cloud setup, which includes both the region controller
  and the rack controller below.
* ``maas-region-controller`` - includes the web UI, API and database.
* ``maas-rack-controller`` - controls a group of machines under a rack or
  multiple racks, including DHCP management.
* ``maas-dhcp``/``maas-dns`` - required when managing dhcp/dns.
* ``maas-proxy`` - required to provide a MAAS proxy.

If you need to separate these services or want to deploy an additional rack
controller, you should install the corresponding packages individually (see
:ref:`the description of a typical setup ` for more background on how a
typical hardware setup might be arranged).

There are two suggested additional packages, 'maas-dhcp' and 'maas-dns'.
These set up MAAS-controlled DHCP and DNS services which greatly simplify
deployment if you are running a typical setup where the MAAS controller can
run the network. (Note: These **must** be installed if you later set the
options in the web interface to have MAAS manage DHCP/DNS.)

MAAS Package Repositories
^^^^^^^^^^^^^^^^^^^^^^^^^

While MAAS is available in the Ubuntu archives for each release of Ubuntu,
the version might not be the latest. However, if you would like to install
a newer version of MAAS (the latest stable release), this is available in
the following PPA:

* `ppa:maas/stable`_

.. Note:: The MAAS team also releases the latest development release of
   MAAS. The development release is available in `ppa:maas/next`_.
However, this is meant to be used for testing and at your own risk.

Adding the MAAS package repository is simple. At the command line, type::

    $ sudo add-apt-repository ppa:maas/stable

You will be asked to confirm whether you would like to add this repository,
and its key. Upon confirmation, type the following at the command line::

    $ sudo apt-get update

.. _ppa:maas/stable: https://launchpad.net/~maas/+archive/ubuntu/stable
.. _ppa:maas/next: https://launchpad.net/~maas/+archive/ubuntu/next

.. _pkg-install:

Installing MAAS from the command line
-------------------------------------

Installing a Single Node MAAS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

At the command line, type::

    $ sudo apt-get install maas

This will install both the MAAS Region Controller and the MAAS Rack
Controller, and will select sane defaults for the communication between the
Rack Controller and the Region Controller. After installation, you can
access the Web Interface. Then, there are just a few more setup steps:
:ref:`post_install`

Reconfiguring a MAAS Installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You will see a list of packages and a confirmation message to proceed. The
exact list will obviously depend on what you already have installed on your
server, but expect to add about 200MB of files.

The configuration for the MAAS controller will automatically run and pop up
this config screen:

.. image:: media/install_cluster-config.*

Here you will need to enter the hostname for where the region controller
can be contacted. In many scenarios, you may be running the region
controller (i.e. the web and API interface) from a different network
address, for example where a server has several network interfaces.

Adding Rack Controllers
^^^^^^^^^^^^^^^^^^^^^^^

If you would like to add additional MAAS Rack Controllers to your MAAS
setup, you can do so by following the instructions in
:doc:`rack-configuration`.

.. _disc-install:

Installing MAAS from Ubuntu Server boot media
---------------------------------------------

If you are installing MAAS as part of a fresh install, it is easiest to
choose the "Multiple Server install with MAAS" option from the installer
and have pretty much everything set up for you.

Boot from the Ubuntu Server media and you will be greeted with the usual
language selection screen:

.. image:: media/install_01.*

On the next screen, you will see there is an entry in the menu called
"Multiple server install with MAAS". Use the cursor keys to select this and
then press Enter.

.. image:: media/install_02.*

The installer then runs through the usual language and keyboard options.
Make your selections using Tab/Cursor keys/Enter to proceed through the
install. The installer will then load various drivers, which may take a
moment or two.

.. image:: media/install_03.*

The next screen asks for the hostname for this server. Choose something
appropriate for your network.

.. image:: media/install_04.*

Finally we get to the MAAS part! Here there are just two options. We want
to "Create a new MAAS on this server" so go ahead and choose that one.

.. image:: media/install_05.*

The install now continues as usual. Next you will be prompted to enter a
username. This will be the admin user for the actual server that MAAS will
be running on (not the same as the MAAS admin user!)

.. image:: media/install_06.*

As usual you will have the chance to encrypt your home directory. Continue
to make selections based on whatever settings suit your usage.

.. image:: media/install_07.*

After making selections and partitioning storage, the system software will
start to be installed. This part should only take a few minutes.

.. image:: media/install_09.*

Various packages will now be configured, including the package manager and
update manager.
It is important to set these up appropriately so you will receive timely
updates of the MAAS server software, as well as other essential services
that may run on this server.

.. image:: media/install_10.*

The configuration for MAAS will ask you to configure the host address of
the server. This should be the IP address you will use to connect to the
server (you may have additional interfaces e.g. to run node subnets).

.. image:: media/install_cluster-config.*

The next screen will confirm the web address that will be used for the web
interface.

.. image:: media/install_controller-config.*

After configuring any other packages, the installer will finally come to an
end. At this point you should eject the boot media.

.. image:: media/install_14.*

After restarting, you should be able to log in to the new server with the
information you supplied during the install. The MAAS software will run
automatically.

.. image:: media/install_15.*

**NOTE:** The maas-dhcp and maas-dns packages should be installed by
default, but on older releases of MAAS they won't be. If you want to have
MAAS run DHCP and DNS services, you should install these packages. Check
whether they are installed with::

    $ dpkg -l maas-dhcp maas-dns

If they are missing, then::

    $ sudo apt-get install maas-dhcp maas-dns

And then proceed to the post-install setup.

.. _container-install:

Installing MAAS in a LXC container
----------------------------------

Installing MAAS in a container is a typical setup for users who would like
to use their machine for other purposes at the same time as running MAAS.
To set up MAAS this way, you need to:

* Create a bridge (for example, it can be br0).
* Install LXD and ZFS.
* Create a container profile for MAAS.

Install LXD and ZFS
^^^^^^^^^^^^^^^^^^^

The first thing to do is to install LXD and ZFS::

    $ sudo apt-get install lxd zfsutils-linux
    $ sudo modprobe zfs
    $ sudo lxd init

Create a LXC profile for MAAS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

First, create a container profile for MAAS::

    $ lxc profile create maas

Second, bind the NIC inside the container (eth0) against the bridge on the
physical host (br0)::

    $ lxc profile device set maas eth0 parent br0

Finally, create a root disk for the container to use::

    $ lxc profile device add maas root disk path=/ pool=default

Launch LXD container
^^^^^^^^^^^^^^^^^^^^

Once the profile has been created, you can now launch the LXC container::

    $ lxc launch -p maas ubuntu-daily:18.04 bionic-maas

Install MAAS
^^^^^^^^^^^^

Once the container is running, you can now install MAAS. First you need to
access the container with::

    $ lxc exec bionic-maas bash

And you can proceed with the installation as above:
:ref:`From a package repository <pkg-install>`.

.. _post_install:

Post-Install tasks
==================

Your MAAS is now installed, but there are a few more things to be done. If
you now use a web browser to connect to the region controller, you should
see that MAAS is running, but there will also be some errors on the screen:

.. image:: media/install_web-init.*

The on screen messages will tell you that there are no boot images present,
and that you can't log in because there is no admin user.

Create a superuser account
--------------------------

Once MAAS is installed, you'll need to create an administrator account::

    $ sudo maas createadmin --username=root --email=MYEMAIL@EXAMPLE.COM

Substitute your own email address for MYEMAIL@EXAMPLE.COM. You may also use
a different username for your administrator account, but "root" is a common
convention and easy to remember. The command will prompt for a password to
assign to the new user.
You can run this command again for any further administrator accounts you
may wish to create, but you need at least one.

Log in on the server
--------------------

Looking at the region controller's main web page again, you should now see
a login screen. Log in using the user name and password which you have just
created.

.. image:: media/install-login.*

Import the boot images
----------------------

Since version 1.7, MAAS stores the boot images in the region controller's
database, from where the rack controllers will synchronise with the region
and pull images from the region to the rack's local disk. This process is
automatic, and MAAS will check for and download new Ubuntu images every
hour.

However, on a new installation you'll need to start the import process
manually once you have set up your MAAS region controller. There are two
ways to start the import: through the web user interface, or through the
remote API.

To do it in the web user interface, go to the Images tab, check the boxes
to say which images you want to import, and click the "Import images"
button at the bottom of the Ubuntu section.

.. image:: media/import-images.*

A message will appear to let you know that the import has started, and
after a while, the warnings about the lack of boot images will disappear.
It may take a long time, depending on the speed of your Internet
connection, for the import process to complete, as the images are several
hundred megabytes. The import process will only download images that have
changed since the last import. You can check the progress of the import by
hovering over the spinner next to each image.

The other way to start the import is through the :ref:`region-controller
API `, which you can invoke most conveniently through the
:ref:`command-line interface `.

To do this, connect to the MAAS API using the "maas" command-line client.
See :ref:`Logging in ` for how to get set up with this tool.
Then, run the command::

    $ maas my-maas-session boot-resources import

(Substitute a different profile name for 'my-maas-session' if you
have named yours something else.)

This will initiate the download, just as if you had clicked "Import
images" in the web user interface.

By default, the import is configured to download the most recent LTS
release only, for the amd64 architecture. Although this should suit
most needs, you can change the selections on the Images tab, or over
the API. Read :doc:`customise boot sources ` to see examples of how
to do that.

Speeding up repeated image imports by using a local mirror
----------------------------------------------------------

See :doc:`sstreams-mirror` for information on how to set up a mirror
and configure MAAS to use it.

Configure DHCP
--------------

To enable MAAS to control DHCP, you can either:

#. Follow the instructions at :doc:`rack-configuration` to use the
   web UI to set up your rack controller.

#. Use the command line interface `maas` by first :ref:`logging in to
   the API ` and then :ref:`following this procedure `.

Configure switches on the network
---------------------------------

Some switches use Spanning-Tree Protocol (STP) to negotiate a
loop-free path through a root bridge. While scanning, STP can make
each port wait up to 50 seconds before data is allowed to be sent on
the port. This delay in turn can cause problems with some
applications/protocols such as PXE, DHCP and DNS, of which MAAS makes
extensive use. To alleviate this problem, you should enable
`Portfast`_ for Cisco switches, or its equivalent on other vendor
equipment, which enables the ports to come up almost immediately.

.. _Portfast: https://www.symantec.com/business/support/index?page=content&id=HOWTO6019

Traffic between the region controller and rack controllers
----------------------------------------------------------

* Each rack controller must be able to:

  * Initiate TCP connections (for HTTP) to each region controller on
    port 80 or port 5240; the choice depends on the setting of the
    MAAS URL.

  * Initiate TCP connections (for RPC) to each region controller
    between port 5250 and 5259 inclusive. This permits up to 10
    ``maas-regiond`` processes on each region controller host. At
    present this is not configurable.

Once everything is set up and running, you are ready to
:doc:`start enlisting nodes `.

maas-2.4.2-7034-g2f5deb8b8.orig/LICENSE

MAAS is Copyright 2012-2015 Canonical Ltd.

Canonical Ltd ("Canonical") distributes the MAAS source code under
the GNU Affero General Public License, version 3 ("AGPLv3"). The full
text of this licence is given below. Third-party copyright in this
distribution is noted where applicable. All rights not expressly
granted are reserved.

=========================================================================

GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007 (http://www.gnu.org/licenses/agpl.html)

Copyright (C) 2007 Free Software Foundation, Inc. Everyone is
permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.

Preamble

The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server
software.

The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.
By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software. A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public. The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version. An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. 
This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU Affero General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. 
If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. 
For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". 
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. 
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. 
But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. 
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. 
The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. 
"Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. 
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Remote Network Interaction; Use with the GNU General Public License. Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. 
Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details. You should have received a copy of the GNU Affero General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. Also add information on how to contact you by electronic and paper mail. If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements. You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <https://www.gnu.org/licenses/>. ========================================================================= maas-2.4.2-7034-g2f5deb8b8.orig/MANIFEST.in000066400000000000000000000005311333555657500173060ustar00rootroot00000000000000graft src/maasserver/static graft src/maasserver/templates graft src/metadataserver/fixtures graft src/metadataserver/user_data/templates graft src/provisioningserver/templates include src/maasserver/migrations/south/django16_south_maas19.tar.gz include src/provisioningserver/drivers/power/*.xml include src/metadataserver/builtin_scripts/*.sh maas-2.4.2-7034-g2f5deb8b8.orig/Makefile000066400000000000000000000572251333555657500172140ustar00rootroot00000000000000python := python3 snapcraft := snapcraft # pkg_resources makes some incredible noise about version numbers. They # are not indications of bugs in MAAS so we silence them everywhere. 
export PYTHONWARNINGS = \ ignore:You have iterated over the result:RuntimeWarning:pkg_resources: # Network activity can be suppressed by setting offline=true (or any # non-empty string) at the command-line. ifeq ($(offline),) buildout := bin/buildout else buildout := bin/buildout buildout:offline=true endif # If offline has been selected, attempt to further block HTTP/HTTPS # activity by setting bogus proxies in the environment. ifneq ($(offline),) export http_proxy := broken export https_proxy := broken endif # MAAS SASS stylesheets. The first input file (maas-styles.css) imports # the others, so is treated specially in the target definitions. scss_input := src/maasserver/static/scss/build.scss scss_deps := $(wildcard src/maasserver/static/scss/_*.scss) scss_output := src/maasserver/static/css/build.css javascript_deps := \ $(shell find src -name '*.js' -not -path '*/maasserver/static/js/bundle/*') \ package.json \ webpack.config.js \ yarn.lock javascript_output := \ src/maasserver/static/js/bundle/maas-min.js \ src/maasserver/static/js/bundle/maas-min.js.map \ src/maasserver/static/js/bundle/vendor-min.js \ src/maasserver/static/js/bundle/vendor-min.js.map # Prefix commands with this when they need access to the database. # Remember to add a dependency on bin/database from the targets in # which those commands appear. dbrun := bin/database --preserve run -- # Path to install local nodejs. mkfile_dir := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST)))) nodejs_path := $(mkfile_dir)/include/nodejs/bin export PATH := $(nodejs_path):$(PATH) # For things that care, postgresfixture for example, we always want to # use the "maas" databases. export PGDATABASE := maas # For anything we start, we want to hint as to its root directory. 
export MAAS_ROOT := $(CURDIR)/.run build: \ bin/buildout \ bin/database \ bin/maas \ bin/maas-common \ bin/maas-rack \ bin/maas-region \ bin/rackd \ bin/regiond \ bin/test.cli \ bin/test.rack \ bin/test.region \ bin/test.region.legacy \ bin/test.testing \ bin/test.js \ bin/test.e2e \ bin/test.parallel \ bin/py bin/ipy \ pycharm all: build doc # Install all packages required for MAAS development & operation on # the system. This may prompt for a password. install-dependencies: release := $(shell lsb_release -c -s) install-dependencies: sudo DEBIAN_FRONTEND=noninteractive apt-get -y \ --no-install-recommends install $(shell sort -u \ $(addprefix required-packages/,base build dev doc) | sed '/^\#/d') sudo DEBIAN_FRONTEND=noninteractive apt-get -y \ purge $(shell sort -u required-packages/forbidden | sed '/^\#/d') .gitignore: sed 's:^[.]/:/:' $^ > $@ configure-buildout: utilities/configure-buildout sudoers: utilities/install-sudoers utilities/grant-nmap-permissions bin/buildout: bootstrap-buildout.py @utilities/configure-buildout --quiet $(python) bootstrap-buildout.py --allow-site-packages @touch --no-create $@ # Ensure it's newer than its dependencies. # buildout.cfg refers to .run and .run-e2e. 
buildout.cfg: .run .run-e2e bin/database: bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install database @touch --no-create $@ bin/test.parallel: \ bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install parallel-test @touch --no-create $@ bin/maas-region bin/regiond: \ bin/buildout buildout.cfg versions.cfg setup.py \ $(scss_output) $(javascript_output) $(buildout) install region @touch --no-create $@ bin/test.region: \ bin/buildout buildout.cfg versions.cfg setup.py \ bin/maas-region bin/maas-rack bin/maas-common $(buildout) install region-test @touch --no-create $@ bin/test.region.legacy: \ bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install region-test-legacy @touch --no-create $@ bin/maas: bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install cli @touch --no-create $@ bin/test.cli: bin/buildout buildout.cfg versions.cfg setup.py bin/maas $(buildout) install cli-test @touch --no-create $@ bin/test.js: bin/karma bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install js-test @touch --no-create $@ bin/test.e2e: \ bin/protractor bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install e2e-test @touch --no-create $@ # bin/maas-region is needed for South migration tests. bin/flake8 is needed for # checking lint and bin/node-sass is needed for checking css. 
bin/test.testing: \ bin/maas-region bin/flake8 bin/node-sass bin/buildout \ buildout.cfg versions.cfg setup.py $(buildout) install testing-test @touch --no-create $@ bin/maas-rack bin/rackd bin/maas-common: \ bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install rack @touch --no-create $@ bin/test.rack: \ bin/buildout buildout.cfg versions.cfg setup.py bin/maas-rack bin/py $(buildout) install rack-test @touch --no-create $@ bin/flake8: bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install flake8 @touch --no-create $@ bin/sphinx bin/sphinx-build: bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install sphinx @touch --no-create $@ bin/py bin/ipy: bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install repl @touch --no-create bin/py bin/ipy bin/coverage: bin/buildout buildout.cfg versions.cfg setup.py $(buildout) install coverage @touch --no-create bin/coverage include/nodejs/bin/node: mkdir -p include/nodejs wget -O include/nodejs/nodejs.tar.gz https://nodejs.org/dist/v8.9.3/node-v8.9.3-linux-x64.tar.gz tar -C include/nodejs/ -xf include/nodejs/nodejs.tar.gz --strip-components=1 include/nodejs/yarn.tar.gz: mkdir -p include/nodejs wget -O include/nodejs/yarn.tar.gz https://yarnpkg.com/latest.tar.gz include/nodejs/bin/yarn: include/nodejs/yarn.tar.gz tar -C include/nodejs/ -xf include/nodejs/yarn.tar.gz --strip-components=1 @touch --no-create $@ bin/yarn: include/nodejs/bin/yarn @mkdir -p bin ln -sf ../include/nodejs/bin/yarn $@ @touch --no-create $@ node_modules: include/nodejs/bin/node bin/yarn bin/yarn --frozen-lockfile @touch --no-create $@ define js_bins bin/karma bin/protractor bin/node-sass bin/webpack endef $(strip $(js_bins)): node_modules ln -sf ../node_modules/.bin/$(notdir $@) $@ @touch --no-create $@ js-update-macaroonbakery: mkdir -p src/maasserver/static/js/macaroon wget -O src/maasserver/static/js/macaroon/js-macaroon.js \ 
'https://raw.githubusercontent.com/juju/juju-gui/develop/jujugui/static/gui/src/app/assets/javascripts/js-macaroon.js' wget -O src/maasserver/static/js/macaroon/bakery.js \ 'https://raw.githubusercontent.com/juju/juju-gui/develop/jujugui/static/gui/src/app/jujulib/bakery.js' wget -O src/maasserver/static/js/macaroon/web-handler.js \ 'https://raw.githubusercontent.com/juju/juju-gui/develop/jujugui/static/gui/src/app/store/env/web-handler.js' define node_packages @babel/core @babel/preset-react @babel/preset-es2015 @types/prop-types @types/react @types/react-dom babel-loader@^8.0.0-beta.0 glob jasmine-core@=2.99.1 karma karma-chrome-launcher karma-failed-reporter karma-firefox-launcher karma-jasmine karma-ng-html2js-preprocessor karma-opera-launcher karma-phantomjs-launcher karma-sourcemap-loader node-sass phantomjs-prebuilt prop-types protractor react react-dom react2angular uglifyjs-webpack-plugin vanilla-framework vanilla-framework-react webpack webpack-cli webpack-merge endef force-yarn-update: bin/yarn $(RM) package.json yarn.lock bin/yarn add -D $(strip $(node_packages)) define test-scripts bin/test.cli bin/test.rack bin/test.region bin/test.region.legacy bin/test.testing bin/test.js endef lxd: utilities/configure-lxd-profile utilities/create-lxd-bionic-image test: bin/test.parallel bin/coverage @$(RM) .coverage .coverage.* @bin/test.parallel --with-coverage --subprocess-per-core @bin/coverage combine test-js: bin/test.js javascript @bin/test.js test-serial: $(strip $(test-scripts)) @bin/maas-region makemigrations --dry-run --exit && exit 1 ||: @$(RM) .coverage .coverage.* .failed $(foreach test,$^,$(test-template);) @test ! -f .failed test-failed: $(strip $(test-scripts)) @bin/maas-region makemigrations --dry-run --exit && exit 1 ||: @$(RM) .coverage .coverage.* .failed $(foreach test,$^,$(test-template-failed);) @test ! 
-f .failed clean-failed: $(RM) .noseids src/maasserver/testing/initial.maas_test.sql: bin/database syncdb bin/database --preserve run -- \ pg_dump maas --no-owner --no-privileges \ --format=plain > $@ test-initial-data: src/maasserver/testing/initial.maas_test.sql define test-template $(test) --with-xunit --xunit-file=xunit.$(notdir $(test)).xml || touch .failed endef define test-template-failed $(test) --with-xunit --xunit-file=xunit.$(notdir $(test)).xml --failed || \ $(test) --with-xunit --xunit-file=xunit.$(notdir $(test)).xml --failed || \ touch .failed endef smoke: lint bin/maas-region bin/test.rack @bin/maas-region makemigrations --dry-run --exit && exit 1 ||: @bin/test.rack --stop test-serial+coverage: export NOSE_WITH_COVERAGE = 1 test-serial+coverage: test-serial coverage-report: coverage/index.html sensible-browser $< > /dev/null 2>&1 & coverage.xml: bin/coverage .coverage bin/coverage xml -o $@ coverage/index.html: revno = $(or $(shell git rev-parse HEAD 2>/dev/null),???) coverage/index.html: bin/coverage .coverage @$(RM) -r $(@D) bin/coverage html \ --title "Coverage for MAAS rev $(revno)" \ --directory $(@D) .coverage: @$(error Use `$(MAKE) test` to generate coverage) lint: \ lint-py lint-py-complexity lint-py-imports \ lint-js lint-doc lint-rst # Only Unix line ends should be accepted @find src/ -type f -exec file "{}" ";" | \ awk '/CRLF/ { print $0; count++ } END {exit count}' || \ (echo "Lint check failed; run make format to fix DOS linefeeds."; false) pocketlint = $(call available,pocketlint,python-pocket-lint) # XXX jtv 2014-02-25: Clean up this lint, then make it part of "make lint". lint-css: sources = src/maasserver/static/css lint-css: @find $(sources) -type f \ -print0 | xargs -r0 $(pocketlint) --max-length=120 # Python lint checks are time-intensive, but flake8 now knows how to run # parallel jobs, and does so by default. lint-py: sources = setup.py src lint-py: bin/flake8 @find $(sources) -name '*.py' \ ! -path '*/migrations/*' ! 
-path '*/south_migrations/*' -print0 \ | xargs -r0 bin/flake8 --config=.flake8 # Ignore tests when checking complexity. The maximum complexity ought to # be close to 10 but MAAS has many functions that are over that so we # start with a much higher number. Over time we can ratchet it down. lint-py-complexity: maximum=26 lint-py-complexity: sources = setup.py src lint-py-complexity: bin/flake8 @find $(sources) -name '*.py' \ ! -path '*/migrations/*' ! -path '*/south_migrations/*' \ ! -path '*/tests/*' ! -path '*/testing/*' ! -name 'testing.py' \ -print0 | xargs -r0 bin/flake8 --config=.flake8 --max-complexity=$(maximum) # Statically check imports against policy. lint-py-imports: sources = setup.py src lint-py-imports: @utilities/check-imports @find $(sources) -name '*.py' \ ! -path '*/migrations/*' ! -path '*/south_migrations/*' \ -print0 | xargs -r0 utilities/find-early-imports lint-doc: @utilities/doc-lint # JavaScript lint is checked in parallel for speed. The -n20 -P4 setting # worked well on a multicore SSD machine with the files cached, roughly # doubling the speed, but it may need tuning for slower systems or cold caches. lint-js: sources = src/maasserver/static/js lint-js: @find $(sources) -type f -not -path '*/angular/3rdparty/*' -a \ -not -path '*-min.js' -a -not -name js-macaroon.js -a \ '(' -name '*.html' -o -name '*.js' ')' -print0 \ | xargs -r0 -n20 -P4 $(pocketlint) # Apply automated formatting to all Python files. 
format: sources = $(wildcard *.py contrib/*.py) src utilities etc format: @find $(sources) -name '*.py' -print0 | xargs -r0 utilities/format-imports @find src/ -type f -exec file "{}" ";" | grep CRLF | cut -d ':' -f1 | xargs dos2unix check: clean test docs/api.rst: bin/maas-region src/maasserver/api/doc_handler.py syncdb bin/maas-region generate_api_doc > $@ sampledata: bin/maas-region bin/database syncdb $(dbrun) bin/maas-region generate_sample_data doc: bin/sphinx docs/api.rst bin/sphinx docs/_build/html/index.html: doc doc-browse: docs/_build/html/index.html sensible-browser $< > /dev/null 2>&1 & doc-with-versions: bin/sphinx docs/api.rst $(MAKE) -C docs/_build SPHINXOPTS="-A add_version_switcher=true" html man: $(patsubst docs/man/%.rst,man/%,$(wildcard docs/man/*.rst)) man/%: docs/man/%.rst | bin/sphinx-build bin/sphinx-build -b man docs man $^ .run .run-e2e: run-skel @cp --archive --verbose $^ $@ .idea: contrib/pycharm @cp --archive --verbose $^ $@ pycharm: .idea styles: $(scss_output) force-styles: clean-styles $(scss_output) $(scss_output): bin/node-sass $(scss_input) $(scss_deps) bin/node-sass --include-path=src/maasserver/static/scss \ --output-style compressed $(scss_input) -o $(dir $@) clean-styles: $(RM) $(scss_output) javascript: node_modules $(javascript_output) force-javascript: clean-javascript node_modules $(javascript_output) lander-javascript: force-javascript git update-index -q --no-assume-unchanged $(strip $(javascript_output)) 2> /dev/null || true git add -f $(strip $(javascript_output)) 2> /dev/null || true # The $(subst ...) uses a pattern rule to ensure Webpack runs just once, # even if all four output files are out-of-date. $(subst .,%,$(javascript_output)): $(javascript_deps) node_modules/.bin/webpack @touch --no-create $(strip $(javascript_output)) @git update-index -q --assume-unchanged $(strip $(javascript_output)) 2> /dev/null || true clean-javascript: $(RM) -r src/maasserver/static/js/bundle clean: stop clean-failed find . 
-type f -name '*.py[co]' -print0 | xargs -r0 $(RM) find . -type d -name '__pycache__' -print0 | xargs -r0 $(RM) -r find . -type f -name '*~' -print0 | xargs -r0 $(RM) $(RM) -r media/demo/* media/development media/development.* $(RM) src/maasserver/data/templates.py $(RM) *.log $(RM) docs/api.rst $(RM) -r docs/_autosummary docs/_build $(RM) -r man/.doctrees $(RM) .coverage .coverage.* coverage.xml $(RM) -r coverage $(RM) -r .hypothesis $(RM) -r bin include lib local node_modules $(RM) -r eggs develop-eggs $(RM) -r build dist logs/* parts $(RM) tags TAGS .installed.cfg $(RM) -r *.egg *.egg-info src/*.egg-info $(RM) -r services/*/supervise $(RM) -r .run .run-e2e $(RM) -r .idea $(RM) xunit.*.xml $(RM) .failed clean+db: clean while fuser db --kill -TERM; do sleep 1; done $(RM) -r db $(RM) .db.lock distclean: clean $(warning 'distclean' is deprecated; use 'clean') harness: bin/maas-region bin/database $(dbrun) bin/maas-region shell \ --settings=maasserver.djangosettings.demo dbharness: bin/database bin/database --preserve shell syncdb: bin/maas-region bin/database $(dbrun) bin/maas-region dbupgrade define phony_targets build check clean clean+db clean-failed clean-javascript clean-styles configure-buildout coverage-report dbharness distclean doc doc-browse force-styles force-javascript force-yarn-update format harness install-dependencies javascript lander-javascript lint lint-css lint-doc lint-js lint-py lint-py-complexity lint-py-imports lint-rst lxd man print-% sampledata smoke styles sudoers syncdb sync-dev-snap test test+lxd test-failed test-initial-data test-serial test-serial+coverage endef # # Development services. # service_names_region := database dns regiond reloader service_names_rack := rackd reloader service_names_all := $(service_names_region) $(service_names_rack) # The following template is intended to be used with `call`, and it # accepts a single argument: a target name. 
The target name must # correspond to a service action (see "Pseudo-magic targets" below). A # region- and rack-specific variant of the target will be created, in # addition to the target itself. These can be used to apply the service # action to the region services, the rack services, or all services, at # the same time. define service_template $(1)-region: $(patsubst %,services/%/@$(1),$(service_names_region)) $(1)-rack: $(patsubst %,services/%/@$(1),$(service_names_rack)) $(1): $(1)-region $(1)-rack phony_services_targets += $(1)-region $(1)-rack $(1) endef # Expand out aggregate service targets using `service_template`. $(eval $(call service_template,pause)) $(eval $(call service_template,restart)) $(eval $(call service_template,start)) $(eval $(call service_template,status)) $(eval $(call service_template,stop)) $(eval $(call service_template,supervise)) # The `run` targets do not fit into the mould of the others. run-region: @services/run $(service_names_region) run-rack: @services/run $(service_names_rack) run: @services/run $(service_names_all) phony_services_targets += run-region run-rack run phony_services_targets += run+regiond # Convenient variables and functions for service control. setlock = $(call available,setlock,daemontools) supervise = $(call available,supervise,daemontools) svc = $(call available,svc,daemontools) svok = $(call available,svok,daemontools) svstat = $(call available,svstat,daemontools) service_lock = $(setlock) -n /run/lock/maas.dev.$(firstword $(1)) # Pseudo-magic targets for controlling individual services. 
services/%/@run: services/%/@stop services/%/@deps @$(call service_lock, $*) services/$*/run services/%/@start: services/%/@supervise @$(svc) -u $(@D) services/%/@pause: services/%/@supervise @$(svc) -d $(@D) services/%/@status: @$(svstat) $(@D) services/%/@restart: services/%/@supervise @$(svc) -du $(@D) services/%/@stop: @if $(svok) $(@D); then $(svc) -dx $(@D); fi @while $(svok) $(@D); do sleep 0.1; done services/%/@supervise: services/%/@deps @mkdir -p logs/$* @touch $(@D)/down @if ! $(svok) $(@D); then \ logdir=$(CURDIR)/logs/$* \ $(call service_lock, $*) $(supervise) $(@D) & fi @while ! $(svok) $(@D); do sleep 0.1; done # Dependencies for individual services. services/dns/@deps: bin/py services/database/@deps: bin/database services/rackd/@deps: bin/rackd bin/maas-rack bin/maas-common services/reloader/@deps: services/regiond/@deps: bin/maas-region bin/maas-rack bin/maas-common # # Package building # # This ought to be as simple as using # gbp buildpackage --git-debian-branch=packaging # but it is not: without investing more time, we manually pre-build the source # tree and run debuild. 
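The `@stop` and `@supervise` recipes above poll daemontools' `svok` in 0.1-second steps until the service settles. A minimal Python sketch of that wait loop (a hypothetical helper, not code from the MAAS tree):

```python
import time


def wait_until(predicate, interval=0.1, timeout=10.0):
    """Poll `predicate` until it returns true, mirroring the Makefile's
    `while ! $(svok) $(@D); do sleep 0.1; done` loops.  The timeout is
    an addition here so a wedged service cannot block the caller forever.
    """
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)
```

In the Makefile the condition is `svok $(@D)`, so the equivalent call would be something like `wait_until(lambda: subprocess.call(["svok", "services/regiond"]) == 0)`.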
packaging-repo = https://git.launchpad.net/maas/ packaging-branch = "packaging" packaging-build-area := $(abspath ../build-area) packaging-version = $(shell \ utilities/calc-snap-version | sed s/[-]snap//) tmp_changelog := $(shell tempfile) packaging-dir := maas_$(packaging-version) packaging-orig-targz := $(packaging-dir).orig.tar.gz -packaging-clean: rm -rf $(packaging-build-area) mkdir -p $(packaging-build-area) -packaging-export-orig: $(packaging-build-area) git archive --format=tar.gz $(packaging-export-extra) \ --prefix=$(packaging-dir)/ \ -o $(packaging-build-area)/$(packaging-orig-targz) HEAD -packaging-export-orig-uncommitted: $(packaging-build-area) git ls-files --others --exclude-standard --cached | grep -v '^debian' | \ xargs tar --transform 's,^,$(packaging-dir)/,' -czf $(packaging-build-area)/$(packaging-orig-targz) -packaging-export: -packaging-export-orig$(if $(export-uncommitted),-uncommitted,) -package-tree: -packaging-export (cd $(packaging-build-area) && tar xfz $(packaging-orig-targz)) (cp -r debian $(packaging-build-area)/$(packaging-dir)) echo "maas ($(packaging-version)-0ubuntu1) UNRELEASED; urgency=medium" \ > $(tmp_changelog) tail -n +2 debian/changelog >> $(tmp_changelog) mv $(tmp_changelog) $(packaging-build-area)/$(packaging-dir)/debian/changelog package: javascript -packaging-clean -package-tree (cd $(packaging-build-area)/$(packaging-dir) && debuild -uc -us) @echo Binary packages built, see $(packaging-build-area). # To build binary packages from uncommitted changes call "make package-dev". package-dev: make export-uncommitted=yes package source-package: -package-tree (cd $(packaging-build-area)/$(packaging-dir) && debuild -S -uc -us) @echo Source package built, see $(packaging-build-area). # To build source packages from uncommitted changes call "make package-dev". source-package-dev: make export-uncommitted=yes source-package # To rebuild packages (i.e. 
from a clean slate): package-rebuild: package-clean package package-dev-rebuild: package-clean package-dev source-package-rebuild: source-package-clean source-package source-package-dev-rebuild: source-package-clean source-package-dev # To clean built packages away: package-clean: patterns := *.deb *.udeb *.dsc *.build *.changes package-clean: patterns += *.debian.tar.xz *.orig.tar.gz package-clean: @$(RM) -v $(addprefix $(packaging-build-area)/,$(patterns)) source-package-clean: patterns := *.dsc *.build *.changes source-package-clean: patterns += *.debian.tar.xz *.orig.tar.gz source-package-clean: @$(RM) -v $(addprefix $(packaging-build-area)/,$(patterns)) # Debugging target. Allows printing of any variable. # As an example, try: # make print-scss_input print-%: @echo $* = $($*) define phony_package_targets -packaging-export-orig -packaging-export-orig-uncommitted -packaging-export -packaging-fetch -packaging-pull -packaging-refresh -package-tree package package-clean package-dev package-dev-rebuild package-rebuild source-package source-package-clean source-package-dev source-package-dev-rebuild source-package-rebuild endef # # Snap building # snap-clean: $(snapcraft) clean snap: $(snapcraft) snap-cleanbuild: $(snapcraft) cleanbuild define phony_snap_targets snap snap-clean snap-cleanbuild endef # # Helpers for using the snap for development testing. # build/dev-snap: ## Check out a clean version of the working tree. git checkout-index -a --prefix build/dev-snap/ build/dev-snap/prime: build/dev-snap cd build/dev-snap && $(snapcraft) prime sync-dev-snap: build/dev-snap/prime rsync -v --exclude 'maastesting' --exclude 'tests' --exclude 'testing' \ --exclude '*.pyc' --exclude '__pycache__' -r -u -l -t -W -L \ src/ build/dev-snap/prime/lib/python3.6/site-packages/ rsync -v -r -u -l -t -W -L \ src/maasserver/static/ build/dev-snap/prime/usr/share/maas/web/static/ # # Phony stuff. 
# define phony $(phony_package_targets) $(phony_services_targets) $(phony_snap_targets) $(phony_targets) endef phony := $(sort $(strip $(phony))) .PHONY: $(phony) FORCE # # Secondary stuff. # # These are intermediate files that we want to keep around in the event # that they get built. By declaring them here we're also telling Make # that their absense is okay if a rule target is newer than the rule's # other prerequisites; i.e. don't build them. # # For example, converting foo.scss to foo.css might require bin/node-sass. If # foo.css is newer than foo.scss we know that we don't need to perform that # conversion, and hence don't need bin/node-sass. We declare bin/node-sass as # secondary so that Make knows this too. # define secondary_binaries bin/py bin/buildout bin/node-sass bin/webpack bin/sphinx bin/sphinx-build endef secondary = $(sort $(strip $(secondary_binaries))) .SECONDARY: $(secondary) # # Functions. # # Check if a command is found on PATH. Raise an error if not, citing # the package to install. Return the command otherwise. # Usage: $(call available,,) define available $(if $(shell which $(1)),$(1),$(error $(1) not found; \ install it with 'sudo apt-get install $(2)')) endef maas-2.4.2-7034-g2f5deb8b8.orig/README.rst000066400000000000000000000023571333555657500172470ustar00rootroot00000000000000************************ MAAS: Metal as a Service ************************ Metal as a Service -- MAAS -- lets you treat physical servers like virtual machines in the cloud. Rather than having to manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource. What does that mean in practice? Tell MAAS about the machines you want it to manage and it will boot them, check the hardware's okay, and have them waiting for when you need them. You can then pull nodes up, tear them down and redeploy them at will; just as you can with virtual machines in the cloud. 
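The `available` function defined at the end of the Makefile above checks that a command exists on PATH before using it, citing the Ubuntu package to install otherwise. A rough Python equivalent built on `shutil.which` (illustrative only, not part of the MAAS tree):

```python
import shutil


def available(command, package):
    """Return `command` if it is found on PATH; otherwise raise an error
    naming the package to install, like the Makefile's `available`.
    """
    if shutil.which(command) is None:
        raise RuntimeError(
            "%s not found; install it with 'sudo apt-get install %s'"
            % (command, package))
    return command
```

Like `$(shell which $(1))` in the Make version, `shutil.which` consults the current PATH on every call and also accepts an absolute path.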
When you're ready to deploy a service, MAAS gives Juju the nodes it needs to power that service. It's as simple as that: no need to manually provision, check and, afterwards, clean-up. As your needs change, you can easily scale services up or down. Need more power for your Hadoop cluster for a few hours? Simply tear down one of your Nova compute nodes and redeploy it to Hadoop. When you're done, it's just as easy to give the node back to Nova. MAAS is ideal where you want the flexibility of the cloud, and the hassle-free power of Juju charms, but you need to deploy to bare metal. For more information see the `MAAS guide`_. .. _MAAS guide: http://maas.io/ maas-2.4.2-7034-g2f5deb8b8.orig/bootstrap-buildout.py000066400000000000000000000164421333555657500217740ustar00rootroot00000000000000############################################################################## # # Copyright (c) 2006 Zope Foundation and Contributors. # All Rights Reserved. # # This software is subject to the provisions of the Zope Public License, # Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution. # THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED # WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED # WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS # FOR A PARTICULAR PURPOSE. # ############################################################################## """Bootstrap a buildout-based project Simply run this script in a directory containing a buildout.cfg. The script accepts buildout command-line options, so you can use the -c option to specify an alternate configuration file. """ from optparse import OptionParser import os import shutil import sys import tempfile __version__ = '2015-07-01' # See zc.buildout's changelog if this version is up to date. tmpeggs = tempfile.mkdtemp(prefix='bootstrap-') usage = '''\ [DESIRED PYTHON FOR BUILDOUT] bootstrap.py [options] Bootstraps a buildout-based project. 
Simply run this script in a directory containing a buildout.cfg, using the Python that you want bin/buildout to use. Note that by using --find-links to point to local resources, you can keep this script from going over the network. ''' parser = OptionParser(usage=usage) parser.add_option("--version", action="store_true", default=False, help=("Return bootstrap.py version.")) parser.add_option("-t", "--accept-buildout-test-releases", dest='accept_buildout_test_releases', action="store_true", default=False, help=("Normally, if you do not specify a --version, the " "bootstrap script and buildout gets the newest " "*final* versions of zc.buildout and its recipes and " "extensions for you. If you use this flag, " "bootstrap and buildout will get the newest releases " "even if they are alphas or betas.")) parser.add_option("-c", "--config-file", help=("Specify the path to the buildout configuration " "file to be used.")) parser.add_option("-f", "--find-links", help=("Specify a URL to search for buildout releases")) parser.add_option("--allow-site-packages", action="store_true", default=False, help=("Let bootstrap.py use existing site packages")) parser.add_option("--buildout-version", help="Use a specific zc.buildout version") parser.add_option("--setuptools-version", help="Use a specific setuptools version") parser.add_option("--setuptools-to-dir", help=("Allow for re-use of existing directory of " "setuptools versions")) options, args = parser.parse_args() if options.version: print("bootstrap.py version %s" % __version__) sys.exit(0) ###################################################################### # load/install setuptools try: from urllib.request import urlopen except ImportError: from urllib2 import urlopen ez = {} if os.path.exists('ez_setup.py'): exec(open('ez_setup.py').read(), ez) else: exec(urlopen('https://bootstrap.pypa.io/ez_setup.py').read(), ez) if not options.allow_site_packages: # ez_setup imports site, which adds site packages # this will remove 
them from the path to ensure that incompatible versions # of setuptools are not in the path import site # inside a virtualenv, there is no 'getsitepackages'. # We can't remove these reliably if hasattr(site, 'getsitepackages'): for sitepackage_path in site.getsitepackages(): # Strip all site-packages directories from sys.path that # are not sys.prefix; this is because on Windows # sys.prefix is a site-package directory. if sitepackage_path != sys.prefix: sys.path[:] = [x for x in sys.path if sitepackage_path not in x] setup_args = dict(to_dir=tmpeggs, download_delay=0) if options.setuptools_version is not None: setup_args['version'] = options.setuptools_version if options.setuptools_to_dir is not None: setup_args['to_dir'] = options.setuptools_to_dir ez['use_setuptools'](**setup_args) import setuptools import pkg_resources # This does not (always?) update the default working set. We will # do it. for path in sys.path: if path not in pkg_resources.working_set.entries: pkg_resources.working_set.add_entry(path) ###################################################################### # Install buildout ws = pkg_resources.working_set setuptools_path = ws.find( pkg_resources.Requirement.parse('setuptools')).location # Fix sys.path here as easy_install.pth added before PYTHONPATH cmd = [sys.executable, '-c', 'import sys; sys.path[0:0] = [%r]; ' % setuptools_path + 'from setuptools.command.easy_install import main; main()', '-mZqNxd', tmpeggs] find_links = os.environ.get( 'bootstrap-testing-find-links', options.find_links or ('http://downloads.buildout.org/' if options.accept_buildout_test_releases else None) ) if find_links: cmd.extend(['-f', find_links]) requirement = 'zc.buildout' version = options.buildout_version if version is None and not options.accept_buildout_test_releases: # Figure out the most recent final version of zc.buildout. 
import setuptools.package_index _final_parts = '*final-', '*final' def _final_version(parsed_version): try: return not parsed_version.is_prerelease except AttributeError: # Older setuptools for part in parsed_version: if (part[:1] == '*') and (part not in _final_parts): return False return True index = setuptools.package_index.PackageIndex( search_path=[setuptools_path]) if find_links: index.add_find_links((find_links,)) req = pkg_resources.Requirement.parse(requirement) if index.obtain(req) is not None: best = [] bestv = None for dist in index[req.project_name]: distv = dist.parsed_version if _final_version(distv): if bestv is None or distv > bestv: best = [dist] bestv = distv elif distv == bestv: best.append(dist) if best: best.sort() version = best[-1].version if version: requirement = '=='.join((requirement, version)) cmd.append(requirement) import subprocess if subprocess.call(cmd) != 0: raise Exception( "Failed to execute command:\n%s" % repr(cmd)[1:-1]) ###################################################################### # Import and run buildout ws.add_entry(tmpeggs) ws.require(requirement) import zc.buildout.buildout if not [a for a in args if '=' not in a]: args.append('bootstrap') # if -c was provided, we push it back into args for buildout' main function if options.config_file is not None: args[0:0] = ['-c', options.config_file] zc.buildout.buildout.main(args) shutil.rmtree(tmpeggs) maas-2.4.2-7034-g2f5deb8b8.orig/buildout.cfg000066400000000000000000000232051333555657500200630ustar00rootroot00000000000000[buildout] parts = cli cli-test config-test coverage flake8 parallel-test rack rack-test region region-test region-test-legacy repl sphinx testing-test versions = versions extends = versions.cfg offline = false newest = false # Uncomment the following two lines and set allow-picked-versions=true # to automatically update versions.cfg when building recipes. 
# extensions = buildout-versions # buildout_versions_file = versions.cfg prefer-final = true allow-picked-versions = false [common] extra-paths = ${buildout:directory}/etc ${buildout:directory}/src ${buildout:directory} test-eggs = blessings coverage fixtures hypothesis ipdb junitxml nose nose-timer postgresfixture python-subunit testresources testscenarios testtools initialization = ${common:path-munge} ${common:warnings} ${common:environment} path-munge = import pathlib, sys # Eliminate argparse usage outside of the standard library. This is # needed because some deps unittest2 explicitly require # argparse, which zc.buildout then dutifully installs. Unfortunately # argparse 1.1 from PyPI differs substantially to argparse 1.1 in the # standard library. For consistency we want the latter. p_argparse_egg = lambda path: pathlib.Path(path).match("*/argparse-*.egg") sys.path[:] = [path for path in sys.path if not p_argparse_egg(path)] # Sort system paths towards the end of sys.path so that deps defined # here are used in preference to those installed system-wide. p_sys_prefix = lambda path, p=pathlib.Path: p(sys.prefix) in p(path).parents sys.path.sort(key=p_sys_prefix) environment = from os import environ environ.setdefault("MAAS_ROOT", "${buildout:directory}/.run") warnings = from warnings import filterwarnings filterwarnings("ignore", category=RuntimeWarning, module="pkg_resources") asyncio-reactor = # Install the asyncio reactor with uvloop. import asyncio import uvloop from twisted.internet import asyncioreactor, error asyncio.set_event_loop_policy(uvloop.EventLoopPolicy()) try: asyncioreactor.install() except error.ReactorAlreadyInstalledError: pass inject-test-options = # When running tests from a console show only dots, but when running # headless increase verbosity so we can see the test being run from a # log file. An `options` list must be defined ahead of the use of this # snippet. 
options += ( ["--verbosity=1"] if sys.stdout.isatty() else ["--verbosity=2"] ) sys.argv[1:1] = options [database] recipe = zc.recipe.egg eggs = postgresfixture extra-paths = ${common:extra-paths} initialization = ${common:path-munge} interpreter = entry-points = database=postgresfixture.main:main scripts = database [parallel-test] recipe = zc.recipe.egg eggs = ${common:test-eggs} entry-points = test.parallel=maastesting.parallel:main scripts = test.parallel extra-paths = ${common:extra-paths} initialization = ${common:initialization} ${common:asyncio-reactor} [region] recipe = zc.recipe.egg test-eggs = ${common:test-eggs} selenium eggs = ${region:test-eggs} entry-points = maas-region=maasserver:execute_from_command_line regiond=maasserver.server:run initialization = ${common:initialization} environ.setdefault("DJANGO_SETTINGS_MODULE", "maasserver.djangosettings.development") environ.setdefault("MAAS_DEBUG_QUERIES", "1") scripts = maas-region regiond extra-paths = ${common:extra-paths} [region-test] recipe = zc.recipe.egg eggs = ${region:eggs} ${common:test-eggs} entry-points = test.region=maastesting.noseplug:main initialization = ${region:initialization} ${common:asyncio-reactor} # Prevent query logging as it affects the tests. if 'MAAS_DEBUG_QUERIES' in environ: del environ['MAAS_DEBUG_QUERIES'] options = [ "--with-crochet", "--with-resources", "--with-scenarios", "--with-select", "--select-dir=src/maasserver", "--select-dir=src/metadataserver", "--cover-package=maas,maasserver,metadataserver", "--cover-branches", # Reduce the logging level to INFO here as # DebuggingLoggerMiddleware logs the content of all the # requests at DEBUG level: we don't want this in the # tests as it's too verbose. "--logging-level=INFO", "--logging-clear-handlers", # Do not run tests tagged "legacy". "-a", "!legacy", ] ${common:inject-test-options} # Configure logging. TODO: Do this in a plugin. 
  from provisioningserver import logger
  logger.configure(mode=logger.LoggingMode.COMMAND)
  # Limit concurrency in all thread-pools to ONE.
  from maasserver.utils import threads
  threads.install_default_pool(maxthreads=1)
  threads.install_database_unpool(maxthreads=1)
  # Disable all database connections in the reactor.
  from maasserver.utils import orm
  from twisted.internet import reactor
  assert not reactor.running, "The reactor has been started too early."
  reactor.callFromThread(orm.disable_all_database_connections)
  # Last and least, configure Django.
  import django; django.setup()
scripts = test.region
extra-paths =
  ${region:extra-paths}

[region-test-legacy]
recipe = zc.recipe.egg
eggs =
  ${region:eggs}
entry-points =
  test.region.legacy=maasserver:execute_from_command_line
initialization =
  ${region:initialization}
  ${common:asyncio-reactor}
  environ.setdefault("MAAS_PREVENT_MIGRATIONS", "1")
  # Prevent query logging as it affects the tests.
  if 'MAAS_DEBUG_QUERIES' in environ:
      del environ['MAAS_DEBUG_QUERIES']
  options = [
      "test",
      "--noinput",
      "--with-crochet",
      "--with-scenarios",
      "--with-select",
      "--select-dir=src/maasserver",
      "--select-dir=src/metadataserver",
      "--cover-package=maas,maasserver,metadataserver",
      "--cover-branches",
      # Reduce the logging level to INFO here as
      # DebuggingLoggerMiddleware logs the content of all the
      # requests at DEBUG level: we don't want this in the
      # tests as it's too verbose.
      "--logging-level=INFO",
      "--logging-clear-handlers",
      # Run only tests tagged "legacy".
"-a", "legacy", ] ${common:inject-test-options} scripts = test.region.legacy extra-paths = ${region:extra-paths} [cli] recipe = zc.recipe.egg eggs = ${region:eggs} initialization = ${common:path-munge} entry-points = maas=maascli:main extra-paths = ${common:extra-paths} scripts = maas [cli-test] recipe = zc.recipe.egg eggs = ${cli:eggs} ${common:test-eggs} entry-points = test.cli=maastesting.noseplug:main initialization = ${common:path-munge} ${common:warnings} options = [ "--with-resources", "--with-scenarios", "--with-select", "--select-dir=src/apiclient", "--select-dir=src/maascli", "--cover-package=apiclient,maascli", "--cover-branches", ] ${common:inject-test-options} extra-paths = ${cli:extra-paths} scripts = test.cli [js-test] recipe = zc.recipe.egg eggs = ${common:test-eggs} entry-points = test.js=maastesting.karma:run_karma extra-paths = ${common:extra-paths} scripts = test.js initialization = ${common:initialization} [testing-test] recipe = zc.recipe.egg eggs = ${common:test-eggs} entry-points = test.testing=maastesting.noseplug:main initialization = ${common:path-munge} ${common:warnings} ${common:asyncio-reactor} options = [ "--with-resources", "--with-scenarios", "--with-select", "--select-dir=src/maastesting", "--cover-package=maastesting", "--cover-branches", ] ${common:inject-test-options} extra-paths = ${common:extra-paths} scripts = test.testing [rack] recipe = zc.recipe.egg eggs = ${common:test-eggs} entry-points = maas-rack=provisioningserver.__main__:main maas-common=provisioningserver.__main__:main rackd=provisioningserver.server:run extra-paths = ${common:extra-paths} scripts = maas-rack maas-common rackd initialization = ${common:initialization} [rack-test] recipe = zc.recipe.egg eggs = ${rack:eggs} ${common:test-eggs} entry-points = test.rack=maastesting.noseplug:main initialization = ${common:initialization} ${common:asyncio-reactor} options = [ "--with-crochet", "--crochet-no-setup", "--with-resources", "--with-scenarios", 
"--with-select", "--select-dir=src/provisioningserver", "--cover-package=provisioningserver", "--cover-branches", ] ${common:inject-test-options} extra-paths = ${rack:extra-paths} scripts = test.rack [e2e-test] recipe = zc.recipe.egg eggs = ${region:test-eggs} entry-points = test.e2e=maastesting.protractor.runner:run_protractor extra-paths = ${common:extra-paths} scripts = test.e2e initialization = ${common:path-munge} from os import environ environ.setdefault("MAAS_ROOT", "${buildout:directory}/.run-e2e") environ.setdefault("DJANGO_SETTINGS_MODULE", "maasserver.djangosettings.development") environ.setdefault("DEV_DB_NAME", "test_maas_e2e") environ.setdefault("MAAS_PREVENT_MIGRATIONS", "1") [flake8] recipe = zc.recipe.egg eggs = flake8 entry-points = flake8=flake8.main.cli:main initialization = ${common:path-munge} ${common:warnings} [coverage] recipe = zc.recipe.egg eggs = coverage entry-points = coverage=coverage.cmdline:main initialization = ${common:path-munge} ${common:warnings} scripts = coverage [sphinx] recipe = collective.recipe.sphinxbuilder source = ${buildout:directory}/docs build = ${buildout:directory}/docs/_build extra-paths = ${common:extra-paths} eggs = ${region:eggs} ${rack:eggs} # Convenient REPLs with all eggs available. [repl] recipe = zc.recipe.egg eggs = ${region:eggs} ${rack:eggs} ${common:test-eggs} extra-paths = ${common:extra-paths} interpreter = py scripts = ipy entry-points = ipy=IPython.terminal.ipapp:launch_new_instance initialization = ${common:initialization} maas-2.4.2-7034-g2f5deb8b8.orig/contrib/000077500000000000000000000000001333555657500172115ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/contrib/maas-http.conf000066400000000000000000000024361333555657500217630ustar00rootroot00000000000000 SSLEngine On # Do not rely on these certificates, generate your own. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key ExpiresActive On ExpiresByType text/javascript "access plus 1 hours" ExpiresByType application/javascript "access plus 1 hours" ExpiresByType application/x-javascript "access plus 1 hours" ExpiresByType text/css "access plus 1 hours" ExpiresByType image/gif "access plus 1 hours" ExpiresByType image/jpeg "access plus 1 hours" ExpiresByType image/png "access plus 1 hours" Alias /MAAS/static/ /usr/share/maas/web/static/ ProxyPreserveHost on ProxyPass /MAAS/ws "ws://localhost:5240/MAAS/ws" ProxyPass /MAAS/static/ ! ProxyPass /MAAS/ http://localhost:5240/MAAS/ ProxyPass /MAAS http://localhost:5240/MAAS/ RewriteEngine On # Redirect (permanently) requests for /MAAS to /MAAS/. RewriteRule ^/MAAS$ %{REQUEST_URI}/ [R=301,L] maas-2.4.2-7034-g2f5deb8b8.orig/contrib/maas-rsyslog.conf000066400000000000000000000003031333555657500224750ustar00rootroot00000000000000# Log MAAS messages to their own file. 
:syslogtag,contains,"maas" /var/log/maas/maas.log # Doing this will stop logging maas generated log messages containing # 'maas' on /var/log/syslog & stop maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/000077500000000000000000000000001333555657500214325ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/commissioning000066400000000000000000000000211333555657500242240ustar00rootroot00000000000000{{preseed_data}} maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/curtin000066400000000000000000000000211333555657500226520ustar00rootroot00000000000000{{preseed_data}} maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/curtin_userdata000066400000000000000000000042001333555657500245450ustar00rootroot00000000000000#cloud-config debconf_selections: maas: | {{for line in str(curtin_preseed).splitlines()}} {{line}} {{endfor}} early_commands: {{if third_party_drivers and driver}} {{py: key_string = ''.join(['\\x%x' % x for x in driver['key_binary']])}} {{if driver['key_binary'] and driver['repository'] and driver['package']}} driver_00_get_key: /bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg driver_01_add_key: ["apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"] {{endif}} {{if driver['repository']}} driver_02_add: ["add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"] {{endif}} {{if driver['package']}} driver_03_update_install: ["sh", "-c", "apt-get update --quiet && apt-get --assume-yes install {{driver['package']}}"] {{endif}} {{if driver['module']}} driver_04_load: ["sh", "-c", "depmod && modprobe {{driver['module']}} || echo 'Warning: Failed to load module: {{driver['module']}}'"] {{endif}} {{else}} driver_00: ["sh", "-c", "echo third party drivers not installed or necessary."] {{endif}} late_commands: maas: [wget, '--no-proxy', {{node_disable_pxe_url|escape.json}}, '--post-data', {{node_disable_pxe_data|escape.json}}, '-O', '/dev/null'] {{if third_party_drivers and 
driver}} {{if driver['key_binary'] and driver['repository'] and driver['package']}} driver_00_key_get: curtin in-target -- sh -c "/bin/echo -en '{{key_string}}' > /tmp/maas-{{driver['package']}}.gpg" driver_02_key_add: ["curtin", "in-target", "--", "apt-key", "add", "/tmp/maas-{{driver['package']}}.gpg"] {{endif}} {{if driver['repository']}} driver_03_add: ["curtin", "in-target", "--", "add-apt-repository", "-y", "deb {{driver['repository']}} {{node.get_distro_series()}} main"] {{endif}} driver_04_update_install: ["curtin", "in-target", "--", "apt-get", "update", "--quiet"] {{if driver['package']}} driver_05_install: ["curtin", "in-target", "--", "apt-get", "-y", "install", "{{driver['package']}}"] {{endif}} driver_06_depmod: ["curtin", "in-target", "--", "depmod"] driver_07_update_initramfs: ["curtin", "in-target", "--", "update-initramfs", "-u"] {{endif}} maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/curtin_userdata_centos000066400000000000000000000003761333555657500261320ustar00rootroot00000000000000#cloud-config debconf_selections: maas: | {{for line in str(curtin_preseed).splitlines()}} {{line}} {{endfor}} late_commands: maas: [wget, '--no-proxy', '{{node_disable_pxe_url}}', '--post-data', '{{node_disable_pxe_data}}', '-O', '/dev/null'] maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/curtin_userdata_custom000066400000000000000000000003761333555657500261510ustar00rootroot00000000000000#cloud-config debconf_selections: maas: | {{for line in str(curtin_preseed).splitlines()}} {{line}} {{endfor}} late_commands: maas: [wget, '--no-proxy', '{{node_disable_pxe_url}}', '--post-data', '{{node_disable_pxe_data}}', '-O', '/dev/null'] maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/curtin_userdata_suse000066400000000000000000000003761333555657500256160ustar00rootroot00000000000000#cloud-config debconf_selections: maas: | {{for line in str(curtin_preseed).splitlines()}} {{line}} {{endfor}} late_commands: maas: [wget, '--no-proxy', '{{node_disable_pxe_url}}', 
'--post-data', '{{node_disable_pxe_data}}', '-O', '/dev/null'] maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/curtin_userdata_windows000066400000000000000000000004611333555657500263240ustar00rootroot00000000000000#cloud-config debconf_selections: maas: | {{for line in str(curtin_preseed).splitlines()}} {{line}} {{endfor}} late_commands: maas: [wget, '--no-proxy', '{{node_disable_pxe_url}}', '--post-data', '{{node_disable_pxe_data}}', '-O', '/dev/null'] license_key: {{node.get_effective_license_key()}} maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/enlist000066400000000000000000000004711333555657500226550ustar00rootroot00000000000000#cloud-config datasource: MAAS: timeout : 50 max_wait : 120 # there are no default values for metadata_url or oauth credentials # If no credentials are present, non-authed attempts will be made. metadata_url: {{metadata_enlist_url}} output: {all: '| tee -a /var/log/cloud-init-output.log'} maas-2.4.2-7034-g2f5deb8b8.orig/contrib/preseeds_v2/enlist_userdata000066400000000000000000000125311333555657500245450ustar00rootroot00000000000000#cloud-config rsyslog: remotes: maas: "{{syslog_host_port}}" power_state: delay: now mode: poweroff timeout: 1800 condition: test ! -e /tmp/block-poweroff misc_bucket: - &maas_enlist | # Bring up all interfaces. ip -o link | cut -d: -f2 | xargs -I{} ip link set dev {} up #### IPMI setup ###### # If IPMI network settings have been configured statically, you can # make them DHCP. If 'true', the IPMI network source will be changed # to DHCP. IPMI_CHANGE_STATIC_TO_DHCP="false" # In certain hardware, the parameters for the ipmi_si kernel module # might need to be specified. If you wish to send parameters, uncomment # the following line. 
    #IPMI_SI_PARAMS="type=kcs ports=0xca2"

    TEMP_D=$(mktemp -d "${TMPDIR:-/tmp}/${0##*/}.XXXXXX")
    IPMI_CONFIG_D="${TEMP_D}/ipmi.d"
    BIN_D="${TEMP_D}/bin"
    OUT_D="${TEMP_D}/out"
    PATH="$BIN_D:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

    mkdir -p "$BIN_D" "$OUT_D" "$IPMI_CONFIG_D"

    load_modules() {
        modprobe ipmi_msghandler
        modprobe ipmi_devintf
        modprobe ipmi_si ${IPMI_SI_PARAMS}
        modprobe ipmi_ssif
        udevadm settle
    }

    add_bin() {
        cat > "${BIN_D}/$1"
        chmod "${2:-755}" "${BIN_D}/$1"
    }
    add_ipmi_config() {
        cat > "${IPMI_CONFIG_D}/$1"
        chmod "${2:-644}" "${IPMI_CONFIG_D}/$1"
    }

    add_bin "maas-ipmi-autodetect-tool" <<"END_MAAS_IPMI_AUTODETECT_TOOL"
    {{for line in maas_ipmi_autodetect_tool_py.splitlines()}}
    {{line}}
    {{endfor}}
    END_MAAS_IPMI_AUTODETECT_TOOL

    add_bin "maas-ipmi-autodetect" <<"END_MAAS_IPMI_AUTODETECT"
    {{for line in maas_ipmi_autodetect_py.splitlines()}}
    {{line}}
    {{endfor}}
    END_MAAS_IPMI_AUTODETECT

    add_bin "maas-moonshot-autodetect" <<"END_MAAS_MOONSHOT_AUTODETECT"
    {{for line in maas_moonshot_autodetect_py.splitlines()}}
    {{line}}
    {{endfor}}
    END_MAAS_MOONSHOT_AUTODETECT

    add_bin "maas-wedge-autodetect" <<"END_MAAS_WEDGE_AUTODETECT"
    {{for line in maas_wedge_autodetect_sh.splitlines()}}
    {{line}}
    {{endfor}}
    END_MAAS_WEDGE_AUTODETECT

    add_bin "maas-enlist" <<"END_MAAS_ENLIST"
    {{for line in maas_enlist_sh.splitlines()}}
    {{line}}
    {{endfor}}
    END_MAAS_ENLIST

    # we could obtain the interface that booted from the kernel cmdline
    # thanks to 'IPAPPEND' (http://www.syslinux.org/wiki/index.php/SYSLINUX)
    url="{{server_url}}"

    # Early check to see if this machine already exists in MAAS. Already
    # existing machines just stop running and power off. We do not want to
    # update the power parameters of an existing machine.
    maas-enlist --serverurl "$url" --exists
    if [ $? -eq 1 ]; then
        msg="already registered on '$url'; skipping enlistment"
        echo
        echo "== $(date -R): $msg"
        maas-enlist --serverurl "$url" --in-action
        if [ $? -eq 1 ]; then
            msg="rebooting the machine to resume action"
            echo
            echo "=== $(date -R): $msg"
            reboot
        fi
        sleep 10
        exit 0
    fi

    # load ipmi modules
    load_modules
    pargs=""
    if $IPMI_CHANGE_STATIC_TO_DHCP; then
        pargs="--dhcp-if-static"
    fi
    set -x
    power_type=$(maas-ipmi-autodetect-tool)
    if [ -z $power_type ]; then
        power_type=$(maas-wedge-autodetect --check) || power_type=""
    fi
    case "$power_type" in
        ipmi)
            power_params=$(maas-ipmi-autodetect \
                --configdir "$IPMI_CONFIG_D" ${pargs} --commission-creds) &&
                [ -n "${power_params}" ] && power_params=${power_params%.}
            ;;
        moonshot)
            power_params=$(maas-moonshot-autodetect --commission-creds) &&
                [ -n "${power_params}" ] && power_params=${power_params%.}
            ;;
        wedge)
            power_params=$(maas-wedge-autodetect --get-enlist-creds) || power_params=""
            ;;
    esac

    # Try maas-enlist without power parameters on failure for older versions of
    # maas-enlist without power parameter support
    maas-enlist --serverurl "$url" \
        ${power_params:+--power-params "${power_params}" --power-type "${power_type}"} \
        >/tmp/enlist.out ||\
    maas-enlist --serverurl "$url" >/tmp/enlist.out
    if [ $? -eq 0 ]; then
        msg="successfully enlisted to '$url'"
        echo
        echo "=== $(date -R): $msg"
        cat /tmp/enlist.out
        echo =============================================
        sleep 10

        # Uncomment the following to allow troubleshooting for an hour.
# echo "ubuntu:ubuntu" | chpasswd # bfile="/tmp/block-poweroff" # { echo "#!/bin/sh"; echo "touch $bfile"; } > /etc/profile.d/A01-block.sh # sleep 3600 # [ -e $bfile ] && exit 0 else user="ubuntu" pass="ubuntu" echo "$user:$pass" | chpasswd bfile="/tmp/block-poweroff" { echo "#!/bin/sh"; echo "touch $bfile"; } > /etc/profile.d/A01-block.sh chmod 755 /etc/profile.d/A01-block.sh echo echo ============================================= echo "failed to enlist system maas server" echo "sleeping 60 seconds then poweroff" echo echo "login with '$user:$pass' to debug and disable poweroff" echo cat /tmp/enlist.out echo ============================================= sleep 60 [ -e $bfile ] && exit 0 fi packages: [ freeipmi-tools, openipmi, ipmitool, archdetect-deb, sshpass ] output: {all: '| tee -a /var/log/cloud-init-output.log'} runcmd: - [ sh, -c, *maas_enlist ] maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/000077500000000000000000000000001333555657500206545ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/codeStyleSettings.xml000066400000000000000000000006111333555657500250500ustar00rootroot00000000000000 maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/encodings.xml000066400000000000000000000003341333555657500233470ustar00rootroot00000000000000 maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/inspectionProfiles/000077500000000000000000000000001333555657500245335ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/inspectionProfiles/Project_Default.xml000066400000000000000000000017051333555657500303320ustar00rootroot00000000000000 maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/inspectionProfiles/profiles_settings.xml000066400000000000000000000003531333555657500310210ustar00rootroot00000000000000 maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/maas.iml000066400000000000000000000030051333555657500222760ustar00rootroot00000000000000 
maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/misc.xml000066400000000000000000000012521333555657500223310ustar00rootroot00000000000000 maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/modules.xml000066400000000000000000000004041333555657500230440ustar00rootroot00000000000000 maas-2.4.2-7034-g2f5deb8b8.orig/contrib/pycharm/sqldialects.xml000066400000000000000000000002731333555657500237100ustar00rootroot00000000000000 maas-2.4.2-7034-g2f5deb8b8.orig/contrib/tgt.conf000066400000000000000000000000621333555657500206540ustar00rootroot00000000000000include /var/lib/maas/ephemeral/tgt.conf.d/*.conf maas-2.4.2-7034-g2f5deb8b8.orig/docs/000077500000000000000000000000001333555657500165015ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/docs/_static/000077500000000000000000000000001333555657500201275ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/docs/_static/versions.js000066400000000000000000000063021333555657500223360ustar00rootroot00000000000000/* Javascript utilities to create a version switcher widget. This is mostly done, but not limited to support creating links between the different versions of the MAAS documentation on maas.io. */ function page_exists(url) { // Returns wether a page at the give URL exists or not. var result = false; $.ajax({ type: 'HEAD', url: url, async: false, success: function () { result = true; } }); return result; }; function doc_page(version, doc_prefix) { // Returns the URL of the page equivalent to the current page but from // the given version of the documentation. // e.g. if the current page is 'http://host/1.6/somepage.html', calling // doc_page('1.7') will return 'http://host/1.7/somepage.html'. 
    var pattern = new RegExp('\/' + doc_prefix + '([\\d\\.]*)\/');
    var newpathname = window.location.pathname.replace(
        pattern, '/' + doc_prefix + version + '/');
    return window.location.origin + newpathname + window.location.hash;
};

function doc_homepage(version, doc_prefix) {
    // Returns the URL of the homepage for the documentation of the given
    // version.
    return window.location.origin + '/' + doc_prefix + version + '/';
};

function set_up_version_switcher(selector, doc_prefix) {
    // Create version switcher widget.
    $(selector).replaceWith($('\

Version

\ ')); release_select = $("#id_sidebar_release"); // Request version list and populate version switcher widget with it. var json_url = "/" + doc_prefix + "/_static/versions.json"; var jqxhr = $.getJSON(json_url, function(data) { var first = true; $.each(data, function (value, text) { var option_value = value; if (first) { // The first element corresponds to the documentation for trunk. option_value = ''; first = false; } var option = $("").attr("value", option_value).text(text); if (value == DOCUMENTATION_OPTIONS.VERSION) { option.attr('selected', 'selected'); } release_select.append(option); }); }); // jqxhr.fail only exists in recent versions of jQuery; // it's not there with the version shipped with Sphinx // on Precise. if ($.isFunction(jqxhr.fail)) { jqxhr.fail(function(jqXHR) { console.log("error requesting versions file"); console.log(jqXHR); }); } // Handle version switcher change: redirect to the equivalent page in the // selected version of the documentation if that page exists, redirects to the // homepage of the selected version of the documentation otherwise. 
$("#id_sidebar_release").change(function () { var version = $(this).find('option:selected').val(); var same_page_in_other_version = doc_page(version, doc_prefix); if (page_exists(same_page_in_other_version)) { window.location = same_page_in_other_version; } else { window.location = doc_homepage(version, doc_prefix); } }); }; maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/000077500000000000000000000000001333555657500206365ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/maas/000077500000000000000000000000001333555657500215575ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/maas/layout.html000077500000000000000000000026441333555657500237730ustar00rootroot00000000000000{%- extends "basic/layout.html" %} {% set css_files = ['https://assets.ubuntu.com/sites/guidelines/css/latest/ubuntu-styles.css', 'https://assets.ubuntu.com/sites/ubuntu/latest/u/css/global.css', '_static/css/main.css'] %} {% block rootrellink %} {% endblock %} {% block sidebarlogo %} MAAS logo

MAAS

Metal As A Service.



{% endblock %} {# Remove 'modules' and 'index' from rellinks: they point to autogenerated code documentation pages that we don't want to advertise too much. #} {%- set rellinks = rellinks[2:] %} {%- block footer %}
{%- endblock %} maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/maas/localtoc.html000066400000000000000000000000571333555657500242470ustar00rootroot00000000000000{%- if display_toc %} {{ toc }} {%- endif %} maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/maas/relations.html000077500000000000000000000016341333555657500244540ustar00rootroot00000000000000

Related Topics

maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/maas/static/000077500000000000000000000000001333555657500230465ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/maas/static/css/000077500000000000000000000000001333555657500236365ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/maas/static/css/main.css000066400000000000000000000030661333555657500253010ustar00rootroot00000000000000pre { background-color:#EEEEEE; background-position:initial initial; background-repeat:initial initial; line-height:1.3em; margin:15px 0px; padding:7px 30px; } div.document { width: 984px; margin: 10px auto 0 auto; } div.body h1 { margin-top: 20px; padding-top: 0; font-size: 250%; } .bodywrapper ul ul { margin-top: .4em; margin-bottom: .4em; } div.sphinxsidebar ul ul { margin-top: .4em; margin-bottom: .4em; } body { padding-top: 0px; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6, div.admonition p.admonition-title { font-family: Ubuntu,Arial,"libra sans",sans-serif; } div.admonition { margin: 20px 0px; color: #3E4349; } div.admonition p.admonition-title { font-size: 20px; } .document { -moz-box-sizing: border-box; background: none repeat scroll 0 0 #FFFFFF; border-radius: 4px; box-shadow: 0 0 3px #C9C9C9; } div.sphinxsidebarwrapper { padding: 18px 10px 18px 20px; } form input[type="text"] { display: inline; } div.sphinxsidebar #searchbox input[type="text"] { width: 110px; padding: 4px 3px; } div.sphinxsidebar #searchbox input[type="submit"] { width: 40px; } a:active, a:focus, a:hover { text-decoration: none; border-bottom: 1px solid #6D4100; } /* * Custom CSS selectors for the API documentation page. * * Make subtitles for each API endpoint smaller, so they don't overwhelm * the remainder of the documentation. 
*/ div#maas-api div#operations h4 code.docutils { font-size: 75%; } div#maas-api div#operations div.section h5 { font-size: 90%; } maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/maas/static/flasky.css_t000077500000000000000000000210271333555657500254010ustar00rootroot00000000000000/* * flasky.css_t * ~~~~~~~~~~~~ * * :copyright: Copyright 2013-2015 by Armin Ronacher. Modifications by Kenneth Reitz. * :license: Flask Design License, see LICENSE for details. */ {% set page_width = '940px' %} {% set sidebar_width = '220px' %} @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: "Georgia", "Open Sans", OpenSansRegular, sans-serif; font-size: 16px; background: #fff; font-weight: 400; color: #000; margin: 0; padding: 0; } div.document { width: {{ page_width }}; margin: 10px auto 0 auto; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 {{ sidebar_width }}; } div.sphinxsidebar { width: {{ sidebar_width }}; } hr { border: 1px solid #B1B4B6; } div.body { background-color: white; color: #3E4349; padding: 0 30px 0 30px; } img.floatingflask { padding: 0 0 10px 10px; float: right; } div.footer { width: {{ page_width }}; margin: 20px auto 30px auto; font-size: 14px; color: #888; text-align: right; } div.footer a { color: #888; } div.related { width: {{ page_width }}; margin: 10px auto 0 auto; } div.related h3 { display: none; } div.sphinxsidebar a { color: #444; text-decoration: none; border-bottom: 1px dotted #999; } div.sphinxsidebar a:hover { border-bottom: 1px solid #999; } div.sphinxsidebar { font-size: 14px; line-height: 1.5; } div.sphinxsidebarwrapper { padding: 18px 10px; } div.sphinxsidebarwrapper p.logo { padding: 0; margin: -10px 0 0 -20px; text-align: center; } div.sphinxsidebar h3, div.sphinxsidebar h4 { font-family: 'Antic Slab' ,'Garamond', 'Georgia', serif; color: #000; font-size: 24px; font-weight: normal; margin: 30px 0 5px 0; padding: 0; } 
div.sphinxsidebar h4 { font-size: 20px; } div.sphinxsidebar h3 a { color: #000; } div.sphinxsidebar p.logo a, div.sphinxsidebar h3 a, div.sphinxsidebar p.logo a:hover, div.sphinxsidebar h3 a:hover { border: none; } div.sphinxsidebar p { color: #555; margin: 10px 0; } div.sphinxsidebar ul { margin: 10px 0px; padding: 0; color: #000; } div.sphinxsidebar input { border: 1px solid #ccc; font-family: 'Georgia', serif; font-size: 1em; } /* -- body styles ----------------------------------------------------------- */ a { color: #004B6B; text-decoration: underline; } a:hover { color: #6D4100; text-decoration: underline; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: 'Antic Slab', serif; font-weight: normal; margin: 30px 0px 10px 0px; padding: 0; text-shadow: 1px 1px 3px #ddd; color: #000; } div.body h1 { margin-top: 0; padding-top: 0; font-size: 250%; } div.body h2 { font-size: 190%; } div.body h3 { font-size: 160%; } div.body h4 { font-size: 140%; } div.body h5 { font-size: 110%; } div.body h6 { font-size: 110%; } a.headerlink { color: #ddd; padding: 0 4px; text-decoration: none; } a.headerlink:hover { color: #444; background: #eaeaea; } div.body p, div.body dd, div.body li { line-height: 1.4em; } div.admonition { background: #fafafa; margin: 20px -30px; padding: 10px 30px; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; } div.admonition tt.xref, div.admonition a tt { border-bottom: 1px solid #fafafa; } dd div.admonition { margin-left: -60px; padding-left: 60px; } div.admonition p.admonition-title { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; font-size: 24px; margin: 0 0 10px 0; padding: 0; line-height: 1; } div.admonition p.last { margin-bottom: 0; } div.highlight { background-color: white; } dt:target, .highlight { background: #FAF3E8; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: 
#eee; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre { font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.88em; } tt { font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.95em; } img.screenshot { } tt.descname, tt.descclassname { font-size: 0.95em; } tt.descname { padding-right: 0.08em; } img.screenshot { -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils { border: 1px solid #888; -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils td, table.docutils th { border: 1px solid #888; padding: 0.25em 0.7em; } table.field-list, table.footnote { border: none; -moz-box-shadow: none; -webkit-box-shadow: none; box-shadow: none; } table.footnote { margin: 15px 0; width: 100%; border: 1px solid #eee; background: #fdfdfd; font-size: 0.9em; } table.footnote + table.footnote { margin-top: -15px; border-top: none; } table.field-list th { padding: 0 0.8em 0 0; } table.field-list td { padding: 0; } table.footnote td.label { width: 0px; padding: 0.3em 0 0.3em 0.5em; } table.footnote td { padding: 0.3em 0.5em; } dl { margin: 0; padding: 0; } dl dd { margin-left: 30px; } blockquote { margin: 0 0 0 30px; padding: 0; } ul, ol { margin: 10px 0 10px 30px; padding: 0; } pre { background: #eee; padding: 7px 30px; margin: 15px -30px; line-height: 1.3em; } dl pre, blockquote pre, li pre { margin-left: 0px; padding-left: 15px; } dl dl pre { margin-left: 0px; padding-left: 15px; } tt { background-color: #ecf0f3; color: #222; /* padding: 1px 2px; */ } tt.xref, a tt { background-color: #FBFBFB; color: #2277bb; border-bottom: 1px solid white; } a.reference { text-decoration: none; border-bottom: 1px dotted #004B6B; } a.reference:hover { border-bottom: 1px solid #6D4100; } a.footnote-reference { text-decoration: 
none; font-size: 0.7em; vertical-align: top; border-bottom: 1px dotted #004B6B; } a.footnote-reference:hover { border-bottom: 1px solid #6D4100; } a:hover tt { background: #EEE; } li { margin-bottom: 0.3em; } @media screen and (max-width: 870px) { div.sphinxsidebar { display: none; } div.document { width: 100%; } div.documentwrapper { margin-left: 0; margin-top: 0; margin-right: 0; margin-bottom: 0; } div.bodywrapper { margin-top: 0; margin-right: 0; margin-bottom: 0; margin-left: 0; } ul { margin-left: 0; } .document { width: auto; } .footer { width: auto; } .bodywrapper { margin: 0; } .footer { width: auto; } .github { display: none; } } @media screen and (max-width: 875px) { body { margin: 0; padding: 20px 30px; } div.documentwrapper { float: none; background: white; } div.sphinxsidebar { display: block; float: none; width: 102.5%; margin: 50px -30px -20px -30px; padding: 10px 20px; background: #333; color: white; } div.sphinxsidebar h3, div.sphinxsidebar h4, div.sphinxsidebar p, div.sphinxsidebar h3 a { color: white; } div.sphinxsidebar a { color: #aaa; } div.sphinxsidebar p.logo { display: none; } div.document { width: 100%; margin: 0; } div.related { display: block; margin: 0; padding: 10px 0 20px 0; } div.related ul, div.related ul li { margin: 0; padding: 0; } div.footer { display: none; } div.bodywrapper { margin: 0; } div.body { min-height: 0; padding: 0; } .rtd_doc_footer { display: none; } .document { width: auto; } .footer { width: auto; } .footer { width: auto; } .github { display: none; } } /* misc. 
*/ .revsys-inline { display: none!important; } div.sphinxsidebar #searchbox input[type="text"] { width: 140px; padding: 4px 3px; } .highlight .nv { color: #C65D09!important; }

maas-2.4.2-7034-g2f5deb8b8.orig/docs/_templates/maas/theme.conf

[theme]
inherit = basic
stylesheet = flasky.css
headfont = Ubuntu
bodyfont = free-sans,sans,sans-serif

maas-2.4.2-7034-g2f5deb8b8.orig/docs/api_authentication.rst

.. -*- mode: rst -*-

.. _api_authentication:

API authentication
==================

MAAS's API uses OAuth_ as its authentication mechanism. There isn't a third
party involved (as in 3-legged OAuth), so the process used is what's commonly
referred to as 0-legged OAuth: the consumer accesses protected resources by
submitting OAuth-signed requests.

.. _OAuth: http://en.wikipedia.org/wiki/OAuth

Note that some API endpoints support unauthenticated requests (i.e. anonymous
access). See the :doc:`API documentation <api>` for details.

Examples
========

Here are two examples showing how to perform an authenticated GET request to
retrieve the list of nodes. The ``<consumer_key>``, ``<key>`` and ``<secret>``
tokens are the three elements that compose the API key
(API key = '<consumer_key>:<key>:<secret>').

Python
------

.. code:: python

    import oauth.oauth as oauth
    import httplib2
    import uuid


    def perform_API_request(site, uri, method, key, secret, consumer_key):
        resource_tok_string = "oauth_token_secret=%s&oauth_token=%s" % (
            secret, key)
        resource_token = oauth.OAuthToken.from_string(resource_tok_string)
        consumer_token = oauth.OAuthConsumer(consumer_key, "")
        oauth_request = oauth.OAuthRequest.from_consumer_and_token(
            consumer_token, token=resource_token, http_url=site,
            parameters={'oauth_nonce': uuid.uuid4().hex})
        oauth_request.sign_request(
            oauth.OAuthSignatureMethod_PLAINTEXT(), consumer_token,
            resource_token)
        headers = oauth_request.to_header()
        url = "%s%s" % (site, uri)
        http = httplib2.Http()
        return http.request(url, method, body=None, headers=headers)

    # API key = '<consumer_key>:<key>:<secret>'
    response = perform_API_request(
        'http://server/MAAS/api/2.0', '/nodes/?op=list', 'GET', '<key>',
        '<secret>', '<consumer_key>')

Ruby
----

.. code:: ruby

    require 'oauth'
    require 'oauth/signature/plaintext'

    def perform_API_request(site, uri, key, secret, consumer_key)
        consumer = OAuth::Consumer.new(
            consumer_key, "",
            { :site => site, :scheme => :header,
              :signature_method => "PLAINTEXT" })
        access_token = OAuth::AccessToken.new(consumer, key, secret)
        return access_token.request(:get, uri)
    end

    # API key = "<consumer_key>:<key>:<secret>"
    response = perform_API_request(
        "http://server/MAAS/api/2.0", "/nodes/?op=list", "<key>", "<secret>",
        "<consumer_key>")

maas-2.4.2-7034-g2f5deb8b8.orig/docs/changelog.rst

=========
Changelog
=========

MAAS 2.3.0
==========

Important announcements
-----------------------

**Machine network configuration now deferred to cloud-init.**

Starting from MAAS 2.3, machine network configuration is now handled by
cloud-init. In previous MAAS (and curtin) releases, the network configuration
was performed by curtin during the installation process. In an effort to
improve robustness, network configuration has now been consolidated in
cloud-init.
MAAS will continue to pass network configuration to curtin, which in turn
will delegate the configuration to cloud-init.

**Ephemeral images over HTTP**

As part of the effort to reduce dependencies and improve reliability, MAAS
ephemeral (network boot) images are no longer loaded using iSCSI (tgt). By
default, the ephemeral images are now obtained using HTTP requests to the
rack controller.

After upgrading to MAAS 2.3, please ensure you have the latest available
images. For more information please refer to the section below (New features
& improvements).

**Advanced network configuration for CentOS & Windows**

MAAS 2.3 now supports the ability to perform network configuration for CentOS
and Windows. The network configuration is performed via cloud-init. MAAS
CentOS images now use the latest available version of cloud-init that
includes these features.

New features & improvements
---------------------------

**CentOS network configuration**

MAAS can now perform machine network configuration for CentOS 6 and 7,
providing networking feature parity with Ubuntu for those operating systems.
The following can now be configured for MAAS deployed CentOS images:

* Bonds, VLAN and bridge interfaces.
* Static network configuration.

Our thanks to the cloud-init team for improving the network configuration
support for CentOS.

**Windows network configuration**

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows
deployments. This uses the native NetLBFO in Windows 2008+. Contact us for
more information (https://maas.io/contact-us).

**Improved Hardware Testing**

MAAS 2.3 introduces a new and improved hardware testing framework that
significantly improves the granularity and provision of hardware testing
feedback. These improvements include:

* An improved testing framework that allows MAAS to run each component
  individually. This allows MAAS to run tests against storage devices for
  example, and capture results individually.
* The ability to describe custom hardware tests with a YAML definition:

  * This provides MAAS with information about the tests themselves, such as
    script name, description, required packages, and other metadata about
    what information the script will gather. All of which will be used by
    MAAS to render in the UI.
  * Determines whether the test supports a parameter, such as storage,
    allowing the test to be run against individual storage devices.
  * Provides the ability to run tests in parallel by setting this in the
    YAML definition.

* Capture performance metrics for tests that can provide it.

  * CPU performance tests now offer a new ‘7z’ test, providing metrics.
  * Storage performance tests now include a new ‘fio’ test providing metrics.
  * Storage test ‘badblocks’ has been improved to provide the number of
    badblocks found as a metric.

* The ability to override a machine that has been marked ‘Failed testing’.
  This allows administrators to acknowledge that a machine is usable despite
  it having failed testing.

Hardware testing improvements include the following UI changes:

* Machine Listing page

  * Displays whether a test is pending, running or failed for the machine
    components (CPU, Memory or Storage).
  * Displays whether a test not related to CPU, Memory or Storage has failed.
  * Displays a warning when the machine has been overridden and has failed
    tests, but is in a ‘Ready’ or ‘Deployed’ state.

* Machine Details page

  * Summary tab - Provides hardware testing information about the different
    components (CPU, Memory, Storage).
  * Hardware Tests/Commission tab - Provides an improved view of the latest
    test run, its runtime as well as an improved view of previous results.
    It also adds more detailed information about specific tests, such as
    status, exit code, tags, runtime and logs/output (such as stdout and
    stderr).
  * Storage tab - Displays the status of specific disks, including whether a
    test is OK or failed after running hardware tests.
For more information please refer to
https://docs.ubuntu.com/maas/2.3/en/nodes-hw-testing.

**Network discovery & beaconing**

In order to confirm network connectivity and aid in the discovery of VLANs,
fabrics and subnets, MAAS 2.3 introduces network beaconing.

MAAS now sends out encrypted beacons, facilitating network discovery and
monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to
UDP port 5240. When registering a new controller, MAAS uses the information
gathered from the beaconing protocol to ensure that newly registered
interfaces on each controller are associated with existing known networks in
MAAS. This aids MAAS by providing better information for determining the
network topology.

Using network beaconing, MAAS can better correlate which networks are
connected to its controllers, even if interfaces on those controllers are not
configured with IP addresses. Future uses for beaconing could include
validation of networks from commissioning nodes, MTU verification, and a
better user experience for registering new controllers.

**Ephemeral Images over HTTP**

Historically, MAAS has used ‘tgt’ to provide images over iSCSI for the
ephemeral environments (e.g. commissioning, deployment environment, rescue
mode, etc). MAAS 2.3 changes the default behaviour by now providing images
over HTTP. These images are now downloaded directly by the initrd. The change
means that the initrd loaded on PXE will contact the rack controller to
download the image to load in the ephemeral environment.

Support for using 'tgt' is being phased out in MAAS 2.3, and will no longer
be supported from MAAS 2.4 onwards. Users who would like to continue to use
and load their ephemeral images via ‘tgt’ can disable HTTP boot with the
following command::

    maas maas set-config name=http_boot value=False

**Upstream Proxy**

MAAS 2.3 now enables an upstream HTTP proxy to be used while allowing MAAS
deployed machines to continue to use the caching proxy for the repositories.
Doing so provides greater flexibility for closed environments, including:

* Enabling MAAS itself to use a corporate proxy while allowing machines to
  continue to use the MAAS proxy.
* Allowing machines that don’t have access to a corporate proxy to gain
  network access using the MAAS proxy.

Adding upstream proxy support also includes an improved configuration on the
settings page. Please refer to Settings > Proxy for more details.

**UI Improvements**

* Machines, Devices, Controllers

  MAAS 2.3 introduces an improved design for the machines, devices and
  controllers detail pages that includes the following changes:

  * "Summary" tab now only provides information about the specific node
    (machine, device or controller), organised across cards.
  * "Configuration" has been introduced, which includes all editable
    settings for the specific node (machine, device or controller).
  * "Logs" consolidates the commissioning output and the installation log
    output.

* Other UI improvements

  Other UI improvements that have been made for MAAS 2.3 include:

  * Added DHCP status column on the ‘Subnets’ tab.
  * Added architecture filters.
  * Updated VLAN and Space details page to no longer allow inline editing.
  * Updated VLAN page to include the IP ranges tables.
  * Zones page converted to AngularJS (away from YUI).
  * Added warnings when changing a Subnet’s mode (Unmanaged or Managed).
  * Renamed “Device Discovery” to “Network Discovery”.
  * Discovered devices where MAAS cannot determine the hostname now show the
    hostname as “unknown” and greyed out instead of using the MAC address
    manufacturer as the hostname.

**Rack Controller Deployment**

MAAS 2.3 can now automatically deploy rack controllers when deploying a
machine. This is done by providing cloud-init user data, and once a machine
is deployed, cloud-init will install and configure the rack controller. Upon
rack controller registration, MAAS will automatically detect the machine is
now a rack controller and it will be transitioned automatically.
To deploy a rack controller, users can do so via the API (or CLI), e.g.::

    maas machine deploy install_rackd=True

Please note that this feature makes use of the MAAS snap to configure the
rack controller on the deployed machine. Since snap store mirrors are not yet
available, this will require the machine to have access to the internet to be
able to install the MAAS snap.

**Controller Versions & Notifications**

MAAS now surfaces the version of each running controller and notifies the
users of any version mismatch between the region and rack controllers. This
helps administrators identify mismatches when upgrading their MAAS on a
multi-node MAAS cluster, such as within an HA setup.

**Improved DNS Reloading**

This new release introduces various improvements to the DNS reload mechanism.
This allows MAAS to be smarter about when to reload DNS after changes have
been automatically detected or made.

**API Improvements**

The machines API endpoint now provides more information on the configured
storage and provides additional output that includes the volume_groups,
raids, cache_sets, and bcaches fields.

**Django 1.11 support**

MAAS 2.3 now supports the latest Django LTS version, Django 1.11. This allows
MAAS to work with the newer Django version in Ubuntu Artful, which serves as
a preparation for the next Ubuntu LTS release.

* Users running MAAS in Ubuntu Artful will use Django 1.11.
* Users running MAAS in Ubuntu Xenial will continue to use Django 1.9.
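As a sketch of the extended ``machines`` endpoint output described above, the new storage-related fields can be read straight from a machine's JSON representation. The response excerpt below is hypothetical and heavily trimmed; real responses contain many more fields:

```python
import json

# Hypothetical, heavily trimmed excerpt of one entry from a
# `GET /MAAS/api/2.0/machines/` response; the storage field names follow
# the additions described above.
machine_json = """
{
  "hostname": "node01",
  "volume_groups": [{"name": "vg0", "size": 536870912000}],
  "raids": [],
  "cache_sets": [],
  "bcaches": []
}
"""

machine = json.loads(machine_json)
# The four storage fields added to the machines endpoint in MAAS 2.3:
for field in ("volume_groups", "raids", "cache_sets", "bcaches"):
    print(field, machine[field])
```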
Issues fixed in this release
----------------------------

For issues fixed in MAAS 2.3, please refer to the following milestones:

https://launchpad.net/maas/+milestone/2.3.0
https://launchpad.net/maas/+milestone/2.3.0rc2
https://launchpad.net/maas/+milestone/2.3.0rc1
https://launchpad.net/maas/+milestone/2.3.0beta3
https://launchpad.net/maas/+milestone/2.3.0beta2
https://launchpad.net/maas/+milestone/2.3.0beta1
https://launchpad.net/maas/+milestone/2.3.0alpha3
https://launchpad.net/maas/+milestone/2.3.0alpha2
https://launchpad.net/maas/+milestone/2.3.0alpha1

MAAS 2.3.0 (rc2)
================

For more information, visit:
https://launchpad.net/maas/+milestone/2.3.0rc2

Issues fixed in this release
----------------------------

LP: #1730481 [2.3, HWTv2] When 'other' test fails, node listing incorrectly shows two icons
LP: #1723425 [2.3, HWTv2] Hardware tests do not provide start, current running or estimated run time
LP: #1728304 [2.3, HWTv2] Tests fail but transition to ready with "Unable to map parameters" when disks are missing
LP: #1731075 [2.3, HWTv2] Rogue test results when machine fails to commission for the first time
LP: #1721825 [2.3, HWTv2] Tests are not run in meaningful order
LP: #1731350 [2.3, HWTv2, UI] Aborting commissioning (+ testing) of a machine never commissioned before, leaves 'pending' icons in the UI
LP: #1721743 [2.3b2] Rack and region controller versions still not updated
LP: #1722646 [2.x] 2 out of 3 rack controller interfaces are missing links
LP: #1730474 [2.x] MAAS region startup sequence leads to race conditions
LP: #1662343 [2.1.3] Commissioning doesn't pick up new storage devices
LP: #1730485 [2.2+, HWT] badblocks fails with LVM
LP: #1730799 [2.3] Traceback when viewing controller commissioning scripts
LP: #1731292 [2.3, UI, regression] Hardware testing / commissioning table doesn't fit in a small screen but there's a lot of whitespace
LP: #1730703 [2.3, UI] Rename the section "Settings" of machine details to Configuration

MAAS 2.3.0 (rc1)
================

Issues fixed in this release
----------------------------

For more information, visit:
https://launchpad.net/maas/+milestone/2.3.0rc1

LP: #1727576 [2.3, HWTv2] When specific tests timesout there's no log/output
LP: #1728300 [2.3, HWTv2] smartctl interval time checking is too short
LP: #1721887 [2.3, HWTv2] No way to override a machine that Failed Testing
LP: #1728302 [2.3, HWTv2, UI] Overall health status is redundant
LP: #1721827 [2.3, HWTv2] Logging when and why a machine failed testing (due to missing heartbeats/locked/hanged) not available in maas.log
LP: #1722665 [2.3, HWTv2] MAAS stores a limited amount of test results
LP: #1718779 [2.3] 00-maas-06-get-fruid-api-data fails to run on controller
LP: #1729857 [2.3, UI] Whitespace after checkbox on node listing page
LP: #1696122 [2.2] Failed to get virsh pod storage: cryptic message if no pools are defined
LP: #1716328 [2.2] VM creation with pod accepts the same hostname and push out the original VM
LP: #1718044 [2.2] Failed to process node status messages - twisted.internet.defer.QueueOverflow
LP: #1723944 [2.x, UI] Node auto-assigned address is not always shown while in rescue mode
LP: #1718776 [UI] Tooltips missing from the machines listing page
LP: #1724402 no output for failing test
LP: #1724627 00-maas-06-get-fruid-api-data fails relentlessly, causes commissioning to fail
LP: #1727962 Intermittent failure: TestDeviceHandler.test_list_num_queries_is_the_expected_number
LP: #1727360 Make partition size field optional in the API (CLI)
LP: #1418044 Avoid picking the wrong IP for MAAS_URL and DEFAULT_MAAS_URL
LP: #1729902 When commissioning don't show message that user has overridden testing

MAAS 2.3.0 (beta3)
==================

Issues fixed in this release
----------------------------

For more information, visit:
https://launchpad.net/maas/+milestone/2.3.0beta3

LP: #1727551 [2.3] Commissioning shows results from script that no longer exists
LP: #1696485 [2.2, HA] MAAS dhcp does not offer up multiple domains to search
LP: #1696661 [2.2, HA] MAAS should offer multiple DNS servers in HA case
LP: #1724235 [2.3, HWTv2] Aborted test should not show as failure
LP: #1721824 [2.3, HWTv2] Overall health status is missing
LP: #1727547 [2.3, HWTv2] Aborting testing goes back into the incorrect state
LP: #1722848 [2.3, HWTv2] Memtester test is not robust
LP: #1727568 [2.3, HWTv2, regression] Hardware Tests tab does not show what tests are running
LP: #1721268 [2.3, UI, HWTv2] Metrics table (e.g. from fio test) is not padded to MAAS' standard
LP: #1721823 [2.3, UI, HWTv2] No way to surface a failed test that's non CPU, Mem, Storage in machine listing page
LP: #1721886 [2.3, UI, HWTv2] Hardware Test tab doesn't auto-update
LP: #1559353 [2.0a3] "Add Hardware > Chassis" cannot find off-subnet chassis BMCs
LP: #1705594 [2.2] rackd errors after fresh install
LP: #1718517 [2.3] Exceptions while processing commissioning output cause timeouts rather than being appropriately surfaced
LP: #1722406 [2.3] API allows "deploying" a machine that's already deployed
LP: #1724677 [2.x] [critical] TFTP back-end failed right after node repeatedly requests same file via tftp
LP: #1726474 [2.x] psycopg2.IntegrityError: update or delete on table "maasserver_node" violates foreign key constraint
LP: #1727073 [2.3] rackd — 12% connected to region controllers.
LP: #1722671 [2.3, pod] Unable to delete a machine or a pod if the pod no longer exists
LP: #1680819 [2.x, UI] Tooltips go off screen
LP: #1725908 [2.x] deleting user with static ip mappings throws 500
LP: #1726865 [snap,2.3beta3] maas init uses the default gateway in the default region URL
LP: #1724181 maas-cli missing dependencies: netifaces, tempita
LP: #1724904 Changing PXE lease in DHCP snippets global sections does not work

MAAS 2.3.0 (beta2)
==================

Issues fixed in this release
----------------------------

For more information, visit:
https://launchpad.net/maas/+milestone/2.3.0beta2

LP: #1719015 $TTL in zone definition is not updated
LP: #1711760 [2.3] Workaround issue in 'resolvconf', where resolv.conf is not set in ephemeral envs (commissioning, testing, etc)
LP: #1721548 [2.3] Failure on controller refresh seem to be causing version to not get updated
LP: #1721108 [2.3, UI, HWTv2] Machine details cards - Don't show "see results" when no tests have been run on a machine
LP: #1721111 [2.3, UI, HWTv2] Machine details cards - Storage card doesn't match CPU/Memory one
LP: #1721524 [2.3, UI, HWTv2] When upgrading from older MAAS, Storage HW tests are not mapped to the disks
LP: #1721276 [2.3, UI, HWTv2] Hardware Test tab - Table alignment for the results doesn't align with titles
LP: #1721525 [2.3, UI, HWTv2] Storage card on machine details page missing red bar on top if there are failed tests
LP: #1719361 [2.3, UI, HWTv2] On machine listing page, remove success icons for components that passed the tests
LP: #1721105 [2.3, UI, HWTv2] Remove green success icon from Machine listing page
LP: #1721273 [2.3, UI, HWTv2] Storage section on Hardware Test tab does not describe each disk to match the design
LP: #1719353 [2.3a3, Machine listing] Improve the information presentation of the exact tasks MAAS is running when running hardware testing
LP: #1721113 [2.3, UI] Group physical block devices in the storage card off of their size and type

MAAS 2.3.0 (beta1)
==================

New Features & Improvements
---------------------------

**Hardware Testing**

MAAS 2.3 beta 1 overhauls and improves the visibility of hardware test
results and information. This includes various changes across MAAS:

* Machine Listing page

  * Surface progress and failures of hardware tests, actively showing when a
    test is pending, running, successful or failed.

* Machine Details page

  * Summary tab - Provides hardware testing information about the different
    components (CPU, Memory, Storage).
  * Hardware Tests tab - Complete re-design of the Hardware Tests tab. It
    now shows a list of test results per component. Adds the ability to view
    more details about the test itself.

**UI Improvements**

MAAS 2.3 beta 1 introduces a new design for the node summary pages:

* "Summary tab" now only shows information about the machine, in a
  completely new design.
* "Settings tab" has been introduced. It now includes the ability to edit
  the node.
* "Logs tab" now consolidates the commissioning output and the installation
  log output.
* Add DHCP status column on the ‘Subnets’ tab.
* Add architecture filters.
* Update VLAN and Space details page to no longer allow inline editing.
* Update VLAN page to include the IP ranges tables.
* Convert the Zones page into AngularJS (away from YUI).
* Add warnings when changing a Subnet’s mode (Unmanaged or Managed).

**Rack Controller Deployment**

MAAS beta 1 now adds the ability to deploy any machine with the rack
controller, which is only available via the API.

**API Improvements**

MAAS 2.3 beta 1 introduces API output for the volume_groups, raids,
cache_sets, and bcaches fields on the machines endpoint.
Issues fixed in this release
----------------------------

For more information, visit:
https://launchpad.net/maas/+milestone/2.3.0beta1

LP: #1711320 [2.3, UI] Can't 'Save changes' and 'Cancel' on machine/device details page
LP: #1696270 [2.3] Toggling Subnet from Managed to Unmanaged doesn't warn the user that behavior changes
LP: #1717287 maas-enlist doesn't work when provided with serverurl with IPv6 address
LP: #1718209 PXE configuration for dhcpv6 is wrong
LP: #1718270 [2.3] MAAS improperly determines the version of some installs
LP: #1718686 [2.3, master] Machine lists shows green checks on components even when no tests have been run
LP: #1507712 cli: maas logout causes KeyError for other profiles
LP: #1684085 [2.x, Accessibility] Inconsistent save states for fabric/subnet/vlan/space editing
LP: #1718294 [packaging] dpkg-reconfigure for region controller refers to an incorrect network topology assumption

MAAS 2.3.0 (alpha3)
===================

New Features & Improvements
---------------------------

**Hardware Testing (backend only)**

MAAS has now introduced an improved hardware testing framework. This new
framework allows MAAS to test individual components of a single machine, as
well as providing better feedback to the user for each of those tests. This
feature has introduced:

* Ability to define a custom testing script with a YAML definition - Each
  custom test can be defined with a YAML that will provide information about
  the test. This information includes the script name, description, required
  packages, and other metadata about what information the script will
  gather. This information can then be displayed in the UI.
* Ability to pass parameters - Adds the ability to pass specific parameters
  to the scripts. For example, in upcoming beta releases, users would be
  able to select which disks they want to test if they don't want to test
  all disks.
* Running tests individually - Improves the way hardware tests are run per
  component.
This allows MAAS to run tests against any individual component (such as a
single disk).

* Added additional performance tests:

  * Added a CPU performance test with 7z.
  * Added a storage performance test with fio.

Please note that individual results for each of the components are currently
only available over the API. Upcoming beta releases will include various UI
improvements that allow the user to better surface and interact with these
new features.

**Rack Controller Deployment in Whitebox Switches (with the MAAS snap)**

MAAS now has the ability to install and configure a MAAS rack controller
once a machine has been deployed. As of today, this feature is only
available when MAAS detects the machine is a whitebox switch. As such, all
MAAS certified whitebox switches will be deployed with a MAAS rack
controller. Currently certified switches include the Wedge 100 and the
Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the
rack controller on the deployed machine. Since snap store mirrors are not
yet available, this will require the machine to have access to the internet
to be able to install the MAAS snap.

**Improved DNS Reloading**

This new release introduces various improvements to the DNS reload
mechanism. This allows MAAS to be smarter about when to reload DNS after
changes have been automatically detected or made.

**UI - Controller Versions & Notifications**

MAAS now surfaces the version of each running controller, and notifies the
users of any version mismatch between the region and rack controllers. This
helps administrators identify mismatches when upgrading their MAAS on a
multi-node MAAS cluster, such as an HA setup.

**UI - Zones tab has been migrated to AngularJS**

The Zones tab and related pages have now been transferred to AngularJS,
moving away from using YUI. As of today, the only remaining sections still
requiring YUI are some sections inside the settings page. Thanks to the
Ubuntu Web Team for their contribution!
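As an illustration of the YAML-based custom test definitions described above, a test's metadata might look roughly like the following. This is a hedged sketch: the field names shown (name, description, packages, parallel, parameters) are taken from the feature description above, not from the exact MAAS 2.3 metadata schema, so consult the MAAS documentation for the authoritative format.

```yaml
# Illustrative only -- field names follow the feature description above,
# not necessarily the exact MAAS 2.3 metadata schema.
name: custom_disk_benchmark
description: Measure sequential read throughput on a single disk.
packages:
  apt:
    - fio
parallel: true            # allow MAAS to run test instances in parallel
parameters:
  storage:
    type: storage         # lets the test run against individual disks
```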
Issues fixed in this release
----------------------------

Issues fixed in this release are detailed at:
https://launchpad.net/maas/+milestone/2.3.0alpha3

MAAS 2.3.0 (alpha2)
===================

Important announcements
-----------------------

**Advanced Network for CentOS & Windows**

The MAAS team is happy to announce that MAAS 2.3 now supports the ability to
perform network configuration for CentOS and Windows. The network
configuration is performed via cloud-init. MAAS CentOS images now use the
latest available version of cloud-init that includes these features.

New Features & Improvements
---------------------------

**CentOS Networking support**

MAAS can now perform machine network configuration for CentOS, giving CentOS
networking feature parity with Ubuntu. The following can now be configured
for MAAS deployed CentOS images:

* Static network configuration.
* Bonds, VLAN and bridge interfaces.

Thanks to the cloud-init team for improving the network configuration
support for CentOS.

**Support for Windows Network configuration**

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows
deployments. This uses the native NetLBFO in Windows 2008+. Contact us for
more information (https://maas.io/contact-us).

**Network Discovery & Beaconing**

MAAS now sends out encrypted beacons to facilitate network discovery and
monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to
UDP port 5240. When registering a new controller, MAAS uses the information
gathered from the beaconing protocol to ensure that newly registered
interfaces on each controller are associated with existing known networks in
MAAS.

**UI improvements**

Minor UI improvements have been made:

* Renamed “Device Discovery” to “Network Discovery”.
* Discovered devices where MAAS cannot determine the hostname now show the
  hostname as “unknown” and greyed out instead of using the MAC address
  manufacturer as the hostname.
Issues fixed in this release
----------------------------

Issues fixed in this release are detailed at:
https://launchpad.net/maas/+milestone/2.3.0alpha1

2.3.0 (alpha1)
==============

Important announcements
-----------------------

**Machine Network configuration now deferred to cloud-init.**

The machine network configuration is now deferred to cloud-init. In previous
MAAS (and curtin) releases, the machine network configuration was performed
by curtin during the installation process. In an effort to consolidate and
improve robustness, network configuration has now been consolidated in
cloud-init.

Since MAAS 2.3 now depends on the latest version of curtin, the network
configuration is now deferred to cloud-init. As such, while MAAS will
continue to send the network configuration to curtin for backwards
compatibility, curtin itself will defer the network configuration to
cloud-init. Cloud-init will then perform such configuration on first boot
after the installation process has completed.

New Features & Improvements
---------------------------

**Django 1.11 support**

MAAS 2.3 now supports the latest Django LTS version, Django 1.11. This
allows MAAS to work with the newer Django version in Ubuntu Artful, which
serves as a preparation for the next Ubuntu LTS release.

* Users running MAAS from the snap in any Ubuntu release will use Django
  1.11.
* Users running MAAS in Ubuntu Artful will use Django 1.11.
* Users running MAAS in Ubuntu Xenial will continue to use Django 1.9.

**Upstream Proxy**

MAAS 2.3 now supports the ability to use an upstream proxy. Doing so
provides greater flexibility for closed environments, provided that:

* It allows MAAS itself to use the corporate proxy at the same time as
  allowing machines to continue to use the MAAS proxy.
* It allows machines that don’t have access to the corporate proxy to have
  access to other pieces of the infrastructure via MAAS’ proxy.
Adding upstream proxy support also includes an improved configuration
on the settings page. Please refer to Settings > Proxy for more
details.

**Fabric deduplication and beaconing**

MAAS is introducing beaconing to improve fabric creation and network
infrastructure discovery. Beaconing is not yet turned on by default in
MAAS 2.3 Alpha 1; however, improvements to fabric discovery and
creation have been made as part of this process. As of alpha 1 MAAS
will no longer create empty fabrics.

**Ephemeral Images over HTTP**

Historically, MAAS has used 'tgt' to provide images over iSCSI for the
ephemeral environments (e.g. commissioning, deployment environment,
rescue mode, etc). MAAS 2.3 changes that behavior in favor of loading
images via HTTP. This means that 'tgt' will be dropped as a dependency
in following releases.

MAAS 2.3 Alpha 1 includes this feature behind a feature flag. While
the feature is enabled by default, users experiencing issues who want
to go back to using 'tgt' can do so by turning off the feature flag:

  maas maas set-config name=http_boot value=False

Issues fixed in this release
----------------------------

Issues fixed in this release are detailed at:

  https://launchpad.net/maas/+milestone/2.3.0alpha1
maas-2.4.2-7034-g2f5deb8b8.orig/docs/conf.py000066400000000000000000000237641333555657500200120ustar00rootroot00000000000000# -*- coding: utf-8 -*-
#
# MAAS documentation build configuration file, created by
# sphinx-quickstart on Thu Jan 19 14:48:25 2012.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

from collections import OrderedDict
from datetime import datetime
import os
from os import environ
from subprocess import (
    CalledProcessError,
    check_output,
)
import sys

from pytz import UTC

# Configure MAAS's settings.
environ.setdefault( "DJANGO_SETTINGS_MODULE", "maasserver.djangosettings.settings") # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # Include '.' in the path so that our custom extension, 'versions', can # be found. sys.path.insert(0, os.path.abspath('.')) # -- Multiple documentation options. # Add a widget to switch between different versions of the documentation to # each generated page. add_version_switcher = False # In order for the version widget to be able to redirect correctly to the # other versions of the documentation, each version of the documentation # has to be accessible at the following addresses: # // -> documentation for trunk. # /1.4/ -> documentation for 1.4. # etc. doc_prefix = 'docs' # Path of the JSON document, relative to homepage of the documentation for trunk # (i.e. '//'), with the list of the versions to include in the # version switcher widget. versions_path = '_static/versions.js' # Versions to include in the version switcher. # Note that the version switcher fetches the list of the documentation versions # from the list published by the trunk documentation (i.e. in '//'). # This means the following list is meaningful only for trunk. # The first item should be the development version. doc_versions = OrderedDict([ ('dev', 'Development trunk'), ('2.0', 'MAAS 2.0'), ('1.9', 'MAAS 1.9'), ('1.8', 'MAAS 1.8'), ('1.7', 'MAAS 1.7'), ('1.5', 'MAAS 1.5'), ('1.4', 'MAAS 1.4'), ]) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 
extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.autosummary', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx', 'sphinx.ext.pngmath', 'sphinx.ext.viewcode', 'versions', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'MAAS' copyright = u'2012-2015, MAAS Developers' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. (version, _), *_ = doc_versions.items() # The full version, including alpha/beta/rc tags. release = version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build', '_templates'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. 
#modindex_common_prefix = [] # AutoDoc autodoc_default_flags = ['members', 'show-inheritance'] autodoc_member_order = 'bysource' autodoc_docstring_signature = True # AutoSummary autosummary_generate = True # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'maas' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] html_theme_path = ['_templates'] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. html_logo = 'media/maas-logo-200.png' # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. html_favicon = 'media/maas.ico' # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. 
#html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'MAASdoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). latex_paper_size = 'a4' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'MAAS.tex', u'MAAS Documentation', u'MAAS Developers', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. 
#latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('man/maas.8', 'maas', u'MAAS API commandline utility', [u'Canonical 2013-2014'], 8), ('man/maas-region.8', 'maas-region', u'MAAS administration tool', [u'Canonical 2013-2014'], 8) ] # Example configuration for intersphinx: refer to the Python standard library. intersphinx_mapping = {'http://docs.python.org/': None} # Gather information about the branch and the build date. try: revision_number = check_output( ['git', 'log', '-1', '--pretty=%H', 'HEAD']).decode('ascii') revision_date = check_output( ['git', 'log', '-1', '--pretty=%ai', 'HEAD']).decode('ascii') except CalledProcessError: # This is not a repository branch. revision_number = 'unknown' revision_date = ( datetime.utcnow().replace(tzinfo=UTC) .strftime('+%Y-%m-%d %H:%M:%S %z')) # Populate html_context with the variables used in the templates. html_context = { 'add_version_switcher': 'true' if add_version_switcher else 'false', 'versions_json_path': '/'.join(['', doc_prefix, versions_path]), 'doc_prefix': doc_prefix, 'revision_number': revision_number, 'revision_date': revision_date } maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/000077500000000000000000000000001333555657500210235ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/building-packages.rst000066400000000000000000000044461333555657500251360ustar00rootroot00000000000000Building Ubuntu packages of MAAS ================================ Using a virtual machine from a cloud provider seems to be easier and less hassle than using a local VM or LXC container, but the same recipe ought to apply. You need to build on the same OS that the package will be targeted to, so use a Precise instance to make packages for Precise, for example. #. 
Start up an instance, log in, and bring it up to date::

     sudo apt-get update && sudo apt-get upgrade

#. Get the appropriate packaging branch::

     bzr branch lp:~maas-maintainers/maas/packaging{...}

   The `MAAS Maintainers `_ own all the `official MAAS branches`_.

#. Move into the new branch directory.

#. Check that all the build dependencies are installed. The
   dependencies are defined in ``debian/control``::

     fgrep -i build-depends -A 10 debian/control

   This will yield, for example::

     Build-Depends: debhelper (>= 8.1.0~),
      dh-apport,
      po-debconf,
      python (>= 2.7),
      python-distribute,
      python-django
     Standards-Version: 3.9.3
     ...

   Install these dependencies::

     sudo apt-get install \
         debhelper dh-apport po-debconf python \
         python-distribute python-django

#. Edit ``debian/changelog`` so it contains:

   * the right upstream revision number in the version,

   * the series you're building for; if ``UNRELEASED`` appears in the
     first entry, ``s/UNRELEASED/trusty/`` (or the series you want),

   * the name and email address that correspond to the PGP key you
     want to use to sign the package; these appear near the end of the
     topmost entry.

#. Build::

     bzr bd -S -- -uc -us

   The latter options tell it not to sign the files. You need to do
   this because the remote machine will not have your GPG key.

#. Sign the build on your local machine::

     debsign -r user@host '~/*.changes'

   where ``user@host`` is an SSH string for accessing the remote
   instance. This will scp the changes and dsc locally, sign them, and
   put them back.

#. On the remote instance you can optionally upload to a PPA::

     dput -fu ppa:maas-maintainers/name-of-ppa *.changes

.. _official MAAS branches: https://code.launchpad.net/~maas-maintainers
maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/cluster-bootstrap.rst000066400000000000000000000040021333555657500252450ustar00rootroot00000000000000Bootstrapping a cluster
=======================

Considerations
--------------

A new cluster needs to register itself with the region.
At the same moment that it's accepted by the region, the region starts configuring it via RPC, so we need an RPC connection open when registering. Before a cluster is accepted, we want to restrict the available RPC calls to a small set, both on the region and the cluster. Before a cluster is accepted, we also do not want to start some services on the cluster, like lease uploads, DHCP scanning, and so forth, because the region will reject interaction from them. Start-up procedure ------------------ This procedure will be followed by existing clusters and new clusters alike: #. Cluster starts. #. If shared secret not available, shutdown, **DONE**. #. ``ClusterClientService`` starts. #. Services other than ``log`` are **not** started. #. Wait for a connection to the region to become available. #. Do not allow any RPC calls other than ``Identify`` and ``Authenticate``. #. Call ``Identify``. #. Call ``Authenticate``. - On success, continue. - On failure, shutdown, **DONE**. #. Permit all other RPC calls. - This allows for side-effects from calling ``Register`` next, like DHCP configuration. #. Call ``Register``. Region accepts cluster. #. Start all services. #. **DONE**. Work items ---------- #. **DONE:** Add ``Authenticate`` RPC call. #. **DONE:** Add ``Register`` RPC call. #. **DONE:** Command-line to install shared-secret. #. **DONE:** Check for shared-secret during start-up (packaging change too?). #. **DONE:** Perform ``Authenticate`` handshake. #. **DONE:** Perform ``Register`` handshake. #. **DONE:** Pass MAAS_URL in ``Register`` call. This replicates functionality found in ``update_nodegroup_maas_url``, which is no longer used. #. Display secret to admins in UI, or provide tool to obtain secret locally on region controller's machine. #. Mechanism to limit available RPC calls. #. Mechanism to defer start-up of "full" services. 
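The pre-authentication gating described in the start-up procedure can be sketched in a few lines of Python. This is a toy illustration only: ``RPCGate`` and the way calls are dispatched here are invented for this sketch and are not MAAS's actual classes; only the call names (``Identify``, ``Authenticate``, ``Register``) come from the procedure above.

```python
# Hypothetical sketch of the RPC gating described above -- not MAAS's
# actual code; the RPCGate class and dispatch mechanism are invented.

PRE_AUTH_CALLS = {"Identify", "Authenticate"}


class RPCGate:
    """Reject all but Identify/Authenticate until the cluster authenticates."""

    def __init__(self):
        self.authenticated = False

    def dispatch(self, call_name, handler):
        if not self.authenticated and call_name not in PRE_AUTH_CALLS:
            raise PermissionError(
                call_name + " is not permitted before authentication")
        result = handler()
        if call_name == "Authenticate":
            # A successful handshake unlocks the remaining calls, so that
            # Register (and its side-effects, like DHCP configuration)
            # can then run.
            self.authenticated = True
        return result


gate = RPCGate()
gate.dispatch("Identify", lambda: "cluster-uuid")  # allowed before auth
gate.dispatch("Authenticate", lambda: True)        # unlocks everything else
gate.dispatch("Register", lambda: None)            # now permitted
```

The point of the sketch is the ordering: the gate opens on a successful ``Authenticate``, *before* ``Register`` is called, which is why registration side-effects are not rejected.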
maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/cluster-registration.rst000066400000000000000000000040721333555657500257510ustar00rootroot00000000000000====================================== How cluster registration works in MAAS ====================================== A region controller associates with one or more cluster controllers, each of which is responsible for contacting the region controller itself and announcing its presence. An admin must accept or reject each cluster that registers itself with the region controller, except in the special circumstance mentioned in :ref:`first-cluster`. There is always at least one cluster controller in MAAS (known as a NodeGroup in the code) which is known as the 'master'. The Nodegroup entry always exists even if no cluster controllers have contacted the region controller yet, so that it can be used as a default when adding nodes in the API or UI before the cluster controller is defined. Once a real cluster controller connects it will become this master. This logic was originally implemented as an easy way to upgrade older installations that were created before nodegroups were introduced. Region Controller Location -------------------------- The cluster obviously needs to know where the region controller is, and this is configured in a file ``/etc/maas/rackd.conf``. This should only ever be modified via the ``maas-rack`` command. .. _first-cluster: First cluster to connect ------------------------ For the convenience of small setups (and the development environment), the first cluster controller to connect to the region controller becomes the 'master' nodegroup, and if the cluster connects *from the same host*, it is automatically accepted. The logic currently looks like this: #. If there is one, the oldest nodegroup is the master. #. If none exists, the code creates a placeholder master nodegroup on the fly, pre-accepted but without a UUID. #. 
If the placeholder is the only nodegroup, the first cluster controller to register becomes the master. Sadly, there is some code complexity to clear up here as this logic is not encapsulated in a single place, but instead in both: * ``NodeGroup.objects.ensure_data()`` and * ``AnonNodeGroupsHandler.register()``. maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/metadata.rst000066400000000000000000000116441333555657500233430ustar00rootroot00000000000000The Metadata API ================ A MAAS region controller provides a separate API (the metadata API) for the benefit of the nodes themselves. This is where a node obtains the details of how it should be set up, such as which SSH keys should be installed in order to give a particular user remote access to the system. As a MAAS user or administrator, you do not need to request this information yourself. It is normally up to ``cloud-init`` to do this while setting up the node. You'll find more about how this works in cloud-init's datasources_ documentation. .. _datasources: http://cloudinit.readthedocs.org/en/latest/topics/datasources.html Similarity to EC2 ----------------- The metadata API is very similar, and partly identical, to the EC2 metadata service. It follows a similar directory structure and provides several items in the same formats. For example, in order to find out its own host name, a node would perform an http GET request for:: /2012-03-01/meta-data/local-hostname The first item in the path is the API version. The API has been extended since March 2012, but not changed incompatibly and so that date is still the current version number. The items following that form a directory hierarchy based on that of the EC2 metadata service. The metadata service "knows" which node makes the request, and so there is no need for the request URL to identify the node whose hostname should be retrieved. The request automatically returns the hostname for the node which requested it. 
Just like EC2, the MAAS metadata API will accept GET requests to:: / /2012-03-01/ /2012-03-01/meta-data/ /2012-03-01/meta-data/instance-id /2012-03-01/meta-data/local-hostname /2012-03-01/meta-data/public-keys /2012-03-01/user-data Hopefully their meanings are fairly obvious. The ``public-keys`` will contain the user's SSH keys. MAAS adds a tarball of scripts that a node should run during its commissioning phase:: /2012-03-01/meta-data/maas-commissioning-scripts There are other differences. Where EC2 makes the metadata service available at a fixed IP address, MAAS configures the location of the metadata service on the node while installing its operating system. It does this through installation :ref:`preseeds `. These preseeds also include the node's access credentials. Additional Directory Trees -------------------------- .. _enlistment-tree: MAAS adds some entirely new directories as well. An enlisting node (which does not have access credentials for the metadata API yet) can anonymously request the same items for itself under ``/enlist/``, e.g.:: /enlist/2012-03-01/meta-data/local-hostname If you set the ``ALLOW_UNSAFE_METADATA_ACCESS`` configuration item, the metadata service will also provide the information on arbitrary nodes to authenticated users under a separate sub-tree. For security reasons this is not recommended, but it may be useful in debugging. With this option enabled, the information for the node with MAC address 99:99:99:99:99:99 is available at:: /2012-03-01/by-mac/99:99:99:99:99:99/meta-data/local-hostname And so on for the other information. There is a similar facility keyed by MAAS system identifiers. .. _curtin-tree: Finally, a curtin-specific metadata API with largely the same information lives in the ``/curtin/`` subtree:: /curtin/2012-03-01/meta-data/local-hostname The curtin subtree however differs in the ``user-data`` endpoint. It returns a curtin-specific form of user data. 
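The directory trees above are plain URL prefixes, so a node-side client needs nothing more than string assembly to reach them. A minimal sketch follows; the host is an assumption borrowed from the example preseed elsewhere in these docs (a real node learns the metadata URL, and its OAuth credentials, from its installation preseed):

```python
# Illustrative only: assembling metadata item URLs like those listed
# above. The host is an assumption; real nodes get it from the preseed.
from urllib.parse import urljoin

METADATA_ROOT = "http://192.168.100.10/MAAS/metadata/"
API_VERSION = "2012-03-01"


def metadata_url(item):
    """Return the URL for a metadata item, e.g. 'meta-data/local-hostname'."""
    return urljoin(METADATA_ROOT, "%s/%s" % (API_VERSION, item))


for item in ("meta-data/instance-id",
             "meta-data/local-hostname",
             "meta-data/public-keys",
             "user-data"):
    print(metadata_url(item))
```

Note that no node identifier appears in any of these URLs: as described above, the service identifies the requester from its credentials (or, for enlistment, its IP address).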
Authentication
--------------

Most metadata requests are authenticated similarly to requests to the
region-controller API, through OAuth. Every node in a MAAS has its own
OAuth key. (On the region controller side, these keys collectively
belong to a single special user called ``node-init``. You will not see
such special users listed in the UI, however.)

When a node asks for information about itself, the OAuth key is what
tells the metadata service which node that is.

Not all requests are authenticated in this way. For instance, a node
can access the items under the enlistment subdirectory (see
:ref:`above `) anonymously. The metadata service will
identify the requesting node by its IP address.

API Operations
--------------

The MAAS metadata API supports a few non-GET operations. These work
just like the ones on the main :ref:`region-controller API
`, but they are meant to be invoked by nodes. The URL for
these calls is ``/2012-03-01/``, and the operation name is passed as a
multipart form item called "op". Other parameters are passed in the
same way.

The ``signal`` call notifies the region controller of the state of a
commissioning node. The node sends running updates, as well as output
produced by the commissioning scripts, and finally completion
information through this call.

When a node is done installing, it may call POST operations
``netboot_on`` and ``netboot_off`` to instruct MAAS to enable or
disable its network boot setting.
maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/notes/000077500000000000000000000000001333555657500221535ustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/notes/anatomy-of-recommissioning-in-maas-2.0.rst000066400000000000000000001072761333555657500320170ustar00rootroot00000000000000..
-*- mode: rst -*-

****************************************
Anatomy of a recommissioning in MAAS 2.0
****************************************

**2016-04-30, mpontillo**

You may be asking yourself, "what exactly happens during
commissioning" in MAAS? Well, maybe you're not. ;-) But as part of a
recent bug I triaged, I wanted to find out for myself. So I analyzed a
packet capture during a recent recommission. I (painstakingly) looked
at almost every TCP stream, to see what status messages cloud-init was
posting, and when. See the timeline later.

Also note that you can use a capture filter of "syslog" to see the
syslog of the commissioning node, even if the syslog packets are being
discarded by the server!

Notes on capturing the preseed
------------------------------

If you want to see the preseed and you have a packet capture, the
easiest way I found is to use *File > Export Objects > HTTP...* in
Wireshark. You can export them all if you want, but there will be a
lot of data. The very first one is the preseed; once exported, it will
have a filename like ``%3fop=get_preseed``. See an example below,
based on what it was in the packet trace I looked at.

If you dig deeper, you can also see things like the downloaded
``.deb`` files, the commissioning scripts tarball, the output of each
commissioning script, and each JSON request/reply. Similarly, if you
need to do some triage based on the files that were TFTP'd, there is
also *File > Export Objects > TFTP*.

.. code-block:: yaml

    #cloud-config
    apt_proxy: http://192.168.100.10:8000/
    datasource:
      MAAS: {consumer_key: 7uCdQBcxmKWMrpaJ8W, metadata_url: 'http://192.168.100.10/MAAS/metadata/',
        token_key: nYLwP4pYm5q2aqvCzU, token_secret: GppvznAv6cueKnFKV6tKGqxdraNEzkjJ}
    power_state: {condition: test ! -e /tmp/block-poweroff, delay: now, mode: poweroff,
      timeout: 3600}
    reporting:
      maas: {consumer_key: 7uCdQBcxmKWMrpaJ8W, endpoint: 'http://192.168.100.10/MAAS/metadata/status/4y3h7r',
        token_key: nYLwP4pYm5q2aqvCzU, token_secret: GppvznAv6cueKnFKV6tKGqxdraNEzkjJ,
        type: webhook}
    rsyslog:
      remotes: {maas: '192.168.100.10:514'}
    system_info:
      package_mirrors:
      - arches: [i386, amd64]
        failsafe: {primary: 'http://archive.ubuntu.com/ubuntu', security: 'http://security.ubuntu.com/ubuntu'}
        search:
          primary: ['http://archive.ubuntu.com/ubuntu']
          security: ['http://archive.ubuntu.com/ubuntu']
      - arches: [default]
        failsafe: {primary: 'http://ports.ubuntu.com/ubuntu-ports', security: 'http://ports.ubuntu.com/ubuntu-ports'}
        search:
          primary: ['http://ports.ubuntu.com/ubuntu-ports']
          security: ['http://ports.ubuntu.com/ubuntu-ports']

Timeline
--------

(0) PXE BIOS loads
==================

Current time: 9.2s

(This was on a ``qemu-kvm`` virtual machine.)

- Sends a *DHCP Discover* packet from ``0.0.0.0`` to
  ``255.255.255.255``.
- Sends option 60 (vendor class identifier):
  ``PXEClient:Arch:00000:UNDI:002001``
- Sends numerous options related to netboot, including 66 (tftp server
  name), 67 (bootfile name), 129 through 135 (PXE), 175 (Etherboot)
  and 203 (?).
- Sends ICMPv6 router solicitation packets to the ``ff02::2``
  multicast address (which corresponds to multicast MAC
  ``33:33:00:00:00:02``) to check for an IPv6 router.

(1) MAAS DHCP offers an IP address
==================================

Current time: 10.2s

- The offer is made from the dynamic range for the subnet the node
  booted from.
- The *DHCP Offer* packet has a 30 second lease time, specifies a boot
  filename of ``pxelinux.0``, and a ``next-server`` IP address of the
  MAAS rack.
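The preseed shown earlier is plain cloud-config YAML, so once exported it can also be inspected programmatically. A proper YAML parser would be more robust; the quick triage helper below just greps the flow-style keys with the standard library, and the ``preseed_value`` helper name is made up for this sketch:

```python
import re

# A fragment of the captured preseed from the section above.
preseed = """
datasource:
  MAAS: {consumer_key: 7uCdQBcxmKWMrpaJ8W, metadata_url: 'http://192.168.100.10/MAAS/metadata/',
    token_key: nYLwP4pYm5q2aqvCzU, token_secret: GppvznAv6cueKnFKV6tKGqxdraNEzkjJ}
"""


def preseed_value(key, text):
    """Extract a scalar value for `key` from flow-style cloud-config YAML."""
    match = re.search(r"%s:\s*'?([^',}\s]+)" % re.escape(key), text)
    return match.group(1) if match else None


print(preseed_value("metadata_url", preseed))
print(preseed_value("token_key", preseed))
```

This is handy when comparing several captures: the metadata endpoint and token key are the two values that tie the rest of the trace (the status POSTs and metadata GETs below) back to a particular node.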
(2) PXE BIOS sends a DHCP ACK ============================= Current time: 12.1s (3) PXE BIOS announces itself to the world ========================================== First with an ARP for the ``next-server``, and then it begins TFTP requests: - First for ``pxelinux.0``. - Next, various combinations of ``ldlinux.c32``, ``/boot/isolinux/ldlinux.c32``, ``/isolinux/ldlinux.c32``, (et cetera) until it finds ``/syslinux/ldlinux.c32``. - Next, ``pxelinux.cfg/`` (fails), then ``pxelinux.cfg/01-`` (succeeds) The configuration file ultimately contains:: DEFAULT execute LABEL execute SAY Booting under MAAS direction... SAY nomodeset iscsi_target_name=iqn.2004-05.com.ubuntu:maas:ephemeral-ubuntu-amd64-generic-xenial-release iscsi_target_ip= iscsi_target_port=3260 iscsi_initiator= ip=:::::BOOTIF ro root=/dev/disk/by-path/ip-:3260-iscsi-iqn.2004-05.com.ubuntu:maas:ephemeral-ubuntu-amd64-generic-xenial-release-lun-1 overlayroot=tmpfs cloud-config-url=http:///MAAS/metadata/latest/by-id//?op=get_preseed log_host= log_port=514 KERNEL ubuntu/amd64/generic/xenial/release/boot-kernel INITRD ubuntu/amd64/generic/xenial/release/boot-initrd APPEND nomodeset iscsi_target_name=iqn.2004-05.com.ubuntu:maas:ephemeral-ubuntu-amd64-generic-xenial-release iscsi_target_ip= iscsi_target_port=3260 iscsi_initiator= ip=:::::BOOTIF ro root=/dev/disk/by-path/ip-:3260-iscsi-iqn.2004-05.com.ubuntu:maas:ephemeral-ubuntu-amd64-generic-xenial-release-lun-1 overlayroot=tmpfs cloud-config-url=http:///MAAS/metadata/latest/by-id//?op=get_preseed log_host= log_port=514 IPAPPEND 2 - Next, TFTP requests ``ubuntu/amd64/generic/xenial/release/boot-kernel`` (takes ~1 second) - Next, TFTP requests ``ubuntu/amd64/generic/xenial/release/boot-initrd`` (takes ~3 seconds) Start time: 21.1s End time: 25.2s (4) The commissioning node boots the kernel and loads the initrd ================================================================ Current time: 31.5s This is evidenced by a new *DHCP Discover* request (followed by an 
offer, request, and ack) this time without any of the netboot options.

The node also sends a multicast listener report to ``ff02::16``,
followed by a neighbor solicitation for its link-local address (this
is for IPv6 duplicate address detection purposes; 1 second later, when
duplicate address detection completes, it sends a router solicitation
message from the address it originally probed with the neighbor
solicitation).

(5) An iSCSI session begins for the ephemeral image
===================================================

This will persist during the remainder of the commissioning, so it's
best to filter it out with a display filter if you're viewing the
commissioning in Wireshark (``not tcp.port == 3260``).

(6) A DNS query (A/AAAA) is issued for ntp.ubuntu.com
=====================================================

Current time: 35.4s

(7) The node ARPs for the router
================================

Current time: 35.7s

So that it can try to reach ntp.ubuntu.com, probably!

(8) The node tries to look up an A record for "ubuntu"
======================================================

Current time: 35.7s

Not sure why (because that's its hostname?) DNS returns "no such
name".
(9) cloud-init requests its metadata ==================================== :: GET /MAAS/metadata/latest/by-id//?op=get_preseed HTTP/1.1 Host: User-Agent: Cloud-Init/0.7.7 Accept: */* Connection: keep-alive Accept-Encoding: gzip, deflate (10) cloud-init posts its first status, and searches for a data source ====================================================================== Example:: POST /MAAS/metadata/status/ HTTP/1.1 Host: User-Agent: python-requests/2.9.1 Authorization: OAuth oauth_nonce="62097414331357635371461972194", oauth_timestamp="1461972194", oauth_version="1.0", oauth_signature_method="PLAINTEXT", oauth_consumer_key="7uCdQBcxmKWMrpaJ8W", oauth_token="nYLwP4pYm5q2aqvCzU", oauth_signature="%26GppvznAv6cueKnFKV6tKGqxdraNEzkjJ" Accept: */* Content-Length: 171 Connection: keep-alive Accept-Encoding: gzip, deflate cloud-init continues to post status throughout the process, such as:: {"description": "attempting to read from cache [trust]", "name": "init-network/check-cache", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "no cache found", "name": "init-network/check-cache"} {"description": "searching for network data from DataSourceNoCloudNet", "name": "init-network/search-NoCloudNet", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "no network data found from DataSourceNoCloudNet", "name": "init-network/search-NoCloudNet"} {"description": "searching for network data from DataSourceConfigDriveNet", "name": "init-network/search-ConfigDriveNet", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "no network data found from 
DataSourceConfigDriveNet", "name": "init-network/search-ConfigDriveNet"} {"description": "searching for network data from DataSourceOpenNebulaNet", "name": "init-network/search-OpenNebulaNet", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "no network data found from DataSourceOpenNebulaNet", "name": "init-network/search-OpenNebulaNet"} {"description": "searching for network data from DataSourceAzureNet", "name": "init-network/search-AzureNet", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "no network data found from DataSourceAzureNet", "name": "init-network/search-AzureNet"} {"description": "searching for network data from DataSourceAltCloud", "name": "init-network/search-AltCloud", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "no network data found from DataSourceAltCloud", "name": "init-network/search-AltCloud"} {"description": "searching for network data from DataSourceOVFNet", "name": "init-network/search-OVFNet", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "no network data found from DataSourceOVFNet", "name": "init-network/search-OVFNet"} {"description": "searching for network data from DataSourceMAAS", "name": "init-network/search-MAAS", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} Aha! It seems to have found the MAAS data source. 
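The ``Authorization`` header in the status POST shown earlier is ordinary OAuth 1.0 with the PLAINTEXT signature method: the signature is the percent-encoded consumer secret and token secret joined by ``&``, encoded once more when placed in the header parameter. MAAS issues an empty consumer secret, which is why the captured signature begins with ``%26`` (an encoded ``&``). A minimal reconstruction, using the token secret from the captured preseed:

```python
from urllib.parse import quote


def plaintext_oauth_signature(consumer_secret, token_secret):
    """OAuth 1.0 PLAINTEXT signature: enc(consumer_secret)&enc(token_secret),
    percent-encoded once more for use as the oauth_signature header value."""
    raw = "%s&%s" % (quote(consumer_secret, safe=""),
                     quote(token_secret, safe=""))
    return quote(raw, safe="")


# The consumer secret is empty; the token secret comes from the preseed.
sig = plaintext_oauth_signature("", "GppvznAv6cueKnFKV6tKGqxdraNEzkjJ")
print(sig)  # %26GppvznAv6cueKnFKV6tKGqxdraNEzkjJ
```

The computed value matches the ``oauth_signature="%26GppvznAv6cueKnFKV6tKGqxdraNEzkjJ"`` seen in the capture, which confirms that no HMAC signing is involved: the credentials themselves are the signature, so the metadata service can identify the node from the token key alone.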
(11) cloud-init requests metadata from MAAS
===========================================

Current time: 39.5s

Sends a GET request::

    GET /MAAS/metadata//2012-03-01/meta-data/instance-id HTTP/1.1

(along with its OAuth *Authorization* header.)

Followed by the following requests::

    GET /MAAS/metadata//2012-03-01/meta-data/local-hostname HTTP/1.1
    GET /MAAS/metadata//2012-03-01/meta-data/instance-id HTTP/1.1
    GET /MAAS/metadata//2012-03-01/meta-data/public-keys HTTP/1.1
    GET /MAAS/metadata//2012-03-01/user-data HTTP/1.1

(The result of this request is a binary blob — presumably the commissioning
scripts.)

Continued::

    POST /MAAS/metadata/status/4y3h7r HTTP/1.1

    {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "found network data from DataSourceMAAS", "name": "init-network/search-MAAS"}

(12) cloud-init begins consuming the user-data
==============================================

Current time: 39.8s

It posts more status::

    {"description": "reading and applying user-data", "name": "init-network/consume-user-data", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"}
    {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "reading and applying user-data", "name": "init-network/consume-user-data"}
    {"description": "reading and applying vendor-data", "name": "init-network/consume-vendor-data", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"}
    {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "reading and applying vendor-data", "name": "init-network/consume-vendor-data"}
    {"description": "running config-migrator with frequency always", "name": "init-network/config-migrator", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"}
    {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp":
1461972194.3447218, "description": "config-migrator ran successfully", "name": "init-network/config-migrator"} {"description": "running config-ubuntu-init-switch with frequency once-per-instance", "name": "init-network/config-ubuntu-init-switch", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "config-ubuntu-init-switch ran successfully", "name": "init-network/config-ubuntu-init-switch"} {"description": "running config-seed_random with frequency once-per-instance", "name": "init-network/config-seed_random", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"description": "running config-seed_random with frequency once-per-instance", "name": "init-network/config-seed_random", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"description": "running config-bootcmd with frequency always", "name": "init-network/config-bootcmd", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "config-bootcmd ran successfully", "name": "init-network/config-bootcmd"} {"description": "running config-write-files with frequency once-per-instance", "name": "init-network/config-write-files", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "config-write-files ran successfully", "name": "init-network/config-write-files"} {"description": "running config-growpart with frequency always", "name": "init-network/config-growpart", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, 
"description": "config-growpart ran successfully", "name": "init-network/config-growpart"} {"description": "running config-resizefs with frequency always", "name": "init-network/config-resizefs", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "config-resizefs ran successfully", "name": "init-network/config-resizefs"} {"description": "running config-set_hostname with frequency once-per-instance", "name": "init-network/config-set_hostname", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"description": "running config-set_hostname with frequency once-per-instance", "name": "init-network/config-set_hostname", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"description": "running config-update_hostname with frequency always", "name": "init-network/config-update_hostname", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "config-update_hostname ran successfully", "name": "init-network/config-update_hostname"} {"description": "running config-update_etc_hosts with frequency always", "name": "init-network/config-update_etc_hosts", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"description": "running config-update_etc_hosts with frequency always", "name": "init-network/config-update_etc_hosts", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"description": "running config-ca-certs with frequency once-per-instance", "name": "init-network/config-ca-certs", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"description": "running config-ca-certs with frequency once-per-instance", "name": "init-network/config-ca-certs", 
"event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"description": "running config-rsyslog with frequency once-per-instance", "name": "init-network/config-rsyslog", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"description": "running config-rsyslog with frequency once-per-instance", "name": "init-network/config-rsyslog", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} (I suppose this means from about ~42 seconds onward in the capture, we'll see rsyslog entries, too.) :: {"description": "running config-users-groups with frequency once-per-instance", "name": "init-network/config-users-groups", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "config-users-groups ran successfully", "name": "init-network/config-users-groups"} {"description": "running config-ssh with frequency once-per-instance", "name": "init-network/config-ssh", "event_type": "start", "timestamp": 1461972194.3447218, "origin": "cloudinit"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "config-ssh ran successfully", "name": "init-network/config-ssh"} {"origin": "cloudinit", "event_type": "finish", "result": "SUCCESS", "timestamp": 1461972194.3447218, "description": "searching for network datasources", "name": "init-network"} [stream 61 seems to be an iSCSI conversation] :: {"event_type": "start", "description": "running config-emit_upstart with frequency always", "origin": "cloudinit", "name": "modules-config/config-emit_upstart", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-emit_upstart", "description": "config-emit_upstart ran successfully", "origin": "cloudinit"} {"event_type": "start", 
"description": "running config-disk_setup with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-disk_setup", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-disk_setup", "description": "config-disk_setup ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-mounts with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-mounts", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-mounts", "description": "config-mounts ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-ssh-import-id with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-ssh-import-id", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-ssh-import-id", "description": "config-ssh-import-id ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-locale with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-locale", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-locale", "description": "config-locale ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-set-passwords with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-set-passwords", "timestamp": 1461972199.9740574} {"event_type": "start", "description": "running config-set-passwords with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-set-passwords", "timestamp": 1461972199.9740574} {"event_type": "start", 
"description": "running config-snappy with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-snappy", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-snappy", "description": "config-snappy ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-grub-dpkg with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-grub-dpkg", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-grub-dpkg", "description": "config-grub-dpkg ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-apt-pipelining with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-apt-pipelining", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-apt-pipelining", "description": "config-apt-pipelining ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-apt-configure with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-apt-configure", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-apt-configure", "description": "config-apt-configure ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-package-update-upgrade-install with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-package-update-upgrade-install", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-package-update-upgrade-install", "description": 
"config-package-update-upgrade-install ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-fan with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-fan", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-fan", "description": "config-fan ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-landscape with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-landscape", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-landscape", "description": "config-landscape ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-timezone with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-timezone", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-timezone", "description": "config-timezone ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-lxd with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-lxd", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-lxd", "description": "config-lxd ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-puppet with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-puppet", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-puppet", "description": "config-puppet ran successfully", "origin": "cloudinit"} 
{"event_type": "start", "description": "running config-chef with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-chef", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-chef", "description": "config-chef ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-salt-minion with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-salt-minion", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-salt-minion", "description": "config-salt-minion ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-mcollective with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-mcollective", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-mcollective", "description": "config-mcollective ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-disable-ec2-metadata with frequency always", "origin": "cloudinit", "name": "modules-config/config-disable-ec2-metadata", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-disable-ec2-metadata", "description": "config-disable-ec2-metadata ran successfully", "origin": "cloudinit"} {"event_type": "start", "description": "running config-runcmd with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-runcmd", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-runcmd", "description": "config-runcmd ran successfully", "origin": "cloudinit"} 
{"event_type": "start", "description": "running config-byobu with frequency once-per-instance", "origin": "cloudinit", "name": "modules-config/config-byobu", "timestamp": 1461972199.9740574} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config/config-byobu", "description": "config-byobu ran successfully", "origin": "cloudinit"} {"result": "SUCCESS", "event_type": "finish", "timestamp": 1461972199.9740574, "name": "modules-config", "description": "running modules for config", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-rightscale_userdata with frequency once-per-instance", "name": "modules-final/config-rightscale_userdata", "origin": "cloudinit", "event_type": "start"} {"timestamp": 1461972207.09086, "description": "config-rightscale_userdata ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-rightscale_userdata", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-scripts-vendor with frequency once-per-instance", "name": "modules-final/config-scripts-vendor", "origin": "cloudinit", "event_type": "start"} {"timestamp": 1461972207.09086, "description": "config-scripts-vendor ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-scripts-vendor", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-scripts-per-once with frequency once", "name": "modules-final/config-scripts-per-once", "origin": "cloudinit", "event_type": "start"} {"timestamp": 1461972207.09086, "description": "config-scripts-per-once ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-scripts-per-once", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-scripts-per-boot with frequency always", "name": "modules-final/config-scripts-per-boot", "origin": "cloudinit", "event_type": "start"} 
{"timestamp": 1461972207.09086, "description": "config-scripts-per-boot ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-scripts-per-boot", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-scripts-per-instance with frequency once-per-instance", "name": "modules-final/config-scripts-per-instance", "origin": "cloudinit", "event_type": "start"} {"timestamp": 1461972207.09086, "description": "config-scripts-per-instance ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-scripts-per-instance", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-scripts-user with frequency once-per-instance", "name": "modules-final/config-scripts-user", "origin": "cloudinit", "event_type": "start"} [stream 120 is http://archive.ubuntu.com//ubuntu/dists/xenial/InRelease] [stream 121 is a duplicate request which returns Not Modified] [stream 122 through stream 125 is updates, backport, security] Jumping around a bit, finally when the filter is set to ``tcp.stream eq 171``, some commissioning output is posted:: POST /MAAS/metadata//2012-03-01/ HTTP/1.1 Accept-Encoding: identity Connection: close Host: ... User-Agent: Python-urllib/3.5 Authorization: OAuth ... --IgeOqQzkofxNCLEJhNqXEZVCsEXdZgS Content-Disposition: form-data; name="op" signal --IgeOqQzkofxNCLEJhNqXEZVCsEXdZgS Content-Disposition: form-data; name="script_result" 0 --IgeOqQzkofxNCLEJhNqXEZVCsEXdZgS Content-Disposition: form-data; name="status" WORKING --IgeOqQzkofxNCLEJhNqXEZVCsEXdZgS Content-Disposition: form-data; name="error" finished 00-maas-03-install-lldpd [3/9]: 0 --IgeOqQzkofxNCLEJhNqXEZVCsEXdZgS Content-Disposition: form-data; name="00-maas-03-install-lldpd.out"; filename="00-maas-03-install-lldpd.out" Content-Type: application/octet-stream Reading package lists... Building dependency tree... Reading state information... 
The following additional packages will be installed: libjansson4 Suggested packages: snmpd The following NEW packages will be installed: libjansson4 lldpd 0 upgraded, 2 newly installed, 0 to remove and 17 not upgraded. Need to get 171 kB of archives. After this operation, 577 kB of additional disk space will be used. Get:1 http://archive.ubuntu.com//ubuntu xenial/main amd64 libjansson4 amd64 2.7-3 [26.9 kB] Get:2 http://archive.ubuntu.com//ubuntu xenial/universe amd64 lldpd amd64 0.7.19-1 [145 kB] Fetched 171 kB in 0s (0 B/s) Selecting previously unselected package libjansson4:amd64. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 25719 files and directories currently installed.) Preparing to unpack .../libjansson4_2.7-3_amd64.deb ... Unpacking libjansson4:amd64 (2.7-3) ... Selecting previously unselected package lldpd. Preparing to unpack .../lldpd_0.7.19-1_amd64.deb ... Unpacking lldpd (0.7.19-1) ... Processing triggers for libc-bin (2.23-0ubuntu3) ... Processing triggers for man-db (2.7.5-1) ... Processing triggers for ureadahead (0.100.0-19) ... Processing triggers for systemd (229-4ubuntu4) ... Setting up libjansson4:amd64 (2.7-3) ... Setting up lldpd (0.7.19-1) ... Processing triggers for libc-bin (2.23-0ubuntu3) ... Processing triggers for ureadahead (0.100.0-19) ... Processing triggers for systemd (229-4ubuntu4) ... 
--IgeOqQzkofxNCLEJhNqXEZVCsEXdZgS-- HTTP/1.1 200 OK Date: Fri, 29 Apr 2016 23:23:40 GMT Server: TwistedWeb/16.0.0 Content-Type: text/plain X-Maas-Api-Hash: 330962629c417d2f60a5e18c279ca1db7b710cf3 X-Frame-Options: SAMEORIGIN Vary: Authorization,Cookie,Accept-Encoding Connection: close Transfer-Encoding: chunked 2 OK 0 This behavior continues until all the scripts are finished. Finally, in stream 191, the scripts finish:: {"timestamp": 1461972207.09086, "description": "config-scripts-user ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-scripts-user", "origin": "cloudinit"} and cloud-init continues with other things:: {"timestamp": 1461972207.09086, "description": "running config-ssh-authkey-fingerprints with frequency once-per-instance", "name": "modules-final/config-ssh-authkey-fingerprints", "origin": "cloudinit", "event_type": "start"} {"timestamp": 1461972207.09086, "description": "config-ssh-authkey-fingerprints ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-ssh-authkey-fingerprints", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-keys-to-console with frequency once-per-instance", "name": "modules-final/config-keys-to-console", "origin": "cloudinit", "event_type": "start"} {"timestamp": 1461972207.09086, "description": "config-keys-to-console ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-keys-to-console", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-phone-home with frequency once-per-instance", "name": "modules-final/config-phone-home", "origin": "cloudinit", "event_type": "start"} By now MAAS has revoked the ``oauth_token`` and returns *Authorization Error: Invalid access token: nYLwP4pYm5q2aqvCzU* but cloud-init keeps posting:: {"timestamp": 1461972207.09086, "description": "config-phone-home ran successfully", "event_type": "finish", 
"result": "SUCCESS", "name": "modules-final/config-phone-home", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-final-message with frequency always", "name": "modules-final/config-final-message", "origin": "cloudinit", "event_type": "start"} {"timestamp": 1461972207.09086, "description": "config-final-message ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-final-message", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running config-power-state-change with frequency once-per-instance", "name": "modules-final/config-power-state-change", "origin": "cloudinit", "event_type": "start"} {"timestamp": 1461972207.09086, "description": "config-power-state-change ran successfully", "event_type": "finish", "result": "SUCCESS", "name": "modules-final/config-power-state-change", "origin": "cloudinit"} {"timestamp": 1461972207.09086, "description": "running modules for final", "event_type": "finish", "result": "SUCCESS", "name": "modules-final", "origin": "cloudinit"} (13) Finished ============= Total time: 130s maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/notes/check-imports.rst000066400000000000000000000122111333555657500254520ustar00rootroot00000000000000.. -*- mode: rst -*- **************** Checking imports **************** **2016-03-03, allenap** ``utilities/check-imports`` is a static import checker that replaces ``maasfascist``, which intercepted imports at runtime. The latter sounds like it would be more effective, but it didn't work out in practice. Working as an import hook means it could only ever intercept the *first* import of a module. Often the code doing the importing was core Django, as it pulled in applications. It was not possible to, for example, impose a different policy on migrations, because these only ever run after Django has imported the world. ``maasfascist`` was fiddly in other ways. 
When walking the stack, you find some frames without associated modules
(this seems to be new to Python 3) or you find multiple frames belonging to
importlib or modules related to its workings. The code was getting unwieldy
and slow. In addition, the import hook API that ``maasfascist`` used had
been updated in Python 3, so there was work on the table to update it
(Python 3 still runs old hooks, but the old API is, AFAIK, already
deprecated). I decided to cut losses and switch to a static check. It won't
detect abuses of policy via ``exec()`` or ``importlib``, but we can deal
with those in review; they're not techniques we make much use of.

``check-imports`` examines the AST of every file for which policy has been
defined, extracting all imports. These are normalised into fully-qualified
names and tested against policy. Policy is quite simple: sets of files are
each associated with a ``Rule``. These are defined within
``utilities/check-imports`` itself. For example::

    # The set of test-only files.
    Tests = files(
        "src/**/test_*.py",
        "src/**/testing/**/*.py",
        "src/**/testing.py")

    # The set of apiclient files.
    APIClient = files("src/apiclient/**/*.py")

    # See check-imports for the definitions of these:
    StandardLibraries = Pattern(...)
    TestingLibraries = Pattern(...)

    # A list of (files, rule) tuples that define the policy to be applied.
    # Below, the first tuple defines the policy that will be applied to
    # distributed apiclient code, the second is the policy for apiclient's
    # tests.
    checks = [
        (
            APIClient - Tests,
            Rule(
                Allow("apiclient|apiclient.**"),
                Allow("django.utils.**"),
                Allow("oauth.oauth"),
                Allow(StandardLibraries),
            ),
        ),
        (
            APIClient & Tests,
            Rule(
                Allow("apiclient|apiclient.**"),
                Allow("django.**"),
                Allow("oauth.oauth"),
                Allow("piston3|piston3.**"),
                Allow(StandardLibraries),
                Allow(TestingLibraries),
            ),
        ),
        ...
    ]

There are two possible *actions*: ``Allow`` and ``Deny``. The latter isn't
used in any policy so far, but that's because imports are denied by default.
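To make the pattern language concrete, here is a minimal, hypothetical
sketch of how such patterns could be matched by translating them to regular
expressions. ``compile_pattern`` is an illustrative helper, not part of
``check-imports``; it assumes ``|`` separates alternatives, ``**`` matches
one or more dotted segments, and ``*`` matches exactly one segment::

    import re

    def compile_pattern(pattern):
        """Translate a check-imports-style pattern into a regex.

        Hypothetical sketch: "|" separates alternatives, "**" matches
        any dotted suffix, "*" matches exactly one name segment, and
        everything else matches literally.
        """
        alternatives = []
        for alternative in pattern.split("|"):
            parts = []
            for segment in alternative.split("."):
                if segment == "**":
                    parts.append(r"[^.]+(?:\.[^.]+)*")
                elif segment == "*":
                    parts.append(r"[^.]+")
                else:
                    parts.append(re.escape(segment))
            alternatives.append(r"\.".join(parts))
        return re.compile(r"\A(?:%s)\Z" % "|".join(alternatives))

    allow = compile_pattern("foo.bar|foo.bar.**")
    print(bool(allow.match("foo.bar")))          # True: foo.bar itself
    print(bool(allow.match("foo.bar.baz.qux")))  # True: any submodule
    print(bool(allow.match("foo.barbaz")))       # False: not a submodule

Under this reading, ``Deny`` would use the same compiled patterns but with
the opposite verdict, and a ``Rule`` would simply test an import name
against each action in turn.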
Actions are initialised with patterns:

* ``Allow("foo.bar")`` allows for ``import foo.bar`` **or** ``from foo
  import bar`` but **not** for ``import foo.bar.baz``.

* ``Allow("foo.bar.*")`` allows for ``import foo.bar.baz`` **or** ``from
  foo.bar import bar``, but **not** for ``import foo.bar`` and **not**
  ``from foo.bar.baz import thing``, i.e. it allows for any name within
  ``foo.bar`` to be imported.

* ``Allow("foo.bar.**")`` allows for ``import foo.bar.baz`` **or** ``from
  foo.bar import bar`` **or** ``import foo.bar.alice.bob.carol``, i.e. it
  allows for any name or submodule of ``foo.bar`` to be imported.

* ``Allow("foo.bar|foo.bar.**")`` allows ``foo.bar`` **or** any name or
  submodule of ``foo.bar`` to be imported.

* ``Allow(Pattern("foo"))`` is equivalent to ``Allow("foo")``. You can
  pre-create ``Pattern`` instances and ``Allow`` or ``Deny`` them as
  necessary.

Multiple patterns can be passed to ``Allow`` or ``Deny``, either as
separate strings — ``Allow("foo", "bar")`` — or combined using ``|`` as
above.

Any number of actions are wrapped up into a ``Rule``::

    rule = Rule(Allow("foo"), Allow("bar"))

Rules can also be combined using ``|``::

    rule = Rule(Allow("foo")) | Rule(Allow("bar"))

You'll see examples of this in ``check-imports``.

Having said all that, the best way to learn this is to find yourself at the
sharp end of ``check-imports``'s policy and having to add the rule to
permit what you need.

I've already discovered `bug 1547874`_ and `bug 1547877`_ with
``check-imports``, but it should now *prevent* bugs like that.

.. _bug 1547874: https://bugs.launchpad.net/maas/+bug/1547874
.. _bug 1547877: https://bugs.launchpad.net/maas/+bug/1547877

Another goal for ``check-imports`` is to inhibit the use of application
code from within Django-native migrations, which I expect to play out
something like this: first, we replace all imports of application code in
existing (non-South) migrations with *copies* of the imported code; second,
we tighten up the import rules to prevent future migrations from landing
with application imports, so that we don't forget to copy code in when
generating new migrations.

``check-imports`` is called by ``make lint``, so it should become part of
your workflow without any change necessary.

maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/notes/index.rst

.. -*- mode: rst -*-

*****************
Development notes
*****************

A collection of miscellaneous notes, perhaps first shared as emails with
the development team, that might be useful to developers.

.. toctree::
   :maxdepth: 1

   check-imports
   unhandled-error-in-deferred
   anatomy-of-recommissioning-in-maas-2.0

maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/notes/unhandled-error-in-deferred.rst

.. -*- mode: rst -*-

***************************
Unhandled error in Deferred
***************************

**2016-03-16, allenap**

Last night I saw an *Unhandled error in Deferred* message when running
``bin/test.rack`` and it took me a while to figure it out. I managed to
track it down to the following code::

    def test_chassis_type_unknown_logs_error_to_maaslog(self):
        fake_error = factory.make_name('error')
        self.patch(clusterservice, 'maaslog')
        mock_deferToThread = self.patch_autospec(
            clusterservice, 'deferToThread')
        mock_deferToThread.return_value = fail(Exception(fake_error))
        ...

What's the matter with that? That's not obviously problematic.
``fail(an_exception)`` creates a Deferred that's itching to call an errback
with the given error. In this test nothing was ever calling the mocked
``deferToThread``, so no errback was ever called, and Twisted complained
about it when the Deferred was garbage collected.

``deferToThread`` was mocked here as belt-n-braces: if the method under
test did not follow the expected execution path this mock would prevent
real work being done in a thread. In this case I felt it was safe to omit
it:

.. code-block:: udiff

      def test_chassis_type_unknown_logs_error_to_maaslog(self):
    -     fake_error = factory.make_name('error')
          self.patch(clusterservice, 'maaslog')
    -     mock_deferToThread = self.patch_autospec(
    -         clusterservice, 'deferToThread')
    -     mock_deferToThread.return_value = fail(Exception(fake_error))

but I could have fixed it like so:

.. code-block:: udiff

      def test_chassis_type_unknown_logs_error_to_maaslog(self):
    -     fake_error = factory.make_name('error')
    +     fake_error = factory.make_exception()
          self.patch(clusterservice, 'maaslog')
          mock_deferToThread = self.patch_autospec(
              clusterservice, 'deferToThread')
    -     mock_deferToThread.return_value = fail(Exception(fake_error))
    +     mock_deferToThread.side_effect = lambda: fail(fake_error)

or:

.. code-block:: udiff

          ...
    -     mock_deferToThread.return_value = fail(Exception(fake_error))
    +     mock_deferToThread.side_effect = always_fail_with(fake_error)

Either way, no Deferred would be created in an error state unless there is
a consumer for it. An *Unhandled error* warning seen after that would be a
legitimate bug.

maas-2.4.2-7034-g2f5deb8b8.orig/docs/development/notifications.rst

Notifications
=============

When you need to inform or warn users, administrators, or a specific user
about something that has happened or is happening, consider using
notifications. These can be created by code running in the region or via
the Web API if you're an administrator.
Tell all users that MAAS is on fire:

  >>> from maasserver.models.notification import Notification
  >>> Notification.objects.create_error_for_users("MAAS is on fire.")

Warn all admins that MAAS is taking on water:

  >>> Notification.objects.create_warning_for_admins(
  ...     "MAAS is taking on water.")

Tell a specific user that they've won the lottery:

  >>> from maasserver.testing.factory import factory
  >>> user = factory.make_User()
  >>> Notification.objects.create_success_for_user(
  ...     "Congratulations {name}! You've won €10 in the lottery!",
  ...     user=user, context={"name": user.username})

Context
-------

A notification's ``context`` is a dict — saved into the database as JSON
— that gets interpolated into the message using new-style
(``str.format``) interpolation, not %-based. What's its purpose?

  >>> Notification.objects.create_warning_for_admins(
  ...     "Disk space is low; only {amount:0.2f} GiB remaining.",
  ...     context={"amount": 1.3}, ident="disk-space-warning")

Later:

  >>> ds_warning = Notification.objects.get(ident="disk-space-warning")
  >>> ds_warning.context = {"amount": 0.8}
  >>> ds_warning.save()

This will update the message, live, in the browser, but will not show it
again to people who have dismissed it already.

This could be done by just changing the message! True, but the context
does give a convenient location for context of all kinds; it does not
have to be consumed by the message:

  >>> ii_warning = Notification.objects.create_warning_for_users(
  ...     "Image import from {url} has failed.",
  ...     ident="import:http://foobar.example.com/",
  ...     context={
  ...         "url": "http://foobar.example.com/",
  ...         "failures": ["2016-02-14 13:58:37"],
  ...     })

Later, after another failure:

  >>> ii_warning = Notification.objects.get(
  ...     ident="import:http://foobar.example.com/")
  >>> ii_warning.context = {
  ...     "url": "http://foobar.example.com/",
  ...     "failures": ["2016-02-14 13:58:37", "2016-02-14 16:58:02"],
  ...     "count": 2,
  ...     "hours": 3,
  ... }
  >>> ii_warning.message = (
  ...     "Image import from {url} has failed {count} times "
  ...     "in the last {hours} hours.")
  >>> ii_warning.save()
  >>> ii_warning.render()
  'Image import from http://foobar.example.com/ has failed 2 times in the last 3 hours.'

Rendering and HTML
------------------

As you can see, rendering the message and context should be done with
the ``render`` method:

  >>> ds_warning.render()
  'Disk space is low; only 0.80 GiB remaining.'

Why? Notifications are primarily for a browser environment and so some
limited amount of HTML is tolerated — it's sanitised by AngularJS in the
UI so nothing fancy will get through. The ``render`` method knows about
this and allows HTML content in the *message* through, but escapes the
*context*:

  >>> ds_warning.message = "Hello {name}!"
  >>> ds_warning.context = {"name": "<script>nasty();</script>"}
  >>> ds_warning.render()
  'Hello &lt;script&gt;nasty();&lt;/script&gt;!'

Creating notifications
----------------------

There are many methods to create notifications.

For a specific user:
^^^^^^^^^^^^^^^^^^^^

  >>> Notification.objects.create_error_for_user("abc", user)
  >>> Notification.objects.create_warning_for_user("abc", user)
  >>> Notification.objects.create_success_for_user("abc", user)
  >>> Notification.objects.create_info_for_user("abc", user)

For all users:
^^^^^^^^^^^^^^

  >>> Notification.objects.create_error_for_users("abc")
  >>> Notification.objects.create_warning_for_users("abc")
  >>> Notification.objects.create_success_for_users("abc")
  >>> Notification.objects.create_info_for_users("abc")

These methods create notifications that are visible to both users and
admins:

  >>> notification = Notification.objects.create_info_for_users("abc")
  >>> notification.users
  True
  >>> notification.admins
  True

For administrators:
^^^^^^^^^^^^^^^^^^^

  >>> Notification.objects.create_error_for_admins("abc")
  >>> Notification.objects.create_warning_for_admins("abc")
  >>> Notification.objects.create_success_for_admins("abc")
  >>> Notification.objects.create_info_for_admins("abc")

For users and **not** administrators:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Using the test factory, or by creating a ``Notification`` directly, it's
possible to create a notification that's only for users and not for
admins:

  >>> notification = factory.make_Notification(users=True, admins=False)
  >>> admin = factory.make_admin()
  >>> notification.is_relevant_to(admin)
  False

This isn't explicitly catered for in the model API. If you find a need
for this use case, adapt ``NotificationManager`` to accommodate it.

Finding notifications
---------------------

Finding notifications that are both:

- relevant to a particular user, and
- not yet dismissed by that user

should be done with ``find_for_user``:

  >>> list(Notification.objects.find_for_user(user))
  [...]

.. -*- mode: rst -*-

RPC HOWTO
=========

MAAS contains an RPC mechanism such that every process in the region is
connected to every process in the cluster (strictly, every rackd
process). It's based on AMP_, specifically `Twisted's implementation`_.

.. _AMP: http://amp-protocol.net/
.. _Twisted's implementation:
   http://twistedmatrix.com/documents/current/core/howto/amp.html

Where do I start?
-----------------

Start in the :py:mod:`provisioningserver.rpc` package. The first two
files to look at are ``cluster.py`` and ``region.py``. These contain the
declarations of what commands are available on clusters and regions
respectively.

A new command could be declared like so::

    from twisted.protocols import amp

    class EatCheez(amp.Command):
        arguments = [
            (b"name", amp.Unicode()),
            (b"origin", amp.Unicode()),
        ]
        response = [
            (b"rating", amp.Integer()),
        ]

It's also possible to map exceptions across the wire using an ``errors``
attribute; see the docs or code for more information.

Note that byte-strings are used for parameter names. Twisted gets quite
fussy about this, so remember to do it.
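A command declaration like ``EatCheez`` is really just data: a typed
schema for the request and for the response. As a rough, Twisted-free
sketch of that idea — ``ARGUMENTS``, ``RESPONSE``, and ``validate`` are
invented for illustration and are not real AMP or MAAS machinery (real
AMP also handles optional fields and wire encoding):

```python
# Stand-in schemas mirroring EatCheez.arguments and EatCheez.response,
# mapping byte-string field names to expected Python types.
ARGUMENTS = {b"name": str, b"origin": str}
RESPONSE = {b"rating": int}

def validate(payload, schema):
    # Roughly what AMP enforces: every declared field is present, no
    # extras, and each value has the declared type.
    assert set(payload) == set(schema), "fields must match the declaration"
    for key, kind in schema.items():
        assert isinstance(payload[key], kind), "field %r has the wrong type" % key
    return payload

validate({b"name": "Stinking Bishop", b"origin": "England"}, ARGUMENTS)
validate({b"rating": 10}, RESPONSE)
```

The same schemas serve both ends of the connection, which is why the
declarations live in shared modules like ``cluster.py`` and
``region.py``.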
Implementing commands
---------------------

To implement a new command on the cluster, see the class
:py:class:`provisioningserver.rpc.clusterservice.Cluster`. A method
decorated with ``@cluster.EatCheez.responder`` is the implementation of
the ``EatCheez`` command. There's no trick to this; they're just plain
old functions. However:

* They only receive named parameters, so the arguments *must* match the
  names used in the command's ``arguments`` declaration.

* They *must* return a dict that matches the command's ``response``
  declaration.

* If the ``response`` declaration is empty they *must* still return an
  empty dict.

To implement a new command on the region, see the class
:py:class:`maasserver.rpc.regionservice.Region`. It works the same.

Making remote calls from the region to the cluster
--------------------------------------------------

There's a convenient API in :py:mod:`maasserver.rpc`:

* :py:func:`~maasserver.rpc.getClientFor` returns a client for calling
  remote functions against the cluster identified by a specified UUID.

* :py:func:`~maasserver.rpc.getAllClients` will return clients for all
  connections to cluster processes.

The clients returned are designed to be used in either the reactor
thread *or* in another thread; when called from the latter, a
:py:class:`crochet.EventualResult` will be returned.

Making remote calls from the cluster to the region
--------------------------------------------------

You need to get a handle to the ``rpc`` service that will have been
started by ``twistd``. Probably the best way to do this is to implement
the behaviour you want as a new service, start it up via the same
mechanism as the ``rpc`` service (see
:py:mod:`provisioningserver.plugin`), and pass over a reference. Then
call :py:func:`~provisioningserver.rpc.getClient`, and you will get a
client for calling into a region process. You're given a random client.
Making multiple calls at the same time from outside the reactor
---------------------------------------------------------------

A utility function -- :py:func:`~maasserver.utils.async.gather` -- helps
here. An example::

    from functools import partial

    from maasserver.rpc import getAllClients
    from maasserver.utils import async

    from twisted.python.failure import Failure

    # Wrap those calls you want to make into no-argument callables, but
    # don't call them yet.
    calls = [
        partial(client, EatCheez)
        for client in getAllClients()
    ]

    # Use gather() to issue all the calls simultaneously and process the
    # results as they come in. Note that responses can be failures too.
    for response in async.gather(calls, timeout=10):
        if isinstance(response, Failure):
            pass  # Do something sensible with this.
        else:
            celebrate_a_cheesy_victory(response)

Responses can be processed as soon as they come in. Any responses not
received within ``timeout`` seconds will be discarded.

Miscellaneous advice
--------------------

* Don't hang onto client objects for long periods of time. It's okay for
  a sequence of operations, but don't keep one around as a global, for
  example; get a new one each time.

* It's a distributed system, and errors are going to be normal, so be
  prepared.

API
---

Controlling the event-loop in region controllers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. automodule:: maasserver.eventloop

RPC declarations for region controllers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. automodule:: provisioningserver.rpc.region

RPC implementation for region controllers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: maasserver.rpc.regionservice.Region

RPC declarations for cluster controllers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. automodule:: provisioningserver.rpc.cluster

RPC implementation for cluster controllers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: provisioningserver.rpc.clusterservice.Cluster

Helpers
^^^^^^^

.. autofunction:: maasserver.rpc.getAllClients
.. autofunction:: maasserver.rpc.getClientFor

.. autofunction:: maasserver.utils.async.gather

.. automethod:: provisioningserver.rpc.clusterservice.ClusterClientService.getClient

Fixing security-related issues in MAAS
======================================

The critical thing to remember is that details of the bug *must not
leak* before a package exists in the Ubuntu archive that fixes it. Only
then can details be unembargoed.

Here's a list, in order, of things we should do.

#. The initiator will be a security bug that is filed. Ensure that the
   bug is flagged as a security vulnerability, and is thus **private**.

#. Notify the `Ubuntu Security Team`_ and subscribe them to the bug.

#. Discuss a fix for the bug in private, **not** public IRC channels!

#. When a fix has been decided, work out where it needs to land. It
   could be only trunk, or all of the released branches.

#. Make an empty branch from each release series and push up to LP. Mark
   the branch as *private security* **before** you then push up any
   revisions to it.

#. It's possible that the branch could be *public security* but better
   safe than sorry! The `Ubuntu Security Team`_ may advise.

#. Do a merge proposal as normal; it should remain private due to the
   private branch.

#. When finished, **do not** land the branch. Notify the `Ubuntu Server
   Team`_ and the `Ubuntu Security Team`_ that there is a patch
   available on the merge proposal (and subscribe the security team to
   it).

#. When they notify that the package(s) has/have been published to the
   archive, the branch(es) can now land on our upstream.

.. _Ubuntu Security Team: https://launchpad.net/~ubuntu-security
.. _Ubuntu Server Team: https://launchpad.net/~ubuntu-server
.. -*- mode: rst -*-

*******
Tagging
*******

Auto tags, or tags with expressions
===================================

These kinds of tags have an associated XPath expression that is
evaluated against hardware information as obtained from running ``lshw``
during node commissioning.

New or updated tag definition
-----------------------------

**Note** that this is somewhat outdated. See `bug 1372544`_ (*Tag
changes will never be evaluated on unconnected clusters*) for more
information.

.. _bug 1372544: https://bugs.launchpad.net/maas/+bug/1372544

When a new tag is created or an existing tag is modified, its expression
must be evaluated for every node known to the region. It's a moderately
computationally intensive process, so the work is spread out to cluster
controllers. Here's how:

#. The region dispatches a ``provisioningserver.tasks.update_node_tags``
   job to each cluster for each tag it wants evaluated. It sends the tag
   name and its definition, an XPath expression. See
   ``maasserver.models.tags.Tag.save`` and ``maasserver.populate_tags``.

#. The task is run (by Celery) on each cluster. This calls
   ``provisioningserver.tags.process_node_tags``.

#. The system IDs for all nodes in that cluster (aka node group) are
   fetched by calling back to the region. See
   ``provisioningserver.tags.get_nodes_for_node_group``.

#. Nodes are then processed in batches. For each batch:

   #. Hardware details are obtained from the region for the batch as a
      whole. See
      ``provisioningserver.tags.get_hardware_details_for_nodes``.

   #. The tag expression is evaluated against each node's hardware
      details. The result of the expression, cast as a boolean,
      determines if the tag applies to this node. See
      ``provisioningserver.tags.process_batch``.

   #. The results are sent back to the region for the batch as a whole.
      See ``provisioningserver.tags.post_updated_nodes``.
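The evaluation step above boils down to "run the XPath expression
against the node's XML and cast the result to boolean". A minimal sketch
using only the standard library — the XML below is invented, not real
``lshw`` output, and ``tag_applies`` is an illustrative stand-in for
``process_batch`` (which uses a fuller XPath engine than ``xml.etree``
provides):

```python
import xml.etree.ElementTree as ET

# Invented stand-in for a node's lshw hardware details.
LSHW_XML = """
<node id="machine">
  <node id="display" class="display">
    <description>VGA compatible controller</description>
  </node>
</node>
"""

def tag_applies(expression, hardware_xml):
    # Evaluate the (limited) XPath expression; a non-empty result set,
    # cast to boolean, means the tag applies to this node.
    tree = ET.fromstring(hardware_xml)
    return bool(tree.findall(expression))

assert tag_applies(".//node[@class='display']", LSHW_XML) is True
assert tag_applies(".//node[@class='network']", LSHW_XML) is False
```

A tag whose expression matches any element in the hardware XML is
applied; one that matches nothing is removed.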
New or updated commissioning result
-----------------------------------

When a new commissioning result comes in containing ``lshw`` or ``lldp``
XML output, every tag with an expression must be evaluated against the
result so that the node is correctly tagged.

To do this, ``VersionIndexHandler.signal`` calls
``populate_tags_for_single_node`` just before saving all the changes.
This happens in the **region**. While it's a computationally expensive
operation, the overhead of spinning this work out to a cluster
controller negates any benefit that might be gained by doing so.

Manual tags, or tags without expressions
========================================

The *manual* part refers to how these tags are associated with nodes.
Instead of being automatically associated as the result of evaluating
the tag expression, these tags must be manually associated with a node.
A manual tag is denoted by the absence of an expression.

.. -*- mode: rst -*-

------------
Transactions
------------

How we roll(back) in MAAS
-------------------------

MAAS runs almost all transactions using `serializable isolation`_. This
is very strict and PostgreSQL can and does reject transactions because
of conflicts with other transactions; these rejections are
business-as-usual.
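MAAS copes with these rejections by retrying, as described next. The
shape of such a retry loop can be sketched like this — a stand-in
illustration only: ``SerializationFailure`` and ``with_retries`` are
invented names, not MAAS's actual implementation in
``maasserver.utils.orm``:

```python
import functools

class SerializationFailure(Exception):
    """Stand-in for the error PostgreSQL raises on a transaction conflict."""

def with_retries(attempts=3):
    # Re-run the wrapped "transaction" after a serialisation failure,
    # giving up after the configured number of attempts.
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except SerializationFailure:
                    if attempt == attempts - 1:
                        raise
        return wrapper
    return decorate

calls = []

@with_retries(attempts=3)
def flaky_transaction():
    # Simulate a transaction that conflicts twice, then commits.
    calls.append(1)
    if len(calls) < 3:
        raise SerializationFailure()
    return "committed"

assert flaky_transaction() == "committed"
assert len(calls) == 3
```

The real machinery also re-issues the transaction with fresh state each
time, which is why retried code must be side-effect free until commit.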
* :py:func:`~maasserver.utils.orm.transactional` is used to decorate functions and methods such that they're run within transactions. This is general purpose, and can be used almost everywhere in the MAAS region controller. Slightly different strategies are employed in each, but they share a lot of their implementations. .. note:: Only MAAS's region — a.k.a. ``regiond`` — connects to the PostgreSQL database. MAAS's clusters — ``rackd`` — are not directly relevant here. Always prefer @transactional ---------------------------- MAAS's :py:func:`~maasserver.utils.orm.transactional` decorator, in almost all situations, should be used to ensure that a piece of code runs within a transaction. It has very similar behaviour to Django's ``transaction.atomic`` — for good reason; it is based around it — so has savepoint-commit/rollback semantics when encountered within an existing transaction. In fact, when called from within a web or API request — i.e. within a transaction — it'll behave *exactly* like ``transaction.atomic``. However, if it's ever called from outside of a transaction, via ``deferToThread`` for example, it'll also ensure that transactions are retried after serialisation failures, that post-commit hooks are run, and that connections are cleared up at the end. Even where we know that code cannot be reached from outside of a transaction, it's a good habit to always use ``transactional`` in preference to ``transaction.atomic``. It's an easier rule to follow. Uses of ``transaction.atomic`` should also be exceptions and thus few in number, rendering them easy to audit. If you find that ``transactional`` doesn't Do The Right Thing for you, treat it first as a bug to be fixed before reaching for another tool. Except that… ------------ * ``transactional`` cannot be used as a context manager — and it can't be adapted into one, because of its retry behaviour — so:: with transaction.atomic(): do_stuff() can be okay. 
*However*, this should typically be inside a function that is decorated with ``transactional``, or in a private function that's only ever called from within a transaction that's being wrapped by ``transactional``. In other words, you would only do this if you want to run a block of code with savepoint-commit/rollback behaviour, and a context manager is more convenient or appropriate than defining a new function decorated with ``transactional``. * ``transaction.atomic`` is also okay as a context manager in **tests**, because you shouldn't run into serialisation failures there, and you may want more control over how post-commit hooks are handled anyway. Don't stress about this too much though: as long as your test inherits ``PostCommitHooksTestMixin`` *as most region tests already do* then your tests will fail if post-commit hooks are left dangling. maas-2.4.2-7034-g2f5deb8b8.orig/docs/enum.rst000066400000000000000000000003401333555657500201740ustar00rootroot00000000000000========== MAAS Enums ========== .. This only lists the enums that are relevant to outside users, e.g. people writing client applications using MAAS's web API. .. autoclass:: maasserver.enum.NODE_STATUS :members: maas-2.4.2-7034-g2f5deb8b8.orig/docs/hacking.rst000077700000000000000000000000001333555657500226432../HACKING.rstustar00rootroot00000000000000maas-2.4.2-7034-g2f5deb8b8.orig/docs/index.rst000066400000000000000000000043741333555657500203520ustar00rootroot00000000000000.. MAAS documentation master file ######################## MAAS: Metal As A Service ######################## This is the documentation for the `MAAS project`_. Metal as a Service -- MAAS -- lets you treat physical servers like virtual machines in the cloud. Rather than having to manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource. What does that mean in practice? 
Tell MAAS about the machines you want it to manage and it will boot
them, check the hardware's okay, and have them waiting for when you need
them. You can then pull nodes up, tear them down and redeploy them at
will; just as you can with virtual machines in the cloud.

When you're ready to deploy a service, MAAS gives `Juju`_ the nodes it
needs to power that service. It's as simple as that: no need to manually
provision, check and, afterwards, clean up. As your needs change, you
can easily scale services up or down. Need more power for your Hadoop
cluster for a few hours? Simply tear down one of your Nova compute nodes
and redeploy it to Hadoop. When you're done, it's just as easy to give
the node back to Nova.

.. _MAAS project: http://maas.io/
.. _Juju: https://juju.ubuntu.com/

MAAS is ideal where you want the flexibility of the cloud, and the
hassle-free power of Juju charms, but you need to deploy to bare metal.

************
Introduction
************

.. toctree::
   :maxdepth: 2

   releases
   changelog

***********************
API / CLI Documentation
***********************

.. toctree::
   :maxdepth: 2

   api
   api_authentication
   maascli
   version

******************
Command-line Tools
******************

.. toctree::
   :maxdepth: 1

   man/maas-region.8
   man/maas.8

***************
Developing MAAS
***************

.. toctree::
   :maxdepth: 2

   development/philosophy
   hacking
   models
   enum
   development/security
   development/building-packages
   development/cluster-registration
   development/cluster-bootstrap
   development/tagging
   development/preseeds
   development/metadata
   development/notifications
   development/rpc
   development/transactions
   development/notes/index

******************
Indices and tables
******************

.. toctree::
   :maxdepth: 2

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _cli:

----------------------
Command Line Interface
----------------------

As well as the web interface, many tasks can be performed by accessing
the MAAS API directly through the ``maas`` command. This section details
how to log in with this tool and perform some common operations.

.. _api-key:

Logging in
----------

Before the API will accept any commands from maas, you must first log
in. To do this, you need an API key for your MAAS account. A key was
generated for you as soon as your account was created, although you can
still generate additional keys if you prefer.

The key can be found in the web user interface, or if you have root
privileges on the region controller, retrieved from the command line.

To obtain the key from the web user interface, log in and click on your
user name in the top right corner of the page, and select 'Preferences'
from the menu which appears.

.. image:: media/maascli-prefs.*

A new page will load...

.. image:: media/maascli-key.*

Your MAAS API keys appear at the top of the preferences form. It's
easiest to just select and copy the key (it's quite long!) and then
paste it into the command line.

To obtain the key through the command line, run this command on the
region controller (it requires root access)::

    $ sudo maas-region apikey --username=my-username

(Substitute your MAAS user name for ``my-username``.)

Once you have your API key, log in with::

    $ maas login <profile-name> <API-URL> [<api-key>]

This command logs you in, and creates a "profile" with the profile name
you have selected. The profile is an easy way of storing the server URL
and your login credentials, and re-using them across command-line
invocations. Think of the profile as a persistent session. You can have
multiple profiles open at the same time, and so as part of the login
command, you assign a unique name to the new profile. Later invocations
of the maas command line will refer to the profile by this name.
For example, you might log in with a command line like::

    $ maas login my-maas http://10.98.0.13/MAAS/api/2.0 AWSCRMzqMNy:jjk...5e1FenoP82Qm5te2

This creates the profile 'my-maas' and registers it with the given key
at the specified API endpoint URL.

If you omit the API key, the command will prompt you for it in the
console. It is also possible to use a hyphen, '-', in place of the API
key. In this case the command will read the API key from standard input,
as a single line, ignoring whitespace.

This mode of input can be useful if you want to read the API key from a
file, or if you wish to avoid including the API key in a command line
where it may be observed by other users on the system.

Specifying an empty string instead of an API key will make the profile
act as an anonymous user. Some calls in the API are accessible without
logging in, but most of them are not.

maas commands
-------------

The ``maas`` command exposes the whole API, so you can do anything you
actually *can* do with MAAS using this command. Unsurprisingly, this
leaves us with a vast number of options, but before we delve into detail
on the specifics, here is a sort of 'cheat-sheet' for common tasks you
might want to do using ``maas``.

* :ref:`Configure DHCP and DNS services <cli-dhcp>`
* :ref:`Commission all enlisted nodes <cli-commission>`
* :ref:`Setting IPMI power parameters for a node <cli-power>`

The main maas commands are:

.. program:: maas

:samp:`list`
    lists the details [name url auth-key] of all the currently logged-in
    profiles.

:samp:`login <profile-name> <API-URL> <api-key>`
    Logs in to the MAAS controller API at the given URL, using the key
    provided, and associates this connection with the given profile
    name.

:samp:`logout <profile-name>`
    Logs out from the given profile, flushing the stored credentials.

:samp:`refresh`
    Refreshes the API descriptions of all the currently logged-in
    profiles. This may become necessary for example when upgrading the
    maas packages, to ensure the command-line options match with the
    API.
:samp:`<profile-name> [command] [options] ...`
    Using the given profile name instructs ``maas`` to direct the
    subsequent commands and options to the relevant MAAS, which for the
    current API are detailed below...

account
^^^^^^^

This command is used for creating and destroying the MAAS authorisation
tokens associated with a profile.

Usage: maas *<profile-name>* account [-d --debug] [-h --help]
create-authorisation-token | delete-authorisation-token
[token_key=\ *<value>*]

.. program:: maas account

:samp:`-d, --debug`
    Displays debug information listing the API responses.

:samp:`-h, --help`
    Display usage information.

:samp:`-k, --insecure`
    Disables the SSL certificate check.

:samp:`create-authorisation-token`
    Creates a new MAAS authorisation token for the current profile which
    can be used to authenticate connections to the API.

:samp:`delete-authorisation-token token_key=<value>`
    Removes the given key from the list of authorisation tokens.

.. boot-images - not useful in user context
.. ^^^^^^^^^^^

.. files - not useful in user context
.. ^^^^^

node
^^^^

API calls which operate on individual nodes. With these commands, the
node is always identified by its "system_id" property - a unique tag
allocated at the time of enlistment. To discover the value of the
system_id, you can use the ``maas nodes list`` command.

Usage: maas node [-h] release | start | stop | delete | read | update
<system_id>

.. program:: maas node

:samp:`-h, --help`
    Display usage information.

:samp:`release <system_id>`
    Releases the node given by *<system_id>*.

:samp:`start <system_id>`
    Powers up the node identified by *<system_id>* (where MAAS has
    information for power management for this node).

:samp:`stop <system_id>`
    Powers off the node identified by *<system_id>* (where MAAS has
    information for power management for this node).

:samp:`delete <system_id>`
    Removes the given node from the MAAS database.

:samp:`read <system_id>`
    Returns all the current known information about the node specified
    by *<system_id>*.

:samp:`update <system_id> [parameters...]`
    Used to change or set specific values for the node.
    The valid parameters are listed below::

        hostname=<value>
            The new hostname for this node.

        architecture=<value>
            Sets the architecture type, where <value> is a string
            containing a valid architecture type, e.g. "i386/generic".

        distro_series=<value>
            Sets the distro series of Ubuntu to use (e.g. "precise").

        power_type=<value>
            Set the given power type on the node (e.g. "ipmi").

        power_parameters_{param1}... =<value>
            Set the given power parameters. Note that the valid options
            for these depend on the power type chosen.

        power_parameters_skip_check 'true' | 'false'
            Whether to sanity check the supplied parameters against this
            node's declared power type. The default is 'false'.

.. _cli-power:

Example: Setting the power parameters for an ipmi-enabled node::

    maas maas node update <system_id> \
        power_type="ipmi" \
        power_parameters_power_address=192.168.22.33 \
        power_parameters_power_user=root \
        power_parameters_power_pass=ubuntu

nodes
^^^^^

Usage: maas nodes [-h] is-registered | list-allocated | acquire | list |
accept | accept-all | new

.. program:: maas nodes

:samp:`-h, --help`
    Display usage information.

:samp:`accept <system_id>`
    Accepts the node referenced by *<system_id>*.

:samp:`accept-all`
    Accepts all currently discovered but not previously accepted nodes.

:samp:`acquire`
    Allocates a node to the profile used to issue the command. Any ready
    node may be allocated.

:samp:`is-registered mac_address=<address>`
    Checks to see whether the specified MAC address is registered to a
    node.

:samp:`list`
    Returns a JSON-formatted object listing all the currently known
    nodes, their system_id, status and other details.

:samp:`list-allocated`
    Returns a JSON-formatted object listing all the currently allocated
    nodes, their system_id, status and other details.

:samp:`new architecture=<value> mac_addresses=<value> [parameters]`
    Creates a new node entry given the provided key=value information
    for the node. A minimum of the MAC address and architecture must be
    provided. Other parameters may also be supplied::

        architecture="<value>" - The architecture of the node, must be
            one of the recognised architecture strings (e.g.
            "i386/generic")
        hostname="<value>" - a name for this node. If not supplied a
            name will be generated.
        mac_addresses="<value>" - The mac address(es) allocated to this
            node.
        power_type="<value>" - the power type of the node (e.g. virsh,
            ipmi)

.. _cli-commission:

Examples: Accept and commission all discovered nodes::

    $ maas maas nodes accept-all

List all known nodes::

    $ maas maas nodes list

Filter the list using specific key/value pairs::

    $ maas maas nodes list architecture="i386/generic"

node-groups
^^^^^^^^^^^

Usage: maas node-groups [-d --debug] [-h --help] [-k --insecure]
register | list | accept | reject

.. program:: maas node-groups

:samp:`-d, --debug`
    Displays debug information listing the API responses.

:samp:`-h, --help`
    Display usage information.

:samp:`-k, --insecure`
    Disables the SSL certificate check.

:samp:`register uuid=<value> name=<value> interfaces=<json_string>`
    Registers a new node group with the given name and uuid.
    The interfaces parameter must be supplied in the form of a JSON
    string comprising the key/value data for the interface to be used,
    for example::

        interface='[{"ip":"192.168.21.5","interface":"eth1", \
            "subnet_mask":"255.255.255.0","broadcast_ip":"192.168.21.255", \
            "router_ip":"192.168.21.1", "ip_range_low":"192.168.21.10", \
            "ip_range_high":"192.168.21.50"}]'

:samp:`list`
    Returns a JSON list of all currently defined node groups.

:samp:`accept <uuid>`
    Accepts a node-group or number of nodegroups indicated by the
    supplied UUID.

:samp:`reject <uuid>`
    Rejects a node-group or number of nodegroups indicated by the
    supplied UUID.

node-group-interface
^^^^^^^^^^^^^^^^^^^^

For managing the interfaces. See also :ref:`node-group-interfaces`.

Usage: maas *<profile-name>* node-group-interface [-d --debug]
[-h --help] [-k --insecure] read | update | delete [parameters...]

.. program:: maas node-group-interface

:samp:`read <uuid> <interface>`
    Returns the current settings for the given UUID and interface.

:samp:`update <uuid> <interface> [parameters]`
    Changes the settings for the interface according to the given
    parameters::

        management=  0 | 1 | 2
            The service to be managed on the interface
            (0 = none, 1 = DHCP, 2 = DHCP and DNS).

        subnet_mask=<value>
            Apply the given dotted decimal value as the subnet mask.

        broadcast_ip=<value>
            Apply the given dotted decimal value as the broadcast IP
            address for this subnet.

        router_ip=<value>
            Apply the given dotted decimal value as the default router
            address for this subnet.

        ip_range_low=<value>
            The lowest value of IP address to allocate via DHCP.

        ip_range_high=<value>
            The highest value of IP address to allocate via DHCP.

:samp:`delete <uuid> <interface>`
    Removes the entry for the given UUID and interface.

.. _cli-dhcp:

Example: Configuring DHCP and DNS.

To enable MAAS to manage DHCP and DNS, it needs to be supplied with the
relevant interface information.
To do this we need to first determine the UUID of the node group
affected::

    $ uuid=$(maas node-groups list | grep uuid | cut -d\" -f4)

Once we have the UUID we can use this to update the
node-group-interface for that nodegroup, and pass it the relevant
interface details::

    $ maas node-group-interface update $uuid eth0 \
        ip_range_high=192.168.123.200 \
        ip_range_low=192.168.123.100 \
        management=2 \
        broadcast_ip=192.168.123.255 \
        router_ip=192.168.123.1

Replacing the example values with those required for this network. The
only non-obvious parameter is 'management', which takes the values 0 (no
management), 1 (manage DHCP) and 2 (manage DHCP and DNS).

.. _node-group-interfaces:

node-group-interfaces
^^^^^^^^^^^^^^^^^^^^^

The node-group-interfaces commands are used for configuring the
management of DHCP and DNS services where these are managed by MAAS.

Usage: maas *<profile-name>* node-group-interfaces [-d --debug]
[-h --help] [-k --insecure] list | new [parameters...]

.. program:: maas node-group-interfaces

:samp:`list